Latest Issue

    Vol. 8, No. 12, 2003
    • Researches on Multimedia Technology in China, 2002

      Vol. 8, Issue 12, Pages: 1361(2003) DOI: 10.11834/jig.2003012493
      Abstract: As one of a series of reports, this paper surveys multimedia research and applications in China in 2002. Since multimedia is a cross-disciplinary research area, papers on multimedia technology are distributed across various journals. We retrieved about 2761 papers published in 9 Chinese journals in 2002, selected 464 of them concerning multimedia technologies and applications, and analyzed them; the classified data were compared with those of 1998, 1999, 2000, and 2001. The data show that researchers in China pay increasing attention to digital watermarking, virtual reality, multimodal interfaces, multimedia data retrieval, quality of service, computer supported cooperative work, GIS, and Digital Earth, which have also been the international trends for quite a long period. We present an overview of the progress of multimedia technology in China in 2002, which will be convenient for researchers looking up references, and helpful for editors compiling journals and for authors contributing papers.
        
      Updated: 2024-05-07
    • Surface Triangulations Based on 3D Arbitrary Point-sets

      Vol. 8, Issue 12, Pages: 1379(2003) DOI: 10.11834/jig.2003012495
      Abstract: Surface triangulation based on 3D arbitrary point sets is widely applied in CAGD/CAD, reverse engineering, and related fields. This paper first reviews the two main approaches to surface triangulation: plane projection and direct triangulation. For the former, Delaunay triangulation is the main subject; for the latter, the algorithm developed by B. K. Choi is described in detail. Several typical algorithms are introduced, together with the data structures built into them. Next, since the final triangulation is determined by the optimality criterion, some well-known criteria are specified and analyzed, and they are thoroughly compared with each other through a worked example. It is pointed out that, in practical engineering, new algorithms with new criteria are needed for triangulating scattered points sampled from complicated surfaces, so as to maintain properties such as smoothness and shape preservation. Finally, the time and space complexities of the various algorithms are briefly discussed, along with research trends in surface triangulation of 3D arbitrary point sets.
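The plane-projection approach reviewed above can be sketched with SciPy's Delaunay triangulation. This is an illustrative sketch, not any of the surveyed algorithms; the function name and sample points are our own, and the projection trick only suits surfaces that are single-valued in z.

```python
import numpy as np
from scipy.spatial import Delaunay

def project_and_triangulate(points_3d):
    """Plane-projection triangulation: drop the z coordinate, run a 2-D
    Delaunay triangulation, and reuse its triangles for the 3-D points.
    Suitable only for surfaces that are height fields over the xy-plane."""
    pts2d = np.asarray(points_3d)[:, :2]   # orthographic projection onto xy
    return Delaunay(pts2d).simplices       # each row indexes one triangle

# Hypothetical sample: four corners of a patch plus an interior point
pts = np.array([[0, 0, 0.0], [1, 0, 0.5],
                [0, 1, 0.5], [1, 1, 1.2],
                [0.4, 0.5, 0.4]])
tris = project_and_triangulate(pts)
```

Any triangulation of this configuration connects the interior point to the four corners, giving four triangles.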
        
    • Vol. 8, Issue 12, Pages: 1389(2003) DOI: 10.11834/jig.2003012496
      Abstract: A geometrically based treatment-planning optimization method for gamma knife stereotactic surgery was developed to find the best number of shots, shot locations, and collimator sizes. The problem is similar to filling an arbitrary 3D object with spheres: small spheres fill the sharp corners and large spheres fill the more open regions, so that the target is fully covered with the minimum number of spheres. In our approach, a distance map is first generated, and the end points of the medial axis of the object are detected directly from it. One end point is chosen as the first shot position, and the sphere centered at that shot is removed from the target volume. The distance map of the remainder is then regenerated, the cross point closest to the previous shot is chosen as the current shot, and its sphere is removed from the target; this process repeats until the maximum distance value of the remaining target is less than 80% of half the minimum collimator size (4 mm). The above steps are repeated for every end point, yielding a binary tree per starting shot that encodes all possible solutions. The path over all binary trees that maximizes our objective is chosen as the initial optimization solution; the objective considers the coverage of the tumor, the coverage of normal tissue, and the sensitivity of the normal tissue covered by each shot. Experimental results show that the approach is more effective and less time-consuming than existing methods.
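The sphere-filling idea can be sketched as a greedy loop over a distance map. This is a simplified illustration under our own assumptions: the paper seeds shots at medial-axis end points and keeps binary trees of candidate solutions, whereas this sketch just takes the global distance-map maximum each round.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def greedy_sphere_fill(mask, min_radius):
    """Greedy sketch of the sphere-packing idea: repeatedly take the point
    deepest inside the remaining target (the distance-map maximum), carve out
    the inscribed ball there, and stop when no ball of min_radius fits.
    Illustrative only; not the paper's medial-axis-seeded search."""
    mask = np.asarray(mask, dtype=bool).copy()
    shots = []
    while True:
        dist = distance_transform_edt(mask)   # depth of each voxel in target
        r = dist.max()
        if r < min_radius:
            break
        center = np.unravel_index(np.argmax(dist), mask.shape)
        shots.append((center, r))
        grids = np.ogrid[tuple(slice(0, s) for s in mask.shape)]
        d2 = sum((g - c) ** 2 for g, c in zip(grids, center))
        mask &= d2 > r * r                    # remove the covered ball
    return shots

# A 2-D disc of radius 10 is covered by a single central "shot"
yy, xx = np.ogrid[:25, :25]
disc = (yy - 12) ** 2 + (xx - 12) ** 2 <= 100
shots = greedy_sphere_fill(disc, min_radius=3)
```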
        
    • Vol. 8, Issue 12, Pages: 1395(2003) DOI: 10.11834/jig.2003012497
      Abstract: Segmentation of tongue images is a preliminary step in building a system for automatic tongue diagnosis in Traditional Chinese Medicine (TCM): only when the tongue image is well segmented can the subsequent stages proceed effectively, so the significance of this step is obvious. This paper reviews several segmentation methods, especially the split-and-merge algorithm, and then proposes a new modified split-and-merge algorithm based on the traditional one. The new method has advantages in several respects, such as the consistency criterion, speed (its time complexity is O(n), while that of the traditional methods is O(n(n+1)/2)), and intuitive experimental results. Comparisons between the new method and the traditional methods clearly show that the modified split-and-merge algorithm outperforms them. Finally, the paper discusses remaining deficiencies and offers some suggestions. Overall, the experimental results are satisfactory.
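The split phase of the split-and-merge scheme can be sketched as a recursive quadtree subdivision with an intensity-range homogeneity test. This is a minimal illustration under our own assumptions (square, power-of-two image); the paper's improved merge step and its consistency criterion are not reproduced here.

```python
import numpy as np

def split_regions(img, thresh, x=0, y=0, size=None):
    """Recursive split phase of split-and-merge: a block is kept whole when its
    intensity range is within `thresh` (the homogeneity predicate); otherwise
    it is divided into four quadrants. Returns (x, y, size) leaf blocks."""
    if size is None:
        size = img.shape[0]            # assume a square, power-of-two image
    block = img[y:y + size, x:x + size]
    if size == 1 or block.max() - block.min() <= thresh:
        return [(x, y, size)]          # homogeneous: stop splitting
    h = size // 2
    leaves = []
    for dy in (0, h):
        for dx in (0, h):
            leaves += split_regions(img, thresh, x + dx, y + dy, h)
    return leaves

# A 4x4 image: uniform everywhere except a bright top-right quadrant
img = np.zeros((4, 4), dtype=int)
img[:2, 2:] = 100
leaves = split_regions(img, thresh=10)   # splits once into four 2x2 leaves
```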
        
    • Texture and Non-texture Images Retrieval Based on Wavelet Fuzzy Clustering

      Vol. 8, Issue 12, Pages: 1400(2003) DOI: 10.11834/jig.2003012498
      Abstract: Content-based image retrieval has been a research focus in recent years. This paper proposes a retrieval algorithm for both texture and non-texture images. First, a wavelet transform is applied to each image; second, fuzzy clustering is performed in the LL subband according to color and texture features; third, region segmentation is used to judge whether an image is texture or non-texture, and different features are extracted accordingly: global energy features for texture images and local energy features for non-texture images. Then, using different similarity criteria, the similarity of texture and non-texture images is computed. For non-texture images, region similarity and the relations between the regions of two images must be computed first, and image similarity is derived from them; for texture images, image similarity is obtained directly from the global energy features. Tests confirm that the algorithm achieves good classification and retrieval performance.
      Keywords: computer image processing; content-based image retrieval; wavelet transform; fuzzy clustering; semantic classification; similarity
    • ECG Signal Compression Method Based on 2-D DCT and Subsection Encoding

      Vol. 8, Issue 12, Pages: 1406(2003) DOI: 10.11834/jig.2003012499
      Abstract: This paper proposes an ECG data compression method based on the 2-D DCT, which exploits the fact that ECG signals have two types of redundancy: between adjacent samples and between adjacent heartbeats. Data realignment and subsection encoding are further proposed to increase the compression rate. When the percent root-mean-square difference (PRD) is about 4.5, the method achieves compression ratios of 10:1 to 20:1 with a computational cost of 2.75 multiplications and 7.25 additions per sample, in experiments on the MIT-BIH arrhythmia database. The method roughly doubles the performance of H. Lee's 2-D DCT method, and its compression ratio is almost the same as that of the discrete wavelet methods in M. Hilton's and Z. Lu's papers, but with much less computation. The comparison indicates that this is a low-complexity, high-compression-rate method among current ECG data compression methods.
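The 2-D DCT idea can be sketched by stacking aligned heartbeats as rows and keeping only the largest transform coefficients. This is a hedged illustration with our own synthetic data and a simple magnitude threshold; the paper's realignment and subsection encoding steps are not reproduced.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_beats(beat_matrix, keep_ratio=0.1):
    """Sketch of the 2-D DCT idea: each row holds one aligned heartbeat, so the
    transform compacts both sample-to-sample and beat-to-beat redundancy.
    We simply keep the largest `keep_ratio` fraction of coefficients."""
    coeffs = dctn(beat_matrix, norm='ortho')
    k = max(1, int(coeffs.size * keep_ratio))
    cutoff = np.sort(np.abs(coeffs).ravel())[-k]   # k-th largest magnitude
    coeffs[np.abs(coeffs) < cutoff] = 0.0          # discard small coefficients
    return idctn(coeffs, norm='ortho')

# Hypothetical signal: 8 identical 64-sample heartbeats
beat = np.sin(np.linspace(0, 2 * np.pi, 64))
beats = np.tile(beat, (8, 1))
recon = compress_beats(beats, keep_ratio=0.05)     # keep ~5% of coefficients
```

Because the rows are identical, the column-direction DCT packs all beat-to-beat energy into one row, so a small fraction of coefficients reconstructs the signal well.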
        
    • Image Registration Based on Hausdorff Distance

      Vol. 8, Issue 12, Pages: 1412(2003) DOI: 10.11834/jig.2003012500
      Abstract: Image registration is an important step in image fusion. In this paper, a new automatic image registration method is presented. First, a small number of feature points are extracted from both images using a Gabor wavelet feature detector. Then these feature points are matched, and the affine transformation between the two images is obtained through a matching technique based on the Hausdorff distance. We use feature points instead of object edges to search for the affine transformation, so the computational load is greatly reduced. At the same time, because the Hausdorff distance is a measure defined between two point sets and does not require an explicit point correspondence between images, it tolerates errors introduced by outlier points (noise) as well as missing points. Consequently, this registration method can be applied to images with large misalignment. Experiments with synthetic and real images show that the algorithm is efficient.
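The Hausdorff measure at the core of the matching step can be sketched directly; the Gabor feature detection and the affine-transformation search themselves are omitted, and the sample point sets are our own.

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets: no point-to-point
    correspondence is needed — for each point of one set take the distance to
    the nearest point of the other set, then keep the worst case."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
    return max(D.min(axis=1).max(),    # directed h(A, B)
               D.min(axis=0).max())    # directed h(B, A)

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 3.0]])
d = hausdorff(A, B)   # B's extra point (0, 3) is 3 away from A
```

Note how the unmatched point of B dominates the result without any explicit pairing of points, which is exactly what makes the measure robust to outliers and omissions.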
        
    • An Efficient Random Algorithm for Lines Detection

      Vol. 8, Issue 12, Pages: 1418(2003) DOI: 10.11834/jig.2003012501
      Abstract: Detecting lines in a digital image is very important in computer vision. HT-based methods quantize the parameter space and therefore require large amounts of computation and memory. The Randomized Hough Transform (RHT) randomly selects two pixels from an edge image to solve for the parameters of a line, and the corresponding mapped point in parameter space is collected by voting in an accumulator array. In this paper, an efficient randomized algorithm for detecting lines (RLD) is presented. In RLD, we first randomly select three edge pixels from an edge image and apply a distance criterion to determine whether they suggest a possible line; after finding a possible line, we apply an evidence-collecting process to determine whether the line is genuine. Experiments demonstrate that the proposed algorithm is valid.
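The three-pixel sampling and evidence-collecting steps can be sketched as follows. The parameter values (`trials`, `tol`, `min_votes`) and the test data are our own illustrative choices, not the paper's.

```python
import numpy as np

def point_line_dist(p, a, b):
    """Perpendicular distance from point p to the infinite line through a, b."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    return abs((bx - ax) * (py - ay) - (by - ay) * (px - ax)) / np.hypot(bx - ax, by - ay)

def rld(edge_pts, trials=500, tol=1.0, min_votes=5, seed=0):
    """Sketch of the randomized idea: sample three edge pixels; if the third
    lies within `tol` of the line through the other two, hypothesize a line
    and confirm it by counting supporting pixels (evidence collection)."""
    rng = np.random.default_rng(seed)
    pts = [tuple(map(float, p)) for p in edge_pts]
    for _ in range(trials):
        i, j, k = rng.choice(len(pts), size=3, replace=False)
        a, b, c = pts[i], pts[j], pts[k]
        if point_line_dist(c, a, b) > tol:
            continue                      # samples not collinear: no candidate
        votes = sum(point_line_dist(p, a, b) <= tol for p in pts)
        if votes >= min_votes:
            return a, b                   # two points defining the detected line
    return None

# 20 collinear points on y = 2x plus three off-line outliers
found = rld([(x, 2 * x) for x in range(20)] + [(3, 50), (15, 0), (7, 90)])
```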
        
    • Vol. 8, Issue 12, Pages: 1422(2003) DOI: 10.11834/jig.2003012502
      Abstract: Due to internal and external factors of imaging systems, images obtained by sensors are unavoidably degraded. As an effective technique for enhancing the resolution and quality of images, super resolution has become a hot topic and plays an important role in numerous image processing and communication applications, such as HDTV, target recognition, and medical image processing. The main purpose of this paper is to present a feasible super-resolution method to improve image resolution and quality. Many approaches have been developed, including interpolation, which is the simplest and most direct super-resolution method. Some classical interpolation methods are introduced and briefly analyzed in the first part of this paper, and the wavelet transform is then described briefly in preparation for a new interpolation scheme. Considering the advantages of the Bézier surface method, which is precise and can be implemented quickly, a novel method combining the wavelet transform with Bézier surfaces is described. This method has at least two merits: it overcomes the shortcoming of traditional methods, which usually degrade image details, and it is not complex, thanks to the fast implementation of Bézier surface interpolation. The experimental results show the computational feasibility of this method.
      Keywords: computer image processing; super resolution; image interpolation; wavelet transform; Bézier surface
    • The Design & Implement of Resampling in Ray Casting Algorithm

      Vol. 8, Issue 12, Pages: 1427(2003) DOI: 10.11834/jig.2003012504
      Abstract: Volume rendering has wide applications in areas such as medical imaging and scientific visualization, but volume rendering algorithms still cannot be put into practical medical imaging use, simply because the large amount of computation involved prevents them from meeting the speed requirements of interactive operation. At present, many researchers are working hard on the foundations of acceleration algorithms. To address the rendering-speed problem in 3D visualization of medical images, this paper analyzes the theory of resampling 3D regular data sets and describes its realization in detail. Since the shapes of some objects to be reconstructed are close to spheres, this paper presents a new resampling method that adopts a round bounding box adapted to the object. 3D visualization results for human heads and eyes are given. The experiments show that the method effectively reduces computing time while making the calculation of intersection points simpler.
        
    • Parallel Rendering Based on Optical-mapping Virtual Objects

      Vol. 8, Issue 12, Pages: 1432(2003) DOI: 10.11834/jig.2003012505
      Abstract: Current rendering algorithms for global illumination models can generate photo-realistic images, but their high computational cost discourages applications such as walk-throughs and virtual reality. In particular, rendering specular reflections and refractions has always been a problem in the computer graphics community. In this paper, virtual optical-mapping objects are introduced, and a unified method for generating virtual objects for reflections and refractions is proposed. The virtual objects are treated in the same way as real objects in the scene, so reflection and refraction images can be rendered by graphics hardware. A cluster of connected PCs is used as the parallel platform, and task scheduling and load balancing are discussed. Virtual objects are generated by the CPU and rendered by graphics hardware, so the method simultaneously exploits both the CPUs and the graphics hardware on the cluster nodes to achieve high-performance rendering. Finally, examples demonstrate that this method is powerful for interactive applications such as real-time walk-throughs of buildings, animation, and virtual reality.
        
    • Vol. 8, Issue 12, Pages: 1438(2003) DOI: 10.11834/jig.2003012507
      Abstract: Rendering speed is a key issue in volume rendering. To accelerate the rendering process, we present a novel image-acceleration approach based on Intel SIMD and segmentation technologies, which obtains significant speedup without degrading image quality. Applying Intel SIMD techniques alone, we obtain rendering 2 to 5 times faster than brute-force ray casting. Combining the SIMD techniques with threshold segmentation to skip large empty sample regions improves the rendering speed further. Because thresholding a large data set takes very little time, the algorithm can quickly display the rendered image as soon as the threshold values are changed. Experiments on a single-processor P4/1.6 GHz PC show more than a 10-fold speedup over brute-force ray casting, achieving about 1 to 3 frames per second at an image size of 512×512. The advantages of our algorithm are that rendering can be greatly accelerated without any special-purpose hardware or time-consuming preprocessing, and image quality remains very high.
      Keywords: computer image processing; single instruction multiple data (SIMD); MMX/SSE/SSE2; visualization; ray casting; volume rendering
    • Vol. 8, Issue 12, Pages: 1444(2003) DOI: 10.11834/jig.2003012508
      Abstract: In complex virtual environments with massive numbers of moving objects, collision detection can become the bottleneck of system performance. To improve computational efficiency in such cases, a fast N-body collision detection algorithm, USSCD, based on uniform spatial subdivision is proposed. The algorithm reduces computational complexity with a hybrid scheme: first, the object space is uniformly subdivided into a series of voxels; then collision detection based on sorting-based sweep and prune is performed within each voxel. Based on the distribution density of objects, an optimal method is proposed to compute the voxel size; for a particular class of collision detection algorithms, this method leads to minimum computational complexity. USSCD was implemented and compared with I-COLLIDE through a series of tests. The results show that USSCD is superior in performance when massive numbers of objects are uniformly distributed; moreover, its performance is more stable than that of I-COLLIDE under varying correlation between objects.
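The uniform-subdivision broad phase can be sketched as follows (2-D for brevity, although the paper works in 3-D). The per-voxel sweep-and-prune pass and the optimal voxel-size formula are not reproduced; this sketch runs plain pair tests within each cell.

```python
from collections import defaultdict
from itertools import combinations

def broad_phase(aabbs, cell):
    """Uniform-subdivision broad phase: hash each axis-aligned box into every
    grid cell it overlaps, then run overlap tests only among boxes that share
    a cell, instead of over all O(n^2) pairs."""
    grid = defaultdict(list)
    for idx, (lo, hi) in enumerate(aabbs):
        xs = range(int(lo[0] // cell), int(hi[0] // cell) + 1)
        ys = range(int(lo[1] // cell), int(hi[1] // cell) + 1)
        for cx in xs:
            for cy in ys:
                grid[(cx, cy)].append(idx)       # register box in this voxel
    pairs = set()
    for members in grid.values():
        for i, j in combinations(members, 2):
            (alo, ahi), (blo, bhi) = aabbs[i], aabbs[j]
            if all(alo[k] <= bhi[k] and blo[k] <= ahi[k] for k in range(2)):
                pairs.add((min(i, j), max(i, j)))  # exact AABB overlap test
    return pairs

boxes = [((0, 0), (1, 1)), ((0.5, 0.5), (1.5, 1.5)), ((5, 5), (6, 6))]
hits = broad_phase(boxes, cell=2.0)              # only boxes 0 and 1 touch
```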
        
    • Performance Analysis of H.26L Video Coding Algorithm

      Vol. 8, Issue 12, Pages: 1450(2003) DOI: 10.11834/jig.2003012510
      Abstract: H.26L is the next-generation video coding standard; it aims at higher coding efficiency and is being developed by ITU-T and MPEG as a long-term standard. It adopts several sophisticated techniques, such as spatial prediction in intra coding, integer cosine transform (ICT), adaptive-block-size motion compensation, adaptive motion accuracy (AMA), multiple-reference-frame prediction, universal variable-length coding, and context-adaptive binary arithmetic coding (CABAC). To estimate each method's contribution to overall performance, detailed simulations and analyses are provided in this paper. Simulation results show that CABAC provides a fairly consistent improvement in coding efficiency of between 5% and 10%. The coding gains achievable with multiple-reference-frame prediction depend highly on source content. Eighth-pixel motion estimation accuracy is found to be beneficial only at high resolutions and high bit rates, for content with high spatial detail. Finally, the use of different block sizes also provides a consistent improvement, averaging 16% bit savings when all block types are used versus the 16×16 mode only; using 8×8 and larger blocks captures most of this benefit, although smaller blocks become more useful as the bit rate increases. These results will help further improvement and optimization of the algorithm.
        
    • Vol. 8, Issue 12, Pages: 1457(2003) DOI: 10.11834/jig.2003012511
      Abstract: In error-prone environments such as wireless channels, bit errors and/or packet loss are unavoidable, so error concealment is becoming an important and necessary decoder technology. Since most error concealment schemes rely on neighboring spatial information, the probability that all surrounding macroblocks are lost must be reduced. To address this problem, this paper studies the limitations of the raster scan order adopted by traditional video packetization algorithms, and then considers scattered macroblocks and a helix scan order. Combining these two schemes, a new video packetization algorithm is proposed: macroblocks are assigned to different packets following a special scatter pattern, and are then packetized in helix scan order starting from the central visual area. Simulation results show that this algorithm not only makes compressed video data more robust, but also makes the video stream scalable.
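The centre-out scan and the scatter assignment can be sketched as follows. This is our own illustration of the two ideas: the ring-plus-angle ordering stands in for the paper's helix scan, whose exact traversal may differ, and a simple round-robin stands in for its scatter pattern.

```python
import math

def helix_scan(w, h):
    """Centre-out ordering of a w x h macroblock grid: blocks are sorted by
    ring distance from the visual centre, and by angle within each ring."""
    cx, cy = (w - 1) / 2, (h - 1) / 2
    def ring_key(mb):
        x, y = mb
        return (max(abs(x - cx), abs(y - cy)), math.atan2(y - cy, x - cx))
    return sorted(((x, y) for y in range(h) for x in range(w)), key=ring_key)

def scatter_packets(order, n_packets):
    """Round-robin scattering: consecutive blocks of the scan land in different
    packets, so a lost packet leaves each missing block with intact neighbours."""
    return [order[i::n_packets] for i in range(n_packets)]

order = helix_scan(4, 4)              # starts at the four central blocks
packets = scatter_packets(order, 4)   # 4 packets of 4 macroblocks each
```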
        
    • The Implementing of Browsing Key-frames of Video Based on DVB

      Vol. 8, Issue 12, Pages: 1462(2003) DOI: 10.11834/jig.2003012513
      Abstract: With the development of computer and TV technology, the amount of digital video programming is growing rapidly. To find out the content of a video program quickly, a system for browsing video key-frames based on DVB (digital video broadcasting) is presented in this paper. First, video sequences are segmented into shots directly from parameters available in MPEG compressed video, such as DCT coefficients, motion vectors, and macroblock types. Second, the first I-frame of each shot is selected as the key-frame, and the DC image of that I-frame is reconstructed. Finally, based on the DVB standard, the SI tables are extended to define a data structure for encapsulating key-frames, so that key-frames of video in TV stations can be extracted and transferred. The system diagram and a real example of a rapid-browsing prototype based on DC images of key-frames are given at the end of the paper. Because the method uses parameters already present in MPEG compressed video and reduces decompression cost, it features low computing cost and fast browsing.
        
    • Vol. 8, Issue 12, Pages: 1467(2003) DOI: 10.11834/jig.2003012514
      Abstract: In an AFIS (Automated Fingerprint Identification System), the gray-scale fingerprint image is transformed into a thinned binary image containing a mass of false features that affect subsequent classification and verification and reduce the identification rate of the system. In this paper, a new fast ridge-tracing algorithm, 8-neighbour coding ridge tracing, is first proposed. Then a post-processing algorithm is presented that eliminates false features based on local structural information and the feature attributes obtained by the tracing algorithm. Pseudo-structures in fingerprints, such as short lines, broken ridges, spurs, forks, isolated islands, bridges, interlinked islands, and triangles, can be identified and eliminated by this approach. The validity of the method is confirmed by the experiments conducted in the paper.
        
    • A New Algorithm of Line Clipping for Convex Polygons

      Vol. 8, Issue 12, Pages: 1475(2003) DOI: 10.11834/jig.2003012515
      Abstract: A new line-clipping algorithm for convex polygons with n edges is proposed. Compared with the Cyrus-Beck algorithm, when n is sufficiently large the new algorithm uses one third as many multiplications, and it uses only 4 divisions for any convex polygon. Hence, the new algorithm is faster than the Cyrus-Beck algorithm.
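For reference, the Cyrus-Beck baseline that the paper improves on can be sketched as follows: a minimal 2-D implementation, assuming the convex polygon is given in counter-clockwise order. The paper's faster variant with fewer multiplications is not reproduced here.

```python
def cyrus_beck(p0, p1, poly):
    """Cyrus-Beck clipping of segment p0-p1 against a convex polygon given in
    counter-clockwise order. Returns the clipped segment, or None if the
    segment lies entirely outside."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    t_in, t_out = 0.0, 1.0
    n = len(poly)
    for i in range(n):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % n]
        # inward normal of edge (a, b) for a CCW polygon
        nx, ny = -(by - ay), bx - ax
        num = nx * (p0[0] - ax) + ny * (p0[1] - ay)
        den = nx * dx + ny * dy
        if den == 0:
            if num < 0:
                return None          # parallel to edge and outside it
            continue
        t = -num / den
        if den > 0:
            t_in = max(t_in, t)      # entering this half-plane
        else:
            t_out = min(t_out, t)    # leaving this half-plane
    if t_in > t_out:
        return None                  # no part of the segment is inside
    lerp = lambda t: (p0[0] + t * dx, p0[1] + t * dy)
    return lerp(t_in), lerp(t_out)

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
clipped = cyrus_beck((-1, 1), (3, 1), square)   # horizontal line through the square
```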
        