Advanced Materials Research Vols. 846-847

Paper Title | Page

Abstract: 3D laser scanning technology can acquire high-precision point cloud data of a target quickly, efficiently, and accurately, providing the necessary conditions for the development of digital surveying. Taking a subway as an example, this paper elaborates the methods of 3D point cloud data acquisition, data processing, and modeling, and verifies the feasibility of 3D visualization of a subway based on 3D laser scanning.
981
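
The abstract above does not detail its processing pipeline, so the following is only a minimal sketch of one common point cloud processing step, voxel-grid downsampling, in Python with NumPy; the voxel size and the random stand-in data are assumptions, not taken from the paper.

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.25):
    """Thin an (N, 3) point cloud by keeping one centroid per voxel.

    voxel_size (in the cloud's length unit) is an assumed parameter; the
    paper does not state how its subway point cloud was processed.
    """
    # Assign every point to an integer voxel index.
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points that share a voxel and average them.
    _, inverse = np.unique(idx, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    sums = np.zeros((inverse.max() + 1, 3))
    counts = np.zeros(inverse.max() + 1)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

if __name__ == "__main__":
    cloud = np.random.rand(100000, 3) * 10.0   # random stand-in for scanned data
    thinned = voxel_downsample(cloud)
    print(cloud.shape, "->", thinned.shape)
```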
Abstract: Boundary localization is one of the key issues for a reliable iris recognition system. In degraded iris images, frequently occurring conditions such as dominant texture patterns, eyelash or eyelid occlusion, low contrast between iris and sclera, and pupil deviation lead to inaccurate boundary localization. In particular, if the intensity transition from iris to sclera is too smooth, outer boundary localization becomes very difficult. To address this problem, this paper proposes a boundary localization method that introduces a nonlinear gray-level transformation into the outer boundary localization process. Experimental results show that the proposed algorithm improves localization accuracy for degraded iris images compared with classical algorithms.
986
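
The paper's exact nonlinear gray-level transformation is not given in the abstract; as a hedged illustration, the sketch below uses a power-law (gamma) mapping to steepen the iris-to-sclera transition, followed by a crude radial intensity-jump search standing in for the outer boundary localization step. The gamma value, the assumed pupil center, and the radius range are all illustrative choices.

```python
import numpy as np

def nonlinear_gray_transform(img, gamma=0.4):
    """Power-law (gamma) mapping as one example of a nonlinear gray-level
    transformation; gamma=0.4 is only an illustrative choice that pushes
    the bright sclera further from the darker iris."""
    norm = img.astype(np.float64) / 255.0
    return (norm ** gamma * 255.0).astype(np.uint8)

def radial_intensity_boundary(img, cx, cy, r_min, r_max):
    """Crude outer-boundary search: pick the radius with the largest jump in
    mean circular intensity around an assumed pupil centre (cx, cy)."""
    angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    radii = np.arange(r_min, r_max)
    means = [img[np.clip((cy + r * np.sin(angles)).astype(int), 0, img.shape[0] - 1),
                 np.clip((cx + r * np.cos(angles)).astype(int), 0, img.shape[1] - 1)].mean()
             for r in radii]
    return radii[int(np.argmax(np.diff(means)))]

if __name__ == "__main__":
    yy, xx = np.mgrid[0:200, 0:200]
    synthetic = np.where((xx - 100) ** 2 + (yy - 100) ** 2 < 60 ** 2, 80, 200).astype(np.uint8)
    enhanced = nonlinear_gray_transform(synthetic)
    print("estimated outer radius:", radial_intensity_boundary(enhanced, 100, 100, 20, 90))
```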
Abstract: A new impulse noise suppression method based on median filtering with parity extraction is proposed in this paper. The window size of the median filter has an important effect on filtering performance: a larger window suppresses impulse noise effectively but often at the cost of losing signal detail, while a smaller window preserves detail better but degrades noise suppression. The signal is first split by parity extraction, median filtering is then applied to the odd-indexed and even-indexed parts separately, and a new median filtering method that suppresses impulse noise with a short window is thereby obtained. Simulation and experimental results on telemetry data demonstrate the effectiveness of the proposed method.
991
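
The parity-extraction idea lends itself to a short sketch: split the signal into even- and odd-indexed sub-signals, median-filter each with a short window, and interleave the results. The window length and the synthetic telemetry-like test signal below are assumptions; the paper's exact parameters are not stated in the abstract.

```python
import numpy as np
from scipy.signal import medfilt

def parity_median_filter(signal, window=3):
    """Median filtering with parity extraction: filter the even- and
    odd-indexed sub-signals separately with a short window, then
    interleave the results (window=3 is an assumed default)."""
    signal = np.asarray(signal, dtype=float)
    out = np.empty_like(signal)
    out[0::2] = medfilt(signal[0::2], kernel_size=window)
    out[1::2] = medfilt(signal[1::2], kernel_size=window)
    return out

if __name__ == "__main__":
    t = np.linspace(0, 1, 200)
    clean = np.sin(2 * np.pi * 5 * t)
    noisy = clean.copy()
    noisy[np.random.choice(200, 10, replace=False)] += 5.0   # impulse spikes
    filtered = parity_median_filter(noisy)
    print("max abs error after filtering:", np.max(np.abs(filtered - clean)))
```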
Abstract: A large amount of data has been accumulated in our country through long-term investigation and statistics of forestry resources, and finding the relationship between the environment and forest growth in this existing data has become a key problem. In this paper, data mining technology is applied to the planning and design of forestry resources, and the data mining process is studied. Considering the data mining and decision design process for forestry resources, we first study data mining technology, then collect and preprocess the data, select a data mining algorithm, and establish a Bayesian evaluation model, and finally interpret and analyze the resulting forestry decisions. Practice has proved that data mining methods improve the accuracy and reliability of forestry management decision making and provide new theory, methods, and technology for research on forestry growth management.
995
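
The abstract names a Bayesian evaluation model but not its features or data; the sketch below therefore uses a Gaussian naive Bayes classifier from scikit-learn on synthetic environment-like features purely to illustrate how such a model scores forest-growth classes. All feature names, labels, and values are invented for illustration.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: feature meanings (e.g. rainfall, temperature,
# soil index) and the binary "growth class" labels are purely illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                   # environment features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 1 = good growth (made up)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gaussian naive Bayes as a minimal Bayesian evaluation model.
model = GaussianNB().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
print("P(class | first test sample):", model.predict_proba(X_test[:1]))
```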
Abstract: We obtain a scale-related characterization of the original image using the discrete wavelet transform. The boundary information of the image target is fused using the correlation of wavelet coefficients between transform levels, which increases the pixel resolution scale. An inter-scale fusion method is applied to obtain the fine-scale fusion coefficients, which take both the detail and approximation information of the image into account; these fusion coefficients serve as a weighting operator used to establish a boundary energy function. In the halftoning process, each cluster is processed with a weighted least-squares method through energy minimization via the Direct Binary Search algorithm to obtain the halftone image. Simulation results on typical test images further confirm the performance of the new approach.
999
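
The abstract sketches inter-scale fusion of wavelet detail coefficients into a boundary weighting operator; the code below is a hedged approximation of that step using PyWavelets, with a simple product of same-orientation detail magnitudes across two levels as the fusion rule. The fusion rule, the wavelet choice, and the omission of the Direct Binary Search halftoning stage are all simplifications, not the paper's method.

```python
import numpy as np
import pywt

def boundary_weight_map(img, wavelet="haar"):
    """Fuse detail coefficients from two wavelet levels so that pixels near
    strong, scale-consistent edges receive large weights; the result can
    serve as a weighting map for a boundary energy term."""
    img = np.asarray(img, dtype=float)
    coeffs = pywt.wavedec2(img, wavelet, level=2)
    (cH2, cV2, cD2), (cH1, cV1, cD1) = coeffs[1], coeffs[2]
    fused = np.zeros_like(cH1)
    for fine, coarse in ((cH1, cH2), (cV1, cV2), (cD1, cD2)):
        # Upsample the coarse-level magnitude to the fine level and correlate.
        up = np.kron(np.abs(coarse), np.ones((2, 2)))[:fine.shape[0], :fine.shape[1]]
        fused += np.abs(fine) * up
    # Bring the fused map back to pixel resolution and normalise to [0, 1].
    w = np.kron(fused, np.ones((2, 2)))[:img.shape[0], :img.shape[1]]
    return w / (w.max() + 1e-12)

if __name__ == "__main__":
    img = np.zeros((128, 128))
    img[:, 64:] = 255.0                          # a single vertical edge
    w = boundary_weight_map(img)
    print(w.shape, "strongest column:", int(np.argmax(w.sum(axis=0))))
```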
Abstract: An intensity-statistics-based graph cut segmentation algorithm is proposed in this paper to improve the accuracy and adaptability of liver segmentation. The proposed method consists of four steps. First, combining the Otsu algorithm with a cropped liver image, we defined a gray interval as the liver's intensity range. Second, the fuzzy c-means clustering algorithm was applied to compute the average intensity and the variance. Third, we established the cost function from these statistics. Finally, we employed the improved graph cut model to extract the liver parenchyma from a large cross-section liver image. Experimental results show that the proposed method is feasible for liver images with different intensity statistics.
1003
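
As a hedged illustration of the statistics steps described above, the sketch below uses Otsu's threshold (scikit-image) to pick a rough liver gray range, estimates its mean and variance, and turns them into a per-pixel negative log-likelihood of the kind a graph cut data term could consume. Replacing fuzzy c-means with a plain mean/variance estimate and assuming the liver is the brighter class are simplifications, not the paper's method.

```python
import numpy as np
from skimage.filters import threshold_otsu

def liver_intensity_cost(img):
    """Per-pixel cost from intensity statistics: Otsu's threshold picks a
    rough liver gray range (assuming the liver is the brighter class), its
    mean/variance parameterise a Gaussian likelihood, and the negative
    log-likelihood is the kind of data term a graph cut could consume."""
    t = threshold_otsu(img)
    liver_like = img[img >= t]
    mu, var = liver_like.mean(), liver_like.var() + 1e-6
    cost = 0.5 * np.log(2 * np.pi * var) + (img - mu) ** 2 / (2 * var)
    return cost        # low cost = more likely liver parenchyma

if __name__ == "__main__":
    demo = np.clip(np.random.normal(120.0, 15.0, (64, 64)), 0, 255)
    print("cost map shape:", liver_intensity_cost(demo).shape)
```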
Abstract: In metallurgy and food processing, real-time detection of the stereo (three-dimensional) temperature distribution inside a microwave reaction cavity is often necessary. We focus on the stereo reconstruction of the temperature distribution of the microwave reaction chamber using real-time image data captured by an infrared sensor during the heating process. Because the human visual system perceives the stereo depth of a scene through disparity, the key technology in stereo temperature reconstruction is finding corresponding points in the image pair. We employ a block matching algorithm that searches for correspondences by minimizing a cost function defined on a hierarchical, tree-structured pyramid probabilistic graph. The results show that in the back projection of the depth map, the walls, ceiling, and floor of the microwave reactor appear mutually orthogonal, which indicates that the stereo temperature field is well reconstructed.
1007
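
The correspondence search itself can be illustrated with a flat sum-of-absolute-differences block matcher over a rectified image pair, shown below; the paper's hierarchical, tree-structured pyramid cost is not reproduced here, and the block size, disparity range, and synthetic test pair are assumptions.

```python
import numpy as np

def block_match_disparity(left, right, block=7, max_disp=16):
    """Sum-of-absolute-differences block matching over a rectified pair:
    for each left-image block, search right-image blocks shifted by
    0..max_disp-1 pixels and keep the shift with the lowest cost."""
    h, w = left.shape
    half = block // 2
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = np.argmin(costs)
    return disp

if __name__ == "__main__":
    base = np.random.rand(60, 120)
    shifted = np.roll(base, -5, axis=1)          # simulate a 5-pixel disparity
    d = block_match_disparity(base, shifted)
    print("median disparity in the valid region:", np.median(d[10:-10, 30:-10]))
```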
Abstract: The Streamline Inter-relevant Successive Trees (SIRST) structure is highly effective in the field of text search. In this paper, the binary SIRST is extended to a ternary one. Moreover, we propose a method for calculating the K of a K-ary SIRST. Finally, a sliding-window retrieval algorithm is proposed. Experimental results show that the K-ary SIRST is more efficient than the binary one for mass information retrieval, and that the sliding-window search algorithm is more efficient than the normal one.
1011
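
The abstract does not describe the SIRST structure in enough detail to reproduce it, so the sketch below only shows a plain sliding-window co-occurrence search, the kind of window-based retrieval pass such an algorithm can be compared against; the window length and the toy text are assumptions.

```python
def sliding_window_cooccurrence(tokens, query_terms, window=20):
    """Plain sliding-window co-occurrence search: return start offsets of
    every window of `window` tokens that contains all query terms.  This
    is only a baseline-style pass, not the K-ary SIRST structure."""
    query = set(query_terms)
    hits = []
    for start in range(max(1, len(tokens) - window + 1)):
        if query.issubset(tokens[start:start + window]):
            hits.append(start)
    return hits

if __name__ == "__main__":
    text = "the k ary tree makes mass text search faster than the binary tree".split()
    print(sliding_window_cooccurrence(text, ["text", "search"], window=5))
```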
Abstract: Affine-invariant matching is one of the challenging problems in image matching. Although several algorithms (ASIFT, Fair-SURF) have been proposed and achieve impressive performance, all of these methods need to simulate perspective changes and exhaustively test all possible matches, which has high computational complexity (O(N²)). In this paper, we propose a new method that introduces a global descriptor to filter out many unnecessary coarse matches before the matching procedure. In particular, the computational complexity of the matching procedure in our method is reduced to O(N). The experimental results show that the proposed method achieves performance comparable to ASIFT at much lower cost.
1019
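
The abstract does not specify which global descriptor is used; the sketch below uses a normalized gray-level histogram as a placeholder to show the filter-before-match pattern, ranking candidate simulated views by a cheap global distance so that expensive local matching only runs on the closest ones. The descriptor, the distance, and the `keep` count are all assumptions.

```python
import numpy as np

def global_descriptor(img, bins=32):
    """Cheap global descriptor: a normalised gray-level histogram (a
    placeholder; the paper's actual global descriptor is not specified)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256), density=True)
    return hist

def prefilter_views(reference, candidate_views, keep=5):
    """Rank candidate (e.g. affine-simulated) views by global-descriptor
    distance to the reference and keep only the closest ones, so the
    expensive local matching runs on O(N) candidates instead of all pairs."""
    ref = global_descriptor(reference)
    dists = [np.abs(global_descriptor(v) - ref).sum() for v in candidate_views]
    order = np.argsort(dists)[:keep]
    return [candidate_views[i] for i in order]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (64, 64))
    views = [np.clip(ref + rng.integers(-20, 20, ref.shape), 0, 255) for _ in range(20)]
    print(len(prefilter_views(ref, views)), "views kept for local matching")
```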
Abstract: This paper focuses on video distortion caused by packet loss. Considering the relationship between the human visual perception of packet loss and the visual characteristics of the video content, we present a no-reference model for video quality assessment based on a Support Vector Machine (SVM). The SVM feature vector contains the temporal complexity, the spatial complexity, the average number of bits per frame, and the packet loss rate. Temporal complexity, spatial complexity, and the average number of bits per frame represent the visual characteristics of the video content, while the packet loss rate directly reflects the distortion caused by packet loss. Experimental results show that the model is in good agreement with subjective assessment.
1024
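
The abstract names the four features but not its data set, kernel, or hyper-parameters, so the sketch below trains a scikit-learn support vector regressor on synthetic feature vectors only to show how such a no-reference model would be assembled; the feature ranges, the fake subjective scores, and the RBF/C choices are assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Illustrative training data only: feature ranges, the fake subjective
# scores, and the RBF/C settings are assumptions, not values from the paper.
rng = np.random.default_rng(1)
X = np.column_stack([
    rng.uniform(0, 1, 200),       # temporal complexity
    rng.uniform(0, 1, 200),       # spatial complexity
    rng.uniform(1e4, 1e6, 200),   # average number of bits per frame
    rng.uniform(0, 0.1, 200),     # packet loss rate
])
# Fake MOS-like target: quality drops as loss rate and motion increase.
y = 5.0 - 30.0 * X[:, 3] - 0.5 * X[:, 0] + rng.normal(0, 0.1, 200)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, y)
print("predicted quality score:", model.predict([[0.3, 0.5, 2e5, 0.02]]))
```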
