Applied Mechanics and Materials Vols. 701-702
Paper Title Page
Abstract: To improve the battlefield-damage diagnosis ability of maintenance personnel, a method of constructing a Bayesian network for weapon battlefield damage diagnosis is investigated. When a weapon is hit by shells, the damage correlations among its damaged parts are analyzed; these correlations are the basis for diagnosis with a Bayesian network. The diagnostic Bayesian network is constructed using the K2 algorithm. Since the variable ordering is the key factor in constructing such a network, a statistical method for determining the variable ordering is presented based on weapon battlefield simulation technology.
98
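The abstract names the K2 algorithm without detail. As background only, here is a minimal sketch of the Cooper–Herskovits K2 score that the algorithm greedily maximizes when choosing parents for each node in the given variable ordering; all names and the data layout are my own assumptions, not the paper's implementation:

```python
from math import lgamma
from collections import defaultdict

def k2_log_score(data, child, parents, arity):
    """Log of the Cooper-Herskovits K2 score for `child` given `parents`.
    `data` is a list of dicts mapping variable name -> discrete value;
    `arity[v]` is the number of states of variable v. The K2 algorithm
    walks the variable ordering and, for each node, keeps adding the
    parent that most increases this score until no addition helps."""
    r = arity[child]
    counts = defaultdict(lambda: defaultdict(int))
    for row in data:
        pa_config = tuple(row[p] for p in parents)
        counts[pa_config][row[child]] += 1
    score = 0.0
    for child_counts in counts.values():
        n_ij = sum(child_counts.values())
        score += lgamma(r) - lgamma(n_ij + r)   # log (r-1)! / (N_ij + r - 1)!
        for n_ijk in child_counts.values():
            score += lgamma(n_ijk + 1)          # log N_ijk!
    return score
```

On data where a candidate parent really predicts the child, the score with that parent exceeds the parentless score, which is what drives K2's greedy parent selection.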
Abstract: In this paper, using the Euler-Maclaurin expansion, we establish an inequality for a weight coefficient. Using this inequality, we derive a reverse Hilbert-type inequality, and as an application an equivalent form is obtained.
106
Abstract: This paper reports a comparative study of feature selection algorithms on a hyperlipidemia data set. Three feature selection methods were evaluated: document frequency (DF), information gain (IG), and the χ2 statistic (CHI). The classification systems represent each document as a vector and compute term weights with tfidfie (term frequency, inverted document frequency, and inverted entropy). To compare the effectiveness of feature selection, we used three classification methods: Naïve Bayes (NB), k-Nearest Neighbor (kNN), and Support Vector Machines (SVM). The experimental results show that IG and CHI significantly outperform DF, and that SVM and NB are more effective than kNN under the macro-averaged F1 measure. DF is suitable for large-scale text classification tasks.
110
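The abstract does not spell out the CHI criterion it evaluates. As a reference point, a minimal sketch of the standard χ2 statistic for a 2x2 term/class contingency table, as commonly used for text feature selection (function and argument names are my own):

```python
def chi_square(a, b, c, d):
    """Chi-square statistic for a 2x2 term/class contingency table:
    a = docs in the class containing the term,
    b = docs outside the class containing the term,
    c = docs in the class without the term,
    d = docs outside the class without the term.
    Higher values mean stronger term/class dependence."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + c) * (b + d) * (a + b) * (c + d)
    return num / den if den else 0.0
```

When the term and the class are independent (a*d == b*c) the statistic is zero, so ranking terms by this value and keeping the top-k is the CHI selection scheme.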
Abstract: In this paper, we propose a scheme to reconstruct a synchronous scrambler placed after a direct-sequence spreader. The scheme is first developed under the assumption that the spreading code is known and is then extended to the unknown case. The reconstruction algorithm exploits the properties of the spreading code used to spread the input information sequence; a derived property helps detect multiples of the feedback polynomial. Simulation results confirm the validity of the proposed algorithm.
114
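For readers unfamiliar with the object being reconstructed: a synchronous scrambler XORs the data with the output of a linear feedback shift register (LFSR) defined by the feedback polynomial mentioned above. A minimal Fibonacci-LFSR sketch follows; it illustrates the generator only, not the paper's reconstruction method, and the tap convention is my own assumption:

```python
def lfsr_stream(taps, state, n):
    """Generate n bits from a Fibonacci LFSR. `taps` are the indices of
    the register cells XORed to form the feedback bit (the nonzero terms
    of the feedback polynomial); `state` is the initial fill. Each step
    outputs the rightmost bit and shifts the feedback bit in at the left."""
    state = list(state)
    out = []
    for _ in range(n):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out
```

With a primitive feedback polynomial, a width-m register produces a maximal-length sequence of period 2^m - 1, which is the structure the reconstruction algorithm exploits.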
Abstract: This article applies minimization of K-L information to hypothesis testing, and the convergence of the error probabilities of the likelihood ratio test is discussed in depth. The results show that, under minimization of K-L information, the error probabilities of the likelihood ratio test are power-series convergent.
119
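The K-L information referred to above is the Kullback–Leibler divergence between two distributions. As background, a minimal sketch for the discrete case (names are my own):

```python
from math import log

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in nats for two discrete
    distributions given as aligned probability lists. Terms with p_i = 0
    contribute nothing by the usual 0*log(0) = 0 convention."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

The divergence is zero exactly when the two distributions coincide and grows as they separate, which is what makes minimizing it a natural criterion in hypothesis testing.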
Abstract: In this paper, we evaluate several typical car-following models against measurement data from real traffic in Xi'an city, presenting a visual comparison between measured and simulated vehicle velocities. We find that the FVDM depicts the real velocity changes accurately. In contrast, some popular models such as the OVM, GFM, and OV-DM are not well suited to simulating real traffic in Xi'an, although they can reproduce some traffic phenomena under certain conditions.
124
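The abstract does not restate the FVDM equation. For orientation, a sketch of the model's acceleration rule, a(t) = kappa*(V_opt(gap) - v) + lambda*dv, with the usual tanh-shaped optimal-velocity function; the default parameter values below are commonly quoted calibrations, not the values fitted to the Xi'an data:

```python
from math import tanh

def fvdm_accel(gap, v, dv, kappa=0.41, lam=0.5,
               v1=6.75, v2=7.91, c1=0.13, c2=1.57, length=5.0):
    """Full Velocity Difference Model (FVDM) acceleration for a follower:
    `gap` is the bumper-to-bumper headway to the leader, `v` the follower's
    speed, `dv` the leader's speed minus the follower's speed. The model
    relaxes toward an optimal velocity V_opt(gap) and additionally reacts
    to the relative speed, which is what distinguishes it from the OVM."""
    v_opt = v1 + v2 * tanh(c1 * (gap - length) - c2)
    return kappa * (v_opt - v) + lam * dv
```

Integrating this acceleration with a small time step for each vehicle yields the simulated velocity traces that the paper compares against measurements.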
Abstract: As processor speeds increase rapidly, the relatively poor performance of disk I/O has become the bottleneck of computer systems and network transmission. Redundant Arrays of Independent (or Inexpensive) Disks (RAID), which improve parallel disk access, disk space utilization, and user data availability, have therefore been adopted by DAS and SAN and are widely applied. However, the greater the number and capacity of the disks, the more failures occur, so RAID availability analysis has become an important and urgent issue. This paper proposes an availability model for RAID storage systems based on Bernoulli trials and the binomial distribution. With this model, one can analyze how combination orders and methods affect RAID availability in mixed configurations, and how disk quantity relates to RAID availability. Both analyses provide decision support and a theoretical foundation for users selecting a RAID configuration, and help users reduce storage costs.
129
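The abstract does not give the model's formulas; a minimal sketch of a Bernoulli-trial availability computation, assuming independent, identical disks and an array that survives up to a fixed number of failures (function and parameter names are my own):

```python
from math import comb

def raid_availability(n_disks, disk_avail, max_failures):
    """Probability that at most `max_failures` of `n_disks` independent
    disks are down, each disk being available with probability
    `disk_avail` -- i.e. the binomial tail sum over tolerable failures."""
    q = 1.0 - disk_avail  # per-disk failure probability
    return sum(
        comb(n_disks, k) * (q ** k) * (disk_avail ** (n_disks - k))
        for k in range(max_failures + 1)
    )

# RAID 5 over 5 disks (tolerates one failure), 99% per-disk availability
print(round(raid_availability(5, 0.99, 1), 6))  # → 0.99902
```

Varying `n_disks` and `max_failures` for different RAID levels and nestings is what lets such a model compare combination orders and disk counts, as the paper describes.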
Abstract: By revising the geometry and topology information of the boundaries, a robust clipping method is proposed for overlapped surface patches. All boundary vertices of one patch are flagged according to their relationship with the boundary loops of the other patch, and the overlap regions are then recognized from these vertex flags. If the overlap regions are slim and only one boundary curve in each patch is intersected, the corresponding parts of the curves are subdivided, their vertices are repositioned, and the interfering curves are sewn together. Otherwise, the boundaries are reorganized according to the vertex flags. There are two candidate clipping schemes for each pair of overlapped patches, and the one with fewer boundary curves is taken as the final result. Examples verify the robustness of the algorithm.
136
Abstract: Frequent pattern mining is one of the most important tasks in data mining. Traditional algorithms usually handle simply structured data, but much real-world data is complex, such as tree-structured and graph-structured data. When mining such complex structures, constraints are often imposed to narrow the search space, at the cost of losing some useful, interesting patterns. In this paper, we present a novel algorithm based on higher-order logic to discover frequent patterns in complex structured data, overcoming some drawbacks of traditional algorithms. We use Escher, a higher-order logic programming language, to discover frequent patterns in complex structured data. Experimental results show that the proposed algorithm is efficient and scalable.
141
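For contrast with the higher-order-logic approach above, here is the traditional flat-itemset baseline in its most naive form. This sketch handles only simple transactional data, precisely the limitation the paper addresses, and does not attempt the Escher-based method; names and the brute-force enumeration are my own:

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support, max_size=3):
    """Naive frequent-itemset mining over flat transactions: count every
    itemset up to `max_size` and keep those occurring in at least
    `min_support` transactions. Real miners (Apriori, FP-growth) prune
    this enumeration; tree/graph data needs structured patterns instead."""
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for k in range(1, max_size + 1):
            for combo in combinations(items, k):
                counts[combo] += 1
    return {s: c for s, c in counts.items() if c >= min_support}
```

Because the pattern space here is just subsets of items, no structural constraints are needed; it is when patterns become subtrees or subgraphs that the search space explodes, motivating the paper's approach.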
Abstract: This paper presents a stereo matching method for a handheld scanner based on a cross laser. The scanning system, with two cameras and one cross laser beam, can capture fine detail accurately, and 3D information is obtained by processing the cross-laser images. Our goal is to estimate the transformation of the measured objects in the world coordinate frame. To achieve this, we first use the 8-point algorithm to compute the fundamental matrix; points in the left and right images are then matched based on the fundamental matrix, and the 3D coordinates of points on the cross laser are obtained by triangulation and surface differentials. Experiments show that this algorithm solves the problem of matching points that arrive in no particular order and, to some extent, increases the efficiency of matching the cross laser stripes.
145
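The 8-point algorithm named above is standard; a minimal sketch follows (assuming NumPy, and omitting the Hartley normalization a production version would use). It stacks the epipolar constraints x_r^T F x_l = 0 into a homogeneous system, solves by SVD, and enforces the rank-2 property of a fundamental matrix:

```python
import numpy as np

def eight_point(pts_l, pts_r):
    """Estimate the fundamental matrix F from >= 8 point correspondences
    given as (x, y) image coordinates. Each correspondence contributes
    one row of A in A f = 0; the solution is the right singular vector
    for the smallest singular value, reshaped to 3x3 and projected to
    the nearest rank-2 matrix."""
    A = []
    for (xl, yl), (xr, yr) in zip(pts_l, pts_r):
        A.append([xr * xl, xr * yl, xr, yr * xl, yr * yl, yr, xl, yl, 1.0])
    _, _, vt = np.linalg.svd(np.asarray(A))
    F = vt[-1].reshape(3, 3)
    u, s, vt2 = np.linalg.svd(F)
    s[2] = 0.0                      # rank-2 enforcement
    return u @ np.diag(s) @ vt2
```

Once F is known, a candidate point in the left image restricts its match in the right image to the epipolar line F x_l, which is how matches on the laser stripes can be found even when the points arrive in no particular order.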