(4)
The one-variable function can be found from the N+2 interpolating patterns; it is described by the knots , and is called the weight function between the j-th output point (neuron) and the i-th input point (variable). Here z_sj denotes the output values of the network's j-th neuron, and z_j denotes the target patterns of the j-th point of the network. In order to find this function, the definition of the best square approximation is first given below. Suppose that the function set  is a subset of , and that the function satisfies the following formula.
(5)
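The symbols of this formula did not survive extraction. For reference only, a standard form of the best square approximation condition in the weighted L² sense is sketched below; the interval [a, b], the weight ρ, and the subset Φ are generic placeholders rather than the paper's own notation.

```latex
% Standard best square (least-squares) approximation condition, for reference only.
% \Phi is a finite-dimensional subset of C[a,b]; \rho(x) \ge 0 is a weight function.
\[
\varphi^{*} \in \Phi \ \text{is a best square approximation of } f
\quad\Longleftrightarrow\quad
\int_{a}^{b} \rho(x)\,\bigl[f(x)-\varphi^{*}(x)\bigr]^{2}\,dx
  \;=\; \min_{\varphi \in \Phi} \int_{a}^{b} \rho(x)\,\bigl[f(x)-\varphi(x)\bigr]^{2}\,dx .
\]
```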
Here  is the approximation function in the subset of . Suppose that the approximation polynomial is expressed in the following form.
(6)
The coefficients of the approximation polynomial in (6) can be expressed as follows.
(7)
In the case of multi-dimensional input and multi-dimensional output, the error of the orthogonal weight function neural network can be expressed as follows.
(9)
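The paper's exact orthogonal basis, knots, and weight functions are not recoverable from this text, so the NumPy sketch below only illustrates the generic procedure behind (6)-(9): express the approximation polynomial in an orthogonal (here, Chebyshev) basis, obtain its coefficients by a best-square fit, and measure the remaining squared approximation error. The target function, the sample points, and the choice of basis are assumptions made purely for illustration.

```python
# Hedged sketch: best-square fit in an orthogonal (Chebyshev) basis with NumPy.
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical one-variable target function sampled at N+2 points ("patterns").
def f(x):
    return np.sin(2.0 * np.pi * x)

N = 10
x = np.linspace(-1.0, 1.0, N + 2)
y = f(x)

# Coefficients of the approximation polynomial in the orthogonal basis (cf. (6)-(7)).
deg = 5
approx = C.Chebyshev.fit(x, y, deg)
print("orthogonal-basis coefficients:", approx.convert().coef)

# Mean squared approximation error over the sample points (cf. the network error in (8)-(9)).
print("mean squared approximation error:", np.mean((y - approx(x)) ** 2))
```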
where . Expressions (8) and (9) are important for some applications of generalization.

Sensitivity Analysis

Suppose that  is the vector of input patterns and  is the vector of perturbations; then the disturbed input patterns can be expressed as follows.
(10)
In this paper, two kinds of sensitivity are discussed, i.e., the sensitivity of the theoretical error and the sensitivity of the approximation error. The theoretical error is given below.
(11)
The theoretical error of a system includes the model error and the approximation error. The model error can be expressed as  and , and the approximation error is . Assume that the vector of the p-th noise-free input pattern is , where . In this case, the theoretical output of the j-th neuron is as follows.
(12)
Assume that the noise vector of the input patterns is ; then the theoretical output caused by the noise is as follows.
(13)
The theoretical output error of the j-th neuron caused by the noise is as follows.
(14)
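As a rough illustration of (10)-(14), the sketch below runs the same scalar-output network on a noise-free input pattern and on a perturbed copy, and takes the difference of the two outputs as the output error caused by the noise. The forward pass used here (a sum of per-input cubic "weight functions" with random coefficients) is an assumed stand-in, not the paper's actual orthogonal weight function network.

```python
# Hedged sketch: output error of a stand-in network under an input perturbation.
import numpy as np

rng = np.random.default_rng(0)
n_inputs = 24                                    # matches the 24-1 architecture of the Example
weight_poly = rng.normal(size=(n_inputs, 4))     # hypothetical per-input cubic weight functions

def forward(x):
    """Stand-in forward pass: sum of per-input weight functions w_i(x_i)."""
    return float(sum(np.polyval(weight_poly[i], x[i]) for i in range(n_inputs)))

x_clean = rng.uniform(0.0, 1.0, size=n_inputs)   # noise-free input pattern
delta_x = rng.normal(scale=1e-2, size=n_inputs)  # input perturbation, cf. (10)

z_clean = forward(x_clean)                       # theoretical output without noise, cf. (12)
z_noisy = forward(x_clean + delta_x)             # output under the perturbed input, cf. (13)
print("output error caused by the noise:", z_noisy - z_clean)   # cf. (14)
```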
Suppose that the function  is as follows.
(15)
Formula (14) can be transformed as follows.
(16)
In (16),  is the value of the k-th term of the orthogonal polynomials, , and . When the perturbation of the input patterns tends to zero, we have the following.
(17)
where
(18)
The output perturbations can be expressed as follows.
(19)
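In the same spirit as (16)-(19): when the input perturbation tends to zero, the output perturbation is dominated by the first-order term, i.e. the inner product of the output gradient with the perturbation vector. The sketch below estimates that term for a generic scalar-output network by central finite differences; the paper's exact expansion in orthogonal-polynomial terms is not reproduced here.

```python
# Hedged sketch: first-order estimate of the output perturbation for small input noise.
import numpy as np

def first_order_perturbation(forward, x, delta_x, eps=1e-6):
    """Approximate dz = sum_i (dz/dx_i) * delta_x_i using central differences."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        grad[i] = (forward(x + e) - forward(x - e)) / (2.0 * eps)
    return float(grad @ delta_x)

# Example with a simple stand-in scalar-output network.
forward = lambda x: float(np.sum(x ** 2))
x = np.linspace(0.1, 0.9, 24)
delta_x = 1e-3 * np.ones_like(x)
print("first-order output perturbation:", first_order_perturbation(forward, x, delta_x))
```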
The statistical sensitivity of weight function neural networks is defined as follows.
(20)
Assuming that the input variables are independent, we have the following.
(21)
(22)
Then formula (21) can be expressed as follows.
(23)
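A common reading of statistical sensitivity of the kind defined in (20)-(23) is the ratio of the standard deviation of the output perturbation to the standard deviation of independent input perturbations. The Monte Carlo estimate below follows that reading under an assumed i.i.d. Gaussian noise model; it illustrates the concept and is not the paper's closed-form expression.

```python
# Hedged sketch: Monte Carlo estimate of statistical sensitivity under i.i.d. input noise.
import numpy as np

def statistical_sensitivity(forward, x, sigma=1e-3, n_samples=2000, seed=0):
    """Std of the output perturbation divided by the std of the input perturbation."""
    rng = np.random.default_rng(seed)
    z0 = forward(x)
    dz = np.array([forward(x + rng.normal(scale=sigma, size=x.shape)) - z0
                   for _ in range(n_samples)])
    return float(dz.std() / sigma)

# Example with a stand-in scalar-output network.
forward = lambda x: float(np.sum(np.sin(x)))
x = np.linspace(0.0, 1.0, 24)
print("estimated statistical sensitivity:", statistical_sensitivity(forward, x))
```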
The theoretical sensitivity is as follows.
(24)
The sensitivity values based on the theoretical error can be calculated by (24). The sensitivity of the approximation error is as follows.
(25)
Example

From the results of the sensitivity analysis given in this paper, we can see that when the disturbance of the input patterns increases, the corresponding error of the network's output also increases. By this principle we can remove some undetected patterns to obtain a higher detection rate. The example given below describes intrusion detection by orthogonal weight function neural networks. The network architecture used in this example is 24-1, which means that the input is 24-dimensional and the output is 1-dimensional. The example uses 540 data patterns, including Back, Ipsweep and Satan attacks, normal data streams, and some remaining undetected data. We select a group of data as learning patterns, calculate the perturbation of the other data patterns (test patterns), and then, according to the sensitivity values, remove the intrusion detection data with large perturbation values. The simulation results are as follows.

Fig. 1 Intrusion detection experiment with original data
Fig. 2 Curve of sensitivity
Fig. 3 Intrusion detection experiment with optimized data

Fig. 1 shows the classification results, as a histogram, for the 540 intrusion detection data sets. Fig. 2 shows the actual sensitivity curve for the input disturbance of the 540 intrusion detection records; as can be seen, in the range [0.95, 1] the sensitivity value changes sharply, which means that the input disturbance of these intrusion records exerts an enormous influence on the network. Because the input perturbation of the intrusion detection information is obtained by normalization, taking [0.95, 1] as the boundary, the corresponding input records in the original data can be identified and then removed. Using the sensitivity analysis given in this paper, we can improve the detection efficiency, as shown in Fig. 3.

Acknowledgment

This work was supported by the Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (yx002001). I thank T. Guo, my graduate student, for the simulation examples given in this paper.
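To make the screening step of the Example concrete, the sketch below normalizes per-pattern sensitivity values to [0, 1] and removes the patterns whose normalized sensitivity falls in the boundary range [0.95, 1]. The 540 random 24-dimensional patterns and the sensitivity function used here are dummy placeholders standing in for the real intrusion detection data and for the sensitivity computed by (24).

```python
# Hedged sketch: remove test patterns whose normalized sensitivity lies in [0.95, 1].
import numpy as np

def screen_patterns(patterns, sensitivity_fn, threshold=0.95):
    """Return the patterns kept after dropping those with large normalized sensitivity."""
    s = np.array([sensitivity_fn(p) for p in patterns], dtype=float)
    s_norm = (s - s.min()) / (s.max() - s.min() + 1e-12)   # normalize to [0, 1]
    keep = s_norm < threshold                              # drop the [0.95, 1] band
    return patterns[keep], s_norm

# Usage with dummy 24-dimensional patterns and a dummy sensitivity function.
rng = np.random.default_rng(1)
patterns = rng.uniform(size=(540, 24))
kept, s_norm = screen_patterns(patterns, lambda p: float(np.sum(p ** 2)))
print("patterns kept after sensitivity screening:", kept.shape[0])
```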