Advanced Optimal Buffer Scheduling Policy in Opportunistic Networks

Opportunistic Networks (ONs) are a newly emerging type of Delay Tolerant Network (DTN) that opportunistically exploits unpredicted contacts among nodes to share information. As in all DTN environments, ONs experience frequent and long delays, and an end-to-end path may exist only briefly and unpredictably. In this paper, we apply optimization theory to propose a novel buffer management strategy, the Optimal Buffer Scheduling Policy (OBSP), which optimizes the order of message forwarding and message discarding. OBSP performs a global optimization over delivery ratio, transmission delay, and overhead to improve the overall performance of routing algorithms. Simulation results show that OBSP clearly outperforms existing buffer management policies.


Introduction
Delay Tolerant Networks (DTNs) [1], based on the store-carry-and-forward routing principle [2], are characterized by intermittent connectivity, high latency, and the absence of a persistent end-to-end path, which makes routing in DTNs a challenging problem. Opportunistic Networks (ONs) are a newly emerging type of DTN that can opportunistically exploit unplanned contacts between nodes to share information. Many routing algorithms have recently been proposed [3][4][5][6] to address this problem. When buffer resources are limited, a node carrying a copy of each message will quickly saturate its buffer and can no longer store copies of incoming messages. Therefore, designing an efficient buffer management strategy is particularly important for improving the overall performance of the network.
In this paper, we study the problem of buffer management in ONs. First, we propose an optimal buffer forwarding and discarding strategy that considers the delivery ratio, the delivery delay, and the overhead. Then, we present a method, based on lists maintained by each node, to estimate the number of copies of message i at time t, n_i(t). Finally, we have run extensive simulations in the ONE simulator [7] and compared our policy to several other buffer policies. The simulation results show that our policy outperforms the existing ones.
The rest of this paper is organized as follows. Section II reviews related work. Section III discusses the core idea of buffer optimization. Section IV presents the simulation results and compares OBSP with other buffer strategies. Finally, Section V concludes the paper.

Related work
To improve network performance, for example to raise the message delivery success rate and to reduce the transmission delay, researchers have proposed a variety of buffer management strategies.
In [8], the authors derive the optimal functions for the independent metrics delivery ratio and delay under the assumption of sufficiently large bandwidth. However, that work does not consider the overhead cost, and bandwidth is often limited in practice. The authors of [9] propose prioritization schemes for messages such that nodes forward high-priority messages and drop low-priority ones. However, this policy relies on historical information about nodes, which is hard to obtain in practice, so there is no guarantee that the policy of [9] works well in real applications. K. Shin and S. Kim [10] proposed two utility functions to maximize the message delivery success rate and minimize the average message delay based on message properties such as the number of copies of a message, its time to live (TTL), and its remaining TTL. In addition, [10] proposed a method for estimating the number of copies of a message. However, because the copy counts held by two nodes remain inconsistent after they meet and exchange information, the applicability of that method is limited.
In this paper, we employ optimization theory to derive a buffer scheduling strategy comprising both a forwarding policy and a discarding policy, and we deduce the optimal functions for the delivery ratio, the average transmission delay, and the average overhead cost, respectively.

Buffer optimization Policy
In this section, we first study which message a node should send first, and which message it should discard first when its buffer is full. We derive the optimal functions for each performance metric independently: average delivery ratio, average delivery delay, and average overhead cost. To formulate the Optimal Buffer Scheduling Policy (OBSP), we assume only that the inter-meeting times between nodes follow an exponential distribution. Table 1 lists the main parameters used in this paper.
Maximizing the average delivery ratio. We assume that a number of messages, all with a finite TTL, are propagated in the network by replication. The source of a message keeps a copy of it for the whole TTL duration, while intermediate nodes are not obliged to do so.
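For concreteness, the exponential-meeting assumption can be stated explicitly. Here $X$ denotes the inter-meeting time of a node pair and $\lambda$ the pairwise meeting rate; this notation is assumed for the sketch below, not taken verbatim from Table 1:

```latex
% Inter-meeting times of any node pair are i.i.d. exponential with rate \lambda:
P\{X > t\} = e^{-\lambda t},
\qquad
P\{\text{a given pair meets within the next } R \text{ time units}\} = 1 - e^{-\lambda R}.
```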

Smart Technologies for Communication
Let $N$ denote the number of nodes in the network, $\lambda$ the pairwise meeting rate, $T_i$ the elapsed time since message i's creation, $R_i$ its remaining lifetime (TTL minus $T_i$), $n_i(t)$ the number of copies of message i at time t, and $K(t)$ the number of messages generated up to time t. The number of nodes (excluding the source), $k_i(T_i)$, that have seen message i from its creation until elapsed time $T_i$ includes nodes that have since dropped it, so we have the following equation:

$k_i(T_i) \ge n_i(T_i)$  (1)

Then we obtain the function for the average delivery ratio metric. The probability that message i has already been delivered is equal to:

$P\{\text{message } i \text{ already delivered}\} = \dfrac{k_i(T_i)}{N-1}$  (2)

The delivery ratio of message i is:

$dr_i = \dfrac{k_i(T_i)}{N-1} + \left(1 - \dfrac{k_i(T_i)}{N-1}\right)\left(1 - e^{-\lambda n_i(T_i) R_i}\right)$  (3)

The average delivery ratio of all generated messages is:

$dr = \dfrac{1}{K(t)} \sum_{i=1}^{K(t)} dr_i$  (4)

Differentiating $dr_i$ with respect to $n_i(t)$, we get Eq. 5:

$\Delta dr_i = \dfrac{\partial\, dr_i}{\partial n_i(t)} = \left(1 - \dfrac{k_i(T_i)}{N-1}\right) \lambda R_i\, e^{-\lambda n_i(T_i) R_i}$  (5)

To maximize dr, we should realize the largest $\Delta dr$. Eq. 5 is a decreasing function of $n_i(t)$, so this amounts to sending the message with the smallest $n_i(t)$ first, and dropping the message with the largest $n_i(t)$ first when the buffer is full, until enough free buffer space is available for the newly arriving message.
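As a numerical illustration of this monotonicity, the marginal delivery-probability gain of one extra copy can be sketched in Python; the function name and parameter values below are illustrative assumptions, not from the paper:

```python
import math

def delivery_ratio_gain(n_i, k_i, N, lam, remaining_ttl):
    """Marginal increase in message i's delivery probability from one more
    copy (in the spirit of Eq. 5): it decays exponentially in n_i, so
    messages with few copies benefit most from being forwarded."""
    return (1 - k_i / (N - 1)) * lam * remaining_ttl * math.exp(-lam * n_i * remaining_ttl)

# A message with 2 copies gains far more from one extra copy than one with 20,
# which is why OBSP forwards low-copy messages first (parameter values assumed).
g_few = delivery_ratio_gain(n_i=2, k_i=3, N=100, lam=0.01, remaining_ttl=300)
g_many = delivery_ratio_gain(n_i=20, k_i=25, N=100, lam=0.01, remaining_ttl=300)
assert g_few > g_many
```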
Minimizing the average delivery delay. In a similar way, we obtain the optimal function for the average delivery delay metric. Given that message i has not yet been delivered, the destination meets one of its $n_i(t)$ carriers at rate $\lambda n_i(t)$, so the expected delivery delay of message i is:

$d_i = \dfrac{1}{\lambda\, n_i(T_i)}$  (6)

The average delivery delay is:

$d = \dfrac{1}{K(t)} \sum_{i=1}^{K(t)} d_i$  (7)

Differentiating $d_i$ with respect to $n_i(t)$, we get Eq. 8:

$\Delta d_i = \dfrac{\partial\, d_i}{\partial n_i(t)} = -\dfrac{1}{\lambda\, n_i(T_i)^2}$  (8)

To minimize d, we should minimize $\Delta d$. Eq. 8 is an increasing function of $n_i(t)$, so again we should send the message with the smallest $n_i(t)$ first, and drop the message with the largest $n_i(t)$ first when the buffer is full.
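The ordering rule shared by the delivery-ratio and delay metrics can be sketched as follows; the (message id, estimated copy count) representation and the message sizes are illustrative assumptions, not the paper's data structures:

```python
def forwarding_order(messages):
    """Sort messages for transmission: fewest estimated copies n_i first.
    Each message is a (msg_id, n_i) pair."""
    return sorted(messages, key=lambda m: m[1])

def drop_until_fits(buffer, sizes, free, needed):
    """Drop the messages with the largest n_i until `needed` bytes fit.
    `buffer` maps msg_id -> n_i; `sizes` maps msg_id -> message size."""
    for msg_id, _ in sorted(buffer.items(), key=lambda kv: kv[1], reverse=True):
        if free >= needed:
            break
        free += sizes[msg_id]
        del buffer[msg_id]
    return free

buf = {"a": 9, "b": 2, "c": 5}
order = forwarding_order(list(buf.items()))  # "b" (2 copies) goes first
```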
Minimizing the average overhead cost. Using a similar method, we obtain the optimal function for the overhead metric as well. We take the overhead cost of message i to be the expected number of transmissions of message i over its remaining lifetime; under epidemic replication, copies are created at rate $\lambda n_i(t)\left(N - n_i(t)\right)$, so the overhead cost of message i is:

$oc_i = \lambda R_i\, n_i(T_i)\left(N - n_i(T_i)\right)$  (9)

Advanced Engineering Forum Vol. 4 15
The average overhead cost of all generated messages is:

$oc = \dfrac{1}{K(t)} \sum_{i=1}^{K(t)} oc_i$  (10)

Differentiating $oc_i$ with respect to $n_i(t)$, we get Eq. 11:

$\Delta oc_i = \dfrac{\partial\, oc_i}{\partial n_i(t)} = \lambda R_i \left(N - 2 n_i(T_i)\right)$  (11)
K. Shin and S. Kim [10] found by simulation that, under their EBMP strategy and other policies such as FIFO and Random, most messages were delivered to their destinations in fewer than 10 hops. Because ONs contain many nodes and the number of copies of message i at any time is far smaller than the number of nodes in the network, we have $N - 2n_i(t) > 0$ and hence $\Delta oc_i > 0$; that is, $oc_i$ is an increasing function of $n_i(t)$. Therefore, to minimize oc, we should discard the copy of the message with the largest $oc_i$ when the buffer is full, which is equivalent to dropping the copy of the message with the largest $n_i(t)$ until the remaining buffer space can accommodate the newly arriving message. Conversely, we should forward the message with the smallest $n_i(t)$ first.
Establishing the path list for every generated message. From the preceding analysis, to maximize dr, minimize d, and minimize oc, it is reasonable to send the message with the smallest $n_i(t)$ first and to drop the message with the largest $n_i(t)$ first when the buffer is full. We therefore need the value of $n_i(t)$, the number of copies of message i at time t. Unfortunately, it is almost impossible to obtain the exact value of $n_i(t)$, owing to the intermittent connectivity and the lack of an end-to-end path in ONs. Our proposed solution is to compute a close approximation instead.
We do this by establishing a path list for every generated message and implementing a learning process that gathers global information about which nodes each generated message has passed through. Every node maintains, for each message it carries, a list of all the nodes that message has passed through. When two nodes meet, they exchange the lists of the messages they carry.
Fig. 1 The changes of the lists maintained by each node after they meet
For example, if the copy of message i carried by node 1 has passed through nodes {1,4,7,8,11} and the copy carried by node 2 has passed through nodes {2,5,7,8,12}, then after the two nodes meet, the set of nodes that message i has passed through, as recorded by both of them, becomes {1,2,4,5,7,8,11,12}, as described in Fig. 1.
After a period of time, all the nodes carrying message i will hold similar estimates of the number of nodes that message i has passed through. In the same manner, the other generated messages will all obtain approximate values of the number of nodes they have passed through.
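The list-merging step described above can be sketched as follows; the dictionary-based representation (message id mapped to a set of node ids) is an illustrative assumption, not the paper's actual data structure:

```python
def merge_path_lists(lists_a, lists_b):
    """When two nodes meet, the path list of each message becomes the union
    of the two nodes' lists, approximating the set of nodes the message has
    passed through (and hence the copy-count estimate used by OBSP)."""
    merged = {msg_id: set(nodes) for msg_id, nodes in lists_a.items()}
    for msg_id, nodes in lists_b.items():
        merged[msg_id] = merged.get(msg_id, set()) | nodes
    return merged

# The example from the text: {1,4,7,8,11} at node 1 and {2,5,7,8,12} at
# node 2 merge into {1,2,4,5,7,8,11,12}.
node1 = {"i": {1, 4, 7, 8, 11}}
node2 = {"i": {2, 5, 7, 8, 12}}
merged = merge_path_lists(node1, node2)
n_i_estimate = len(merged["i"])  # 8 nodes have seen message i
```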

Simulation results and analysis
In this section, we verify the optimal message dropping and sending schemes by simulation. We employ the epidemic routing protocol and the Random Waypoint (RWP) mobility model to evaluate our policy against other existing classical strategies. In the simulation results, we focus on the delivery ratio, the average delivery delay, and the average network overhead.
Simulation environments and parameter settings. We use the Opportunistic Network Environment (ONE) simulator. In this simulator, we assume that all nodes have the same communication range: whenever two nodes are within range of each other, they can set up a connection, and they stop transferring when they move out of range. We also do not consider the impact of signal fading or interference. The concrete parameter settings are listed in Table 2.
For a fixed number of nodes and a fixed transmission range, we measured the message delivery rate, average transmission delay, and network overhead of several policies as the buffer size varies; the comparison results are shown in Figs. 2-4. From Fig. 2, we can see that for every buffer strategy the average delivery rate increases with the buffer size, and that our strategy outperforms the other buffer policies in terms of average delivery ratio under the epidemic routing algorithm. Our strategy clearly shows its advantage over the other buffer management strategies, mainly because it gives messages with relatively few copies more forwarding opportunities.
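A scenario of this kind can be captured in a ONE settings fragment like the following. This is a sketch only: the keys follow the ONE simulator's settings-file conventions, but the values are illustrative and are not the paper's Table 2 parameters:

```
Scenario.name = obsp-epidemic
Scenario.endTime = 43200
Group.movementModel = RandomWaypoint
Group.router = EpidemicRouter
Group.bufferSize = 5M
Group.msgTtl = 300
Group.nrofHosts = 100
Events1.class = MessageEventGenerator
Events1.interval = 25,35
Events1.size = 500k,1M
```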
In terms of average delivery delay, shown in Fig. 3, our policy is also the best among these buffer management strategies. Under the epidemic routing algorithm, OBSP obtains the lowest delay because messages with relatively few copies get more chances to be sent. A message with relatively many copies has been in the network for a relatively long time without reaching its destination, so we consider such an old message to have only a slim probability of being delivered in the future. It is therefore reasonable to send the newer messages first.

In terms of average overhead cost, shown in Fig. 4, our strategy achieves the lowest overhead. This is because our policy discards copies of the messages with the most copies, which contribute most to the overhead cost, while the relatively newly created messages with few copies have little impact on the overhead cost in ONs.

Conclusion
In this paper, we have proposed an optimal buffer forwarding and discarding strategy for ONs. Our policy improves the overall performance of routing algorithms in ONs. To demonstrate the superiority of the proposed policy, we have run extensive simulations in the ONE simulator and compared it to several other buffer policies. The simulation results show that our policy outperforms the existing ones with respect to delivery rate, delivery delay, and overhead cost.
Note that in this paper we assume that all messages have the same TTL (time to live). In the future, we plan to consider different TTLs and to combine OBSP with other routing algorithms.