… of packet loss (loss rate), and so on. The QoS Forum defines QoS as the ability of a network element (e.g., an application, a host, or a router) to provide some level of assurance for consistent network data delivery [13]. The IETF RFC 2386 [10] characterizes QoS as a set of service requirements to be met by the network while transporting a packet stream from source to destination. Different applications require different levels of assurance, or more exactly, QoS requirements, from the network. Real-time applications such as voice and video transmission need a packet by a certain time, otherwise the packet is essentially worthless; non-real-time applications such as file transfer and e-mail emphasize reliability instead.

QoS support has been widely discussed in various networks. The Asynchronous Transfer Mode (ATM) network, for instance, has placed great emphasis on the support of QoS for traffic of different classes. Tremendous efforts have been made by the IETF to enhance the current Internet to support multimedia services. QoS mechanisms are applied to wisely manage network resources (bandwidth, buffers) to meet the various QoS requirements of applications. With the rising popularity of multimedia applications among end users in various networks and the widened scope of applications for ad hoc wireless networks (which are also called Mobile Ad hoc NETworks [MANETs]), QoS support of distributed, non-real-time, and real-time applications in MANETs has emerged as a major area of research. This issue, however, as Corson stated in [8], is a challenging task faced by researchers.

A network's ability to provide a specific QoS depends upon the inherent properties of the network itself [25], which span all the elements in the network. For the transmission link, the properties include link delay, throughput, loss rate, and error rate. For the nodes, the properties include hardware capability, such as processing speed and memory size. Above the physical qualities of nodes and transmission links, the QoS control algorithms operating at different layers of the network also contribute to the QoS support in networks.

Unfortunately, the inherent features of MANETs show weak support for QoS. The wireless link has low, time-varying raw transmission capacity with relatively high error rate and loss rate. In addition, the various wireless physical technologies that nodes may use simultaneously to communicate make MANETs heterogeneous in nature. Each technology will require a MAC layer protocol to support QoS. Therefore, the QoS mechanisms above the MAC layer should be flexible enough to fit the heterogeneous underlying wireless technologies [9].

In the literature, many proposals have been presented to support QoS in MANETs, including QoS MAC protocols [1,17,20,21,27,28,30,36], QoS routing protocols [5,6,17,20,21,25,26], and resource reservation protocols [19,30]. While these proposals are sufficient to meet QoS needs under certain assumptions, none of them proposes a QoS model for MANETs. It is obvious that a concrete definition of the type of service delivered to the user is needed as we look at the development of a network for commercial application and the diversity of the applications' requirements. This description of the service delivered by the network is called the service model; it documents the commitments the network makes to the clients that request the provided services.
A QoS model describes a set of end-to-end services, and it is up to the network to ensure that the services offered at each link along a path combine meaningfully to support the end-to-end service [14]. More generally, QoS models permit clients to select a number of abstract guarantees that apply to a sequence of operations. These guarantees govern properties such as the timing, ordering, and reliability of the operations [29]. The system promises to ensure these properties for operations performed by clients, or else to inform them that the guarantees cannot be met and perhaps refuse to accept further operations. Different communication networks extend their service models to permit multiple types of services from their own perspectives. The ATM network starts with the assumption of existing Constant Bit Rate (CBR) services, and further services have been derived with relaxations of the bandwidth guarantees and timing constraints. In the IETF, the Integrated Services (IntServ) model starts from the assumption of a best-effort service and refines this to add guarantees of various kinds, typically of delay variance and bounds, and then throughput [11]. The Differentiated Services (DiffServ) model divides services into several classes with various requirements and priorities [3].

The purpose of this chapter is to look into the problem of QoS provisioning in MANETs from a systemic view: first, by designing a suitable QoS model for MANETs after considering its features; second, by constructing a QoS framework architecture to realize the QoS model.
The work on certain aspects of QoS issues in MANETs in the literature can thus fit into the framework architecture. The rest of this chapter is organized as follows. Section 28.2 discusses how the features of MANETs affect the design of a QoS model for MANETs. Section 28.3 proposes a Flexible QoS Model for MANETs (FQMM), and Section 28.4 presents a framework architecture to realize FQMM. Evaluation of FQMM is introduced in Section 28.5 and, finally, conclusions are drawn in Section 28.6.

28.2 MANET Features and QoS Models

A QoS model for MANETs should first consider the features of MANETs, e.g., dynamic topology, time-varying link capacity, and limited power [33]. In addition, as applications of MANETs in the civilian sector become more popular, we assume that MANETs will be seamlessly connected to the Internet in the future. Therefore, the QoS model for MANETs should also consider the existing QoS models for the Internet, viz., IntServ and DiffServ. However, these models are aimed at wired networks. The applicability of IntServ and DiffServ in MANETs therefore needs investigation.

The idea behind IntServ [4] is borrowed from the paradigm of the telephony world, i.e., adopting a virtual circuit connection mechanism. The Resource ReSerVation Protocol (RSVP) [35] is used as a signaling protocol to set up and maintain the virtual connections. Routers apply corresponding resource management schemes, e.g., Class Based Queueing (CBQ) [12], to support the QoS requirements of the connection. Based on these mechanisms, IntServ provides quantitative QoS for every flow (a flow is an application session between a sender and a receiver). However, such per-flow provisioning results in a huge storage and processing overhead on routers, which is the well-known scalability problem of IntServ.

The tenet of DiffServ [3] is to use a relative-priority scheme to soften the requirements of hard QoS models such as ATM and IntServ, thereby mitigating the scalability problem of the latter. At the boundary of a network, traffic entering the network is classified, conditioned, and assigned to different behavior aggregates by marking the DiffServ field [22] in the IP packet header. Within the core of the network, packets are forwarded according to the Per-Hop Behavior (PHB) [2] associated with the DiffServ field. Implicit reservation is done in the form of a service level agreement, which is agreed upon between users and network providers. DiffServ provides qualitative QoS support for aggregate flows.

We now look into the effects of the salient features of MANETs on the possible QoS model for MANETs.

28.2.1 Multiple Node Functionalities

Each node in a MANET has two functions, i.e., host and router/switch. As a router, a node routes packets for other nodes, similar to what the backbone routers do in the Internet. Hence, a MANET is similar to a backbone network in the sense of the functionalities of its nodes, although the size of a MANET is not comparable with that of the Internet backbone. DiffServ is designed to overcome the difficulty in implementing and deploying IntServ and RSVP in the Internet backbone. Thus, the DiffServ approach may fit MANETs. In addition, DiffServ is lightweight in interior nodes, as it does away with per-flow states and signaling at every hop. In MANETs, keeping the protocol lightweight in interior nodes is important, since putting too heavy a load on a temporary forwarding node that is moving is unwise.
Therefore, DiffServ and MANETs are also similar in that they are lightweight in interior nodes. Intuitively, these similarities imply a potential usage of the DiffServ approach in MANETs.

28.2.2 Node Mobility and Lack of Infrastructure

Node mobility is the basic cause of the dynamics in MANETs. The MAC layer allocation of bandwidth to each node changes dynamically according to mobility scenarios. Consequently, the aggregate network capacity is also time varying. Furthermore, the absence of infrastructure (without even a base station)
makes the control of bandwidth difficult. The roles of nodes as hosts or routers also change together with node mobility and the dynamic topology. All these dynamics in MANETs require the QoS model and the supporting QoS mechanisms to be adaptive.

28.2.3 Power Constraints

In MANETs, nodes' processing capability is limited because of limited battery power. This feature requires low processing overheads at nodes. Therefore, the control algorithms and QoS mechanisms should use bandwidth and energy efficiently, e.g., try to keep the inter-router information exchange to a minimum. However, in IntServ, all routers must have the four basic components: RSVP, admission control routine, classifier, and packet scheduler. Processing all these functions is power consuming and results in high processing overheads. Therefore, the IntServ approach is undesirable for power-constrained MANETs.

28.2.4 Limited Bandwidth and Network Size

At first glance, this feature makes the scalability problem less likely to occur in MANETs. However, as fast radios and efficient low-bandwidth compression technology develop rapidly, the emergence of high-speed and large-sized MANETs with plenty of applications is foreseeable. At that time, the pure IntServ approach for MANETs will inevitably meet the scalability problem, as in high-speed fixed networks today.

28.2.5 Time-Varying Feature

In MANETs, the link capacity is time varying due to the physical environment of nodes, such as fading and shadowing, the mobility of nodes, and the dynamics of the network topology. This time-varying feature makes QoS mechanisms in MANETs more difficult than in wired fixed networks. Take the signaling protocol, for example. A signaling protocol generally comprises three phases: connection establishment, connection teardown, and connection maintenance. Corson [8] predicts that a larger percentage of link capacity will be allocated to control overhead in a network with smaller and time-varying aggregate network capacity. For MANETs with dynamic topology and link capacities, the overheads of connection maintenance usually outweigh the initial cost of establishing the connection. Therefore, RSVP-like signaling is not practical in MANETs.

The time-varying feature thus makes it hard to provide short-timescale QoS by trying to keep up with the time-varying conditions. However, it could be possible to provide QoS over a long timescale for MANETs. DiffServ is aimed at providing service differentiation among traffic aggregates to customers over a long timescale. In particular, Assured Service (AS) [7] is aimed at providing guaranteed, or at least expected, throughput for applications, and it is easy to implement. AS is attractive when throughput is chosen as an important QoS parameter for MANETs. In addition, AS is more qualitatively oriented than quantitatively oriented [16], and it is difficult, if not impossible, to provide quantitative QoS in MANETs given the physical constraints. Therefore, AS has a potential usage in MANETs.

28.3 FQMM — A Flexible QoS Model for MANETs

Although DiffServ is generally favorable for application in MANETs, as discussed in the last section, it is still not straightforward to adopt this approach in MANETs, since DiffServ has been designed for fixed and relatively high-speed networks. On the other hand, it is also desirable to incorporate suitable QoS features provided by IntServ into MANETs.
This section describes a Flexible QoS Model for MANETs (FQMM) and its QoS provisioning policy, which takes into consideration the investigation in the previous section.

28.3.1 FQMM

FQMM is designed for small- to medium-sized MANETs, with fewer than 50 nodes and a flat, nonhierarchical topology. It defines three kinds of nodes, as in DiffServ:
• An ingress node is a mobile node that sends data.
• Interior nodes are the nodes that forward data for other nodes.
• An egress node is a destination node.

For example, in Fig. 28.1, eight nodes are moving about, and a route is established for communication from node M1 to M6. When data is sent from M1 to M6, M1 behaves as an ingress node, classifying, marking, and policing packets. Nodes M3, M4, and M5 along the route from M1 to M6 behave as interior nodes, forwarding data via a certain PHB defined by the DiffServ field. M6 is the egress node, which is the destination of the traffic. In this model, a MANET represents one DiffServ domain where traffic is always generated by applications running on an ingress node and terminates at an egress node.

FIGURE 28.1 FQMM example 1.

When nodes move about, another topology may form, as shown in Fig. 28.2. In this case, there are two connections: one is from M1 to M6, the other is from M8 to M2, and the roles of the nodes are listed in Table 28.1. As illustrated, the nodes have dynamic roles in FQMM.

FIGURE 28.2 FQMM example 2.

TABLE 28.1 Roles of Nodes in FQMM (Example 2)

Connection   Ingress Node   Interior Nodes   Egress Node
C1           M1             M8, M7           M6
C2           M8             M7, M5, M4       M2
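To make these dynamic roles concrete, the following small Python sketch (purely illustrative; the chapter defines no such helper) maps each node on a connection's route to the FQMM role it plays for that connection, reproducing the assignments of Table 28.1:

def fqmm_roles(route):
    """Map each node on a route to its FQMM role for that one connection."""
    roles = {route[0]: "ingress"}        # the source classifies, marks, and polices packets
    for node in route[1:-1]:
        roles[node] = "interior"         # forwards data via the PHB of the DiffServ field
    roles[route[-1]] = "egress"          # the destination of the traffic
    return roles

# Routes of the two connections of Fig. 28.2, as listed in Table 28.1
c1 = fqmm_roles(["M1", "M8", "M7", "M6"])
c2 = fqmm_roles(["M8", "M7", "M5", "M4", "M2"])
print(c1["M8"], c2["M8"])                # "interior" for C1, "ingress" for C2: roles are dynamic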
28.3.2 Hybrid Provisioning Policy

Provisioning refers to the determination and allocation of the resources needed at various points in the network. In the current Internet, provisioning is at a high level, quite coarse, and generally based on an estimation of the scale and utilization of the network. In IntServ, the provisioning policy is to realize ideal per-flow granularity by RSVP signaling along the route and reserving sufficient resources. Provisioning in DiffServ is per-class based, with static provisioning performed by human agents on behalf of service providers or users, or dynamic provisioning by signaling or measurement.

We propose a hybrid per-flow and per-class provisioning policy for FQMM. In such a scheme, traffic of the highest priority is given per-flow provisioning while other priority classes are given per-class provisioning. Although, like DiffServ, FQMM has service differentiation, we can improve the per-class granularity to per-flow granularity for certain classes of high-priority traffic, since the traffic load in a MANET is typically less than in the backbone of the Internet. However, it is difficult to provide per-flow granularity to all the traffic types in a MANET due to bandwidth limitation and other constraints. Hence, we try to preserve the per-flow granularity for a small portion of traffic types in a MANET, given that these form a small percentage of the total traffic load. Since the states of per-flow granularity come from only a small fraction of the traffic, the scalability problem as in IntServ becomes insignificant. Therefore, this hybrid scheme combines the per-flow granularity of IntServ and the per-class granularity of DiffServ.

28.4 Framework Architecture of FQMM

This section describes a framework architecture to realize FQMM (see Fig. 28.3). FQMM works in the IP layer with the cooperation of the MAC layer. There are two planes: one is the data forwarding plane, shown below the dashed line; the other is the control and management plane, shown above the dashed line. The components in the data forwarding plane handle the data packet forwarding function, while the components in the management plane prepare for the operation of the data forwarding plane according to specific protocols or algorithms. Modules in the two planes communicate either directly or via the information stored in a database. For example, the traffic conditioner configuration module and the adaptive traffic conditioner module operate via the traffic control database. The resource reservation protocol sends or attaches the reservation information to the packet forwarding module directly. We elaborate on each module in the framework architecture below.

FIGURE 28.3 Framework architecture of FQMM.
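Before turning to the individual modules, the hybrid provisioning policy of Section 28.3.2 can be illustrated with a rough Python sketch. It keeps per-flow state only for traffic of the highest-priority class and aggregates all other traffic per class; the class label and packet field names are assumptions made for the example, not part of FQMM:

HIGHEST_PRIORITY = 0      # assumed label for the small share of premium traffic

def provisioning_key(pkt):
    """Return the key under which QoS state is kept for this packet."""
    if pkt["dscp"] == HIGHEST_PRIORITY:
        # Per-flow granularity, as in IntServ, for the highest-priority traffic only.
        return ("flow", pkt["src"], pkt["dst"], pkt["flow_id"])
    # Per-class granularity, as in DiffServ, for every other class.
    return ("class", pkt["dscp"])

state = {}
for pkt in [{"dscp": 0, "src": "M1", "dst": "M6", "flow_id": 7},
            {"dscp": 2, "src": "M8", "dst": "M2", "flow_id": 3}]:
    state.setdefault(provisioning_key(pkt), {"bytes": 0})["bytes"] += 512

Because only a small fraction of the traffic contributes per-flow entries, the state kept at each node stays small, which is the point of the hybrid scheme.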
28.4.1 Data Forwarding Plane

The discriminator module classifies the data from the upper layer (transport layer) and the lower layer (MAC layer). Downstream packets coming from the transport layer are forwarded to the classifier. Upstream packets from the MAC layer are forwarded to the transport layer if the packet's destination is the current node, and to the classifier otherwise.

The classifier module distributes the control and management packets to the corresponding modules in the control and management plane, such as link measurement, resource reservation, and QoS routing. The classifier also maps incoming packets into classes by reading the DiffServ field in each packet header. For simplicity, one class corresponds to one item in the service profile, which is stored in the traffic control database. Therefore, a flow is mapped to one class for per-flow provisioned traffic, while an aggregate is mapped to one class for per-aggregate provisioned traffic.

The adaptive traffic conditioner module polices and regulates the traffic based on the service profile, which is stored in the traffic control database. Marking, shaping, and dropping can be used alone or together in the traffic conditioner. We call it adaptive because the traffic conditioner configuration module adjusts the service profile according to the link states. Thus, the traffic conditioner can adapt to the dynamics of the network.

The packet forwarding module forwards incoming packets of different classes to a suitable output port. It also accepts control packets from modules in the control and management plane, such as link measurement, the resource reservation protocol, and the QoS routing protocol. If a proactive routing protocol is used, the packet forwarding module forwards packets after checking the packet's destination address and the routing table. If a reactive routing protocol is used, this module forwards the packet according to the routing header in the packet.

The packet scheduling and buffer management module manages the queue(s) in the output port. Different scheduling schemes have been proposed in the literature, from the simplest FIFO to various complex schemes. Several researchers have also investigated the problem of how to adapt packet fair queueing algorithms to wireless networks. While the proposed solution is for a cellular wireless network model, it is not clear whether the same solution can be applied directly to MANETs. We will first study the performance of existing scheduling and buffer allocation approaches in FQMM and then find a suitable resource management scheme for the model. The configuration of the packet scheduling and buffer management module is set in the traffic control database.

28.4.2 Control and Management Plane

The link measurement module measures/monitors the states of the channel and puts the results into the link state database. It may passively monitor link states only, or also actively inject measurement packets into the network when necessary. Link measurement can be done on a wide-area basis or on a local-area basis. The former means that each node gets a picture of a rather wide area around itself by exchanging information with other nodes; the latter means that each node obtains only the channel states around itself by applying an estimation algorithm.
In existing routing protocols, this task is not done explicitly by a link measurement module, but rather together with the routing protocol.

The traffic conditioner configuration module configures the service profiles of applications and puts them into the traffic control database. Thus, the adaptive traffic conditioner module in the data forwarding plane can police the data and control traffic according to the service profile. The traffic conditioner configuration module makes decisions depending on information stored in both the link state database and the traffic control database. The operation timescales of the link measurement and traffic conditioner configuration modules affect the QoS of the applications; therefore, the timescale should be set carefully.

The admission control module is responsible for comparing the resource requirements arising from the requested QoS against the available resources. The decision whether sufficient resources are available for a new flow or an aggregate of flows at the requested QoS, without violating the existing QoS commitments to other applications, depends on resource management policies and resource availability.
Once admission testing has been successfully completed, local resources are reserved immediately and then committed later if the end-to-end admission control test is successful. Admission control is invoked by the resource reservation protocol before reservation is executed. There are various admission control algorithms, such as parameter-based and measurement-based schemes. INSIGNIA [19], for example, adopts a simple max/min admission control algorithm.

The resource reservation protocol module arranges for the allocation of suitable end-system and network resources to satisfy the user's QoS specification. In doing so, the resource reservation protocol interacts with the QoS routing protocol to establish a path through the network in the first instance; then, based on admission control at each node, end-to-end resources are allocated. Reservation can be done per flow or per class. The protocol maintains per-flow/per-class state information of the traffic and updates the traffic control database. For in-band signaling protocols for MANETs such as INSIGNIA [19], the reservation control message is integrated with the data packet. The reservation scheme also relates to the different MAC protocols.

The QoS routing protocol is responsible for finding the path for a flow or aggregate of flows and maintaining the path at the required QoS level. A proactive routing protocol, such as DSDV [23], maintains the QoS routing database, while a reactive routing protocol, such as DSR [18], maintains a routing cache. The QoS routing protocol also sends control packets directly to the packet forwarding module when necessary. The QoS routing module has a close relationship with the network architecture. A self-organized cluster structure is adopted in [15,25,26], since such a structure makes administration easier under certain conditions. On the other hand, Perkins and Royer use a flat network structure for the QoS enhancement to the AODV routing protocol [24].

Databases contain the state and control information needed for the operation of the modules. There are three databases: the link state database, the traffic control database, and the QoS routing database. The link state database is maintained by the link measurement module and visited by the traffic conditioner configuration module to perform traffic conditioner configuration. The traffic control database is maintained by the traffic conditioner configuration module and the resource reservation protocol module; the information stored there includes the service profile, the states of the reservations for flows/aggregates, and other control states used to configure the parameters of the adaptive traffic conditioner module and the packet scheduling and buffer management module. The QoS routing database is maintained by the QoS routing protocol and used by the packet forwarding module to determine routes.

The framework architecture that we have described is for a flat architecture, assuming that every node has the same position and function. In other kinds of network architecture (e.g., clustered), some special nodes (e.g., the master node in a cluster) may have more components than other nodes do. It is also possible for the functions of several components to be combined when necessary.

28.5 Evaluation of FQMM

We adopt a "from simple to complex" policy in evaluating FQMM.
The framework architecture presented in the previous section realizes the hybrid QoS provisioning policy of FQMM described in Section 28.3.2. Not all the components in the framework are needed for every kind of QoS provisioning: coarse-granularity QoS provisioning requires fewer components than fine-granularity QoS provisioning does.

28.5.1 Service Prioritization in MANET

The first evaluation is done by investigating service prioritization in MANETs. Service prioritization involves giving resource access priority to certain traffic types and has been used in networks for a long time. For example, network administration traffic typically has higher priority than normal user traffic, voice services should have higher priority than data services because of their real-time requirements, and web surfing can have higher priority than File Transfer Protocol (FTP) and e-mail applications.
Since data services are normally run over the Transmission Control Protocol (TCP) and multimedia services over the User Datagram Protocol (UDP), TCP sessions and UDP sessions are used to represent data services and multimedia services, respectively. Three kinds of service profiles are considered, viz., "priority within TCP traffic," "priority within UDP traffic," and "priority between TCP and UDP traffic" [31,32]. We elaborate only on the service profile of "priority within TCP traffic" in this chapter (please see [31] for results on the other two service profiles). This service profile is defined as follows: there are two levels of priority for FTP services, one higher than the other. This service profile is suitable for situations where some users have higher priority than other users. FTP is used here as the representative service running over TCP.

Realization of the above service profile does not require the resource reservation, QoS routing protocol, or admission control modules in the FQMM framework architecture (see Fig. 28.3), but the routing protocol is still needed. In addition, since the traffic profile is constant, the link measurement module and the link state database can also be omitted. The modules in the control plane needed to realize service prioritization are thus the traffic conditioner configuration module and the traffic control database. However, all the modules in the data forwarding plane are required. Figure 28.4 shows the modules we implement for service prioritization.

In Fig. 28.4, we adopt two schemes in the packet scheduling and buffer management module to realize service prioritization. The first is a priority buffer management scheme similar to the well-known Random Early Drop with In/Out (RIO) [7] scheme. Each packet is first marked as HIGH (priority) or LOW (priority) according to the policies in the service profile. In case of network congestion, or in anticipation of it, the scheme drops LOW packets with a higher probability than HIGH packets. The second is a priority scheduling scheme in which packets of higher priority are scheduled more urgently than packets of lower priority. When a higher priority (HIGH) packet arrives, it is inserted before all the lower priority (LOW) packets in the queue. Thus, the higher priority packets are sent before the lower priority packets. In the case of network congestion, the last packet in the queue is dropped, i.e., the tail of the queue is dropped.

We simulate ten nodes in a small MANET in a 500 × 500 square meter area. Each node moves according to the random waypoint mobility model. Five mobility patterns are generated randomly with a maximum speed of 20 meters/second, and the pause duration is varied from 0 to 300 seconds. Eight TCP sessions are generated randomly among the nodes, and these are divided into two groups: group 1 has four TCP sessions with lower priority; group 2 has four TCP sessions with higher priority. We evaluate the performance of the TCP traffic under the three priority cases listed below.
FIGURE 28.4 Components for service prioritization in the framework architecture of FQMM.
Case 1: No priority is given to any packet.
Case 2: Service prioritization is applied using the priority buffer management scheme.
Case 3: Service prioritization is applied using the priority scheduling scheme.

Figure 28.5 shows the average end-to-end delay of TCP packets in group 1 (lower priority) and group 2 (higher priority) in all three cases. The values in the figure are averages over simulations run with the five mobility patterns, and each simulation is run for 900 seconds. Compared with Case 1, Case 2 reduces the average delay of TCP sessions in both group 1 (see Fig. 28.5a) and group 2 (see Fig. 28.5b). However, Case 3 increases the delay of sessions in group 1 (see Fig. 28.5a) and reduces the delay of sessions in group 2 (see Fig. 28.5b). Furthermore, Case 3 achieves the lowest delay in group 2 among the three cases (see Fig. 28.5b). Thus, combining Figs. 28.5a and 28.5b, we observe that, in terms of average packet end-to-end delay, both Cases 2 and 3 give clear priority to the higher priority packets, and Case 3 achieves a higher degree of prioritization than Case 2.

This result can be explained by the way the priority buffer management and priority scheduling schemes arrange packets in the buffer. The priority buffer management scheme does not change the order between the higher priority packets and the lower priority packets; therefore, the higher priority packets and the lower priority packets have the same chance to access the output channel, and the average delays for the higher priority and the lower priority packets are about the same. The priority scheduling scheme, on the other hand, keeps the lower priority packets in the buffer longer than the higher priority packets by inserting the higher priority packets before the lower priority packets. Therefore, the average delay of the higher priority packets is lower than that of the lower priority packets.

Extensive simulations have been run [31], which lead to some major conclusions. First of all, for most of the cases, the priority buffer management scheme and the priority scheduling scheme work well to realize service prioritization according to the different service profiles. Therefore, the features of MANETs, particularly mobility and the lack of infrastructure, do not affect the execution of service prioritization.

FIGURE 28.5 Average delay of TCP packets under the three cases: (a) average delay of TCP packets in group 1 (lower priority); (b) average delay of TCP packets in group 2 (higher priority). (Plots of average end-to-end delay in seconds versus pause duration of node mobility in seconds.)
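To make the two queue disciplines compared above concrete, here is a minimal Python sketch. It assumes packets already marked HIGH or LOW and a single output queue of fixed capacity; the drop probabilities and queue length are illustrative assumptions, not the simulated implementation from [31]:

import random

CAPACITY = 50   # assumed queue length

def enqueue_priority_buffer(queue, pkt, congested):
    """Priority buffer management (RIO-like): arrival order is preserved, but under
    congestion LOW packets are dropped with a higher probability than HIGH packets."""
    if len(queue) >= CAPACITY:
        return False                                        # queue full: drop the arrival
    drop_prob = {"HIGH": 0.05, "LOW": 0.5} if congested else {"HIGH": 0.0, "LOW": 0.0}
    if random.random() < drop_prob[pkt["prio"]]:
        return False                                        # early drop, biased against LOW
    queue.append(pkt)
    return True

def enqueue_priority_scheduling(queue, pkt):
    """Priority scheduling: a HIGH packet is inserted before all LOW packets, so HIGH
    packets are always transmitted first; on overflow the tail of the queue is dropped."""
    if len(queue) >= CAPACITY:
        queue.pop()                                         # drop the tail of the queue
    if pkt["prio"] == "HIGH":
        pos = next((i for i, p in enumerate(queue) if p["prio"] == "LOW"), len(queue))
        queue.insert(pos, pkt)
    else:
        queue.append(pkt)
    return True

The sketch shows why the measured delays differ: the first scheme never reorders packets, so HIGH and LOW packets see similar queueing delay, while the second keeps LOW packets behind every HIGH packet.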
We also compare the performance results of the two priority schemes under the three service profiles. For prioritization within TCP traffic, the priority schemes work well together with the congestion and flow control protocol of TCP. Simulation results show that the priority scheduling scheme outperforms the priority buffer management scheme in terms of the average throughput of TCP sessions and the average end-to-end delay of TCP packets. For prioritization within UDP traffic and prioritization between UDP and TCP traffic, UDP traffic does not respond to congestion and routing failure. When network congestion is enforced, both the priority buffer management scheme and the priority scheduling scheme work well, and the former outperforms the latter in terms of the average goodput and loss rate of UDP sessions. However, when the network is not congested, service prioritization of UDP traffic is not achieved, due to the overwhelming effects of routing failure. To our understanding, this implies that routing is one of the basic issues to be dealt with in MANETs.

28.5.2 Service Differentiation in MANET

The second evaluation is done together with an investigation of service differentiation in MANETs. Service differentiation means that traffic is differentiated into a set of classes, and network nodes provide priority-based treatment based on these classes without explicit resource reservation. Because of its scalability and relative simplicity, service differentiation is one of the major approaches to providing QoS in networks, e.g., DiffServ for the Internet. Service differentiation can be further categorized into absolute service differentiation and relative service differentiation. In absolute service differentiation, an admitted user/class is assured of the requested performance level, such as packet delay, jitter, throughput, and loss rate. The disadvantage is that a user will be rejected if the required resources are not available and the network cannot provide the requested assurances. In relative service differentiation, on the other hand, the assurance from the network is no longer an absolute metric for one class, but a relative relationship between/among classes.

We first study the performance of TCP with AS [7] in MANETs with absolute bandwidth differentiation, where the service profile is defined as an absolute target rate for TCP sessions. Simulation results [34] show that service differentiation in MANETs is possible but the differentiation is not consistent, i.e., the required absolute target rate fails to remain the same over time. This is because network conditions in a MANET, e.g., mobility, topology, and the link capacity a node sees when it wants to send packets, change dynamically. The absolute differentiation scheme does not adapt to the dynamic network conditions in MANETs, which results in the differentiation inconsistency.

A relative differentiation scheme for MANETs is therefore proposed and evaluated in FQMM. The service profile is defined as a relative target rate, which is a percentage of the effective link capacity and ranges between 0 and 1. The effective link capacity is the available bandwidth a node can use to send out packets. It depends on many factors in MANETs, such as power constraints, node mobility, dynamic topology, collisions, routing, and traffic load.
To realize relative service differentiation in the FQMM framework (shown in Fig. 28.3), the admission control module, the QoS routing protocol and QoS routing database, and the resource reservation protocol module are not required. However, the link measurement module and the link state database are required to obtain the effective link capacity. The components used to realize TCP with AS using relative service differentiation are shown in Fig. 28.6.

In Fig. 28.6, the packet scheduling and buffer management module adopts the RIO [7] scheme. The traffic conditioner module is realized by a token bucket profile meter that marks each packet as IN or OUT. The parameters of the token bucket, i.e., the token rate ρ and the bucket length β, are adaptively adjusted by the traffic conditioner configuration module. The parameters of the token bucket profile meter of a certain session i are as follows:

ρi = γi × Ct × R    (28.1)
βi = γi × Ct × L    (28.2)
where γi is the relative target rate of session i, ρi is the token generation rate, βi is the token bucket length, Ct is the effective link capacity, and R and L are constants.

FIGURE 28.6 Components for service differentiation in the framework architecture of FQMM.

The link measurement module is in charge of determining the effective link capacity Ct, which is not an easy task. A simple case study using a string topology is presented in [34]. There, since node movement is restricted to along the string (see Fig. 28.7), the effects of both mobility and routing can be ignored. It is further assumed that one-way TCP sessions are generated only between the two terminal nodes of the string, and that power constraints have no effect. TCP reacts to loss and congestion, although it treats packet loss due to transmission errors on the wireless link as congestion. From previous simulation results, it is found that the total TCP throughput changes with the number of hops that the TCP sessions cross; the main factor that affects the total throughput is therefore the number of hops that the TCP sessions cross. Therefore, Ct = Nh, where Nh is the total TCP throughput when the TCP sessions cross h hops.

We simulate six nodes forming a string topology. Four TCP sessions are generated from node 0 to node 5. The four sessions are divided into two groups, and each group has two sessions. Sessions in group 1 have a relative target rate of 0.4 and sessions in group 2 have a relative target rate of 0.1. Thus, the four sessions together have a total target rate of 100% of the available bandwidth. All the nodes are static during the simulation except node 0, which moves along the string. Simulations are run for 900 seconds. Once the simulation is started, node 0 moves towards node 5 at a constant rate, then turns back towards its original position when it meets node 5. As a result, the number of hops that the TCP sessions travel varies with the simulation time, as shown in Fig. 28.8a. Node 0 adapts the token bucket profiler according to the number of hops between itself and node 5. For simplicity, the adaptive token bucket profiler is inserted into the scenario file used in the NS simulator. The TCP throughput of each session is measured every 10 seconds during the simulation.

FIGURE 28.7 String topology (a chain of nodes from TCP source 0 to TCP sink N over n hops).
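The adaptive token bucket profile meter of Eqs. (28.1) and (28.2) can be sketched in Python as follows; the constants R and L and the per-hop capacity table are assumptions made for the example, not the values used in [34]:

R, L = 1.0, 0.5                              # constants of Eqs. (28.1) and (28.2); assumed values
N_h = {1: 480.0, 2: 240.0, 3: 160.0,
       4: 120.0, 5: 100.0}                   # assumed total TCP throughput (kbps) per hop count

class TokenBucketProfiler:
    def __init__(self, gamma, hops=1):
        self.gamma = gamma                   # relative target rate of the session (between 0 and 1)
        self.tokens = 0.0
        self.reconfigure(hops)

    def reconfigure(self, hops):
        """Adapt the parameters to the current effective link capacity Ct = N_h."""
        c_t = N_h[hops]
        self.rho = self.gamma * c_t * R      # token generation rate, Eq. (28.1)
        self.beta = self.gamma * c_t * L     # token bucket length, Eq. (28.2)

    def mark(self, pkt_kbits, dt):
        """Accumulate tokens over the elapsed time dt and mark the packet IN or OUT."""
        self.tokens = min(self.beta, self.tokens + self.rho * dt)
        if self.tokens >= pkt_kbits:
            self.tokens -= pkt_kbits
            return "IN"                      # in profile; protected by RIO
        return "OUT"                         # out of profile; dropped first by RIO

In the string-topology experiment, node 0 would call reconfigure() whenever the hop count to node 5 changes, which is the adaptation step described above.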
FIGURE 28.8 Relative service differentiation. Four TCP sessions are divided into two groups with target rates of 0.4 and 0.1, respectively. The simulation runs for 900 seconds. (a) Number of hops versus simulation time; (b) relative TCP throughput; (c) absolute TCP throughput (kbps).

Figure 28.8b presents the relative TCP throughput of each session. When the TCP sessions cross one hop (from 400s to 500s) and two hops (from 300s to 400s and from 500s to 600s), the TCP throughput of sessions in group 1 and group 2 is clearly differentiated. In addition, the achievable throughput of each session is stable and does not vary much. However, when the TCP sessions cross three hops (from 200s to 300s and from 600s to 700s) and four hops (from 100s to 200s and from 700s to 800s), the throughput fluctuates much more. Worse still, the throughputs of the two groups cannot be differentiated when the TCP sessions cross five hops (from 0s to 100s and from 800s to 900s). This performance degradation is caused by the fact that the more hops a TCP session crosses, the higher the probability that TCP packets and ACKs are dropped or collide with each other along the path. These factors make it difficult for the traffic conditioner and RIO to control the TCP throughput differentiation.

As a whole, Fig. 28.8b shows that the throughput of sessions belonging to the two groups is differentiated, and the differentiation is consistent as the number of hops varies from one up to four. Sessions
in group 1 achieve a throughput around a value smaller than their target rate of 0.4, and sessions in group 2 achieve a throughput around a value larger than their target rate of 0.1. Fig. 28.8c presents the absolute throughput of the four TCP sessions. It again confirms that the service profile cannot be kept consistent if absolute service differentiation is used.

28.6 Conclusion and Future Work

We have described FQMM, a Flexible QoS Model for MANETs, after investigating the suitability of the Internet IntServ and DiffServ models for MANETs. To our knowledge, this is the first such QoS model proposed for MANETs. A framework architecture to realize FQMM has also been presented. Each component in the framework architecture is described, and the interactions between components are discussed. Two evaluations of FQMM are presented, i.e., the performance of service prioritization and of service differentiation in MANETs. We say that the QoS model is flexible because of its features:

1. Nodes have dynamic roles.
2. The provisioning policies are hybrid and flexible.
3. The components of the framework that realize FQMM can be combined in a flexible manner to meet different network situations and QoS requirements.

To realize the hybrid QoS provisioning policy, FQMM should first provide integrated services. The supporting modules, such as the resource reservation protocol, QoS routing protocol, and admission control, will be implemented in the future. We may use existing reservation protocols such as INSIGNIA [19]. The QoS routing protocol may extend existing best-effort routing protocols that adopt a flat architecture, such as Dynamic Source Routing (DSR) [18] or Ad hoc On-demand Distance Vector (AODV) routing [24], to include QoS options. After implementing the integrated services in FQMM, we can realize the hybrid QoS provisioning policy. We will compare the performance of the hybrid provisioning policy with the pure integrated service and pure differentiated service policies in terms of system complexity, flexibility, and ability to meet QoS requirements.

One missing piece of QoS support in FQMM is the QoS MAC protocol. The IEEE 802.11 MAC protocol has been used in the simulations; however, this is a MAC protocol without any QoS option. As the MAC protocol is blind to the higher layer QoS mechanisms, the whole system is less efficient than would be expected from a protocol that is QoS aware. In the future, we will define a QoS MAC protocol for FQMM.

References

[1] Alwan, A., Bagrodia, R., Bambos, N., Gerla, M., Kleinrock, L., Short, J., and Villasenor, J., Adaptive Mobile Multimedia Networks, IEEE Personal Communications, Apr. 1996, pp. 34–51.
[2] Black, D., Brim, S., Carpenter, B., and Le Faucheur, F., Per Hop Behavior Identification Codes, IETF RFC 3140, June 2001.
[3] Blake, S., An Architecture for Differentiated Services, IETF RFC 2475, Dec. 1998.
[4] Braden, R., Clark, D., and Shenker, S., Integrated Services in the Internet Architecture: An Overview, IETF RFC 1633, June 1994.
[5] Chen, S. and Nahrstedt, K., Distributed quality-of-service routing in ad hoc networks, IEEE Journal on Selected Areas in Communications, 17, 1488–1505, 1999.
[6] Chen, T., Gerla, M., and Tsai, J.T., QoS Routing Performance in a Multi-Hop Wireless Network, Proceedings of IEEE ICUPC '97, 1997.
[7] Clark, D.D. and Fang, W., Explicit allocation of best-effort packet delivery service, IEEE/ACM Transactions on Networking, 6, 362–373, Aug. 1998.
[8] Corson, M.S., Issues in Supporting Quality of Service in Mobile Ad Hoc Networks, IFIP 5th Int. Workshop on Quality of Service (IWQoS '97), May 1997.
[9] Corson, M.S. and Campbell, A.T., Towards Supporting Quality of Service in Mobile Ad Hoc Networks, Proceedings of the First Conference on Open Architecture and Network Programming, San Francisco, Apr. 1998.
[10] Crawley, E., Nair, R., Rajagopalan, B., and Sandick, H., A Framework for QoS-Based Routing in the Internet, IETF RFC 2386, Aug. 1998.
[11] Crowcroft, J. and Wang, Z., A Rough Comparison of the IETF and ATM Service Models, IEEE Network, Nov. 1995, pp. 12–16.
[12] Floyd, S. and Jacobson, V., Link-sharing and resource management models for packet networks, IEEE/ACM Transactions on Networking, 3, 365–386, Aug. 1995.
[13] QoS Forum, QoS Protocols and Architectures, White Paper of the QoS Forum, July 1999, http://www.qosforum.com.
[14] Gevros, P., Crowcroft, J., Kirstein, P., and Bhatti, S., Congestion Control Mechanisms and Best Effort Service Model, IEEE Network, May 2001, pp. 16–26.
[15] Hsu, Y.C., Tsai, T.C., and Lin, Y.D., QoS Routing in Multihop Packet Radio Environment, Proceedings of IEEE ISCC '98, 1998, pp. 582–586.
[16] Ibanez, J. and Nichols, K., Preliminary Simulation Evaluation of an Assured Service, IETF Internet Draft, work in progress, Aug. 1998.
[17] Iwata, A., Chiang, C.C., Yu, G., Gerla, M., and Chen, T.W., Scalable routing strategies for ad hoc wireless networks, IEEE Journal on Selected Areas in Communications, 17, 1369–1379, 1999.
[18] Johnson, D.B., Routing in Ad Hoc Networks of Mobile Hosts, IEEE Workshop on Mobile Computing Systems and Applications, Dec. 1994, pp. 158–163.
[19] Lee, S.B. and Campbell, A.T., INSIGNIA: In-band Signaling Support for QoS in Mobile Ad Hoc Networks, 5th Int. Workshop on Mobile Multimedia Communications (MoMuC '98), Berlin, Oct. 1998.
[20] Lin, C.R. and Gerla, M., Asynchronous Multimedia Multihop Wireless Networks, Proceedings of IEEE INFOCOM '97, Kobe, Japan, Apr. 1997, pp. 118–125.
[21] Lin, C.R. and Liu, J.S., QoS routing in ad hoc wireless networks, IEEE Journal on Selected Areas in Communications, 17, 1426–1438, 1999.
[22] Nichols, K., Blake, S., Baker, F., and Black, D., Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, IETF RFC 2474, Dec. 1998.
[23] Perkins, C.E. and Bhagwat, P., Highly dynamic destination-sequenced distance-vector routing (DSDV) for mobile computers, Proceedings of ACM SIGCOMM '94, London, Aug. 1994, pp. 234–244.
[24] Perkins, C.E. and Royer, E.M., Ad Hoc On-Demand Distance Vector Routing, IEEE WMCSA '99, New Orleans, 1999, pp. 90–100.
[25] Ramanathan, R. and Steenstrup, M., Hierarchically organized multihop mobile wireless networks for quality of service support, Mobile Networks and Applications, 3, 101–119, 1998.
[26] Sinha, P., Sivakumar, R., and Bharghavan, V., CEDAR: A Core-Extraction Distributed Ad Hoc Routing Algorithm, Proceedings of IEEE INFOCOM '99, New York, May 1999, pp. 202–209.
[27] Tang, Z. and Garcia-Luna-Aceves, J.J., Hop-Reservation Multiple Access (HRMA) for Ad Hoc Networks, Proceedings of IEEE INFOCOM '99, 1999, pp. 194–201.
[28] Tang, Z. and Garcia-Luna-Aceves, J.J., A Protocol for Topology-Dependent Transmission Scheduling in Wireless Networks, Proceedings of WCNC '99, 1999, pp. 1333–1337.
[29] Terry, D.B., Towards a Quality of Service Model for Replicated Data Access, Second International Workshop on Services in Distributed and Networked Environments, 1995, pp. 118–121.
[30] Tsai, J.
and Gerla, M., Multicluster mobile multimedia radio network, ACM-Baltzer Journal of Wireless Networks, 1, 255–265, 1995.
[31] Xiao, H., A Flexible Quality of Service Model for Mobile Ad Hoc Networks, Ph.D. thesis, National University of Singapore, Mar. 2002.
[32] Xiao, H., Chua, K.C., Seah, K.G., and Lo, A., On Service Prioritization in Mobile Ad Hoc Networks, IEEE ICC 2001, Helsinki, June 2001.
[33] Xiao, H., Seah, K.G., Lo, A., and Chua, K.C., A Flexible Quality of Service Model for Mobile Ad-hoc Networks, Proceedings of IEEE VTC 2000-Spring, Tokyo, May 2000.
[34] Xiao, H., Seah, K.G., Lo, A., and Chua, K.C., On service differentiation in multihop wireless networks, ITC Specialist Seminar on Mobile Systems and Mobility, Lillehammer, Norway, Mar. 2000, pp. 1–12.
[35] Zhang, L., Deering, S., Estrin, D., Shenker, S., and Zappala, D., RSVP: A New Resource ReSerVation Protocol, IEEE Network, Sep. 1993, pp. 8–18.
[36] Zhu, C. and Corson, M.S., A Five-Phase Reservation Protocol (FPRP) for Mobile Ad Hoc Networks, Proceedings of IEEE INFOCOM '98, San Francisco, CA, 1998, pp. 322–331.
29 Scheduling of Broadcasts in Multihop Wireless Networks

Jang-Ping Sheu, National Central University
Pei-Kai Hung, National Central University
Chih-Shun Hsu, National Central University

Abstract
29.1 Introduction
29.2 Preliminary Information
    Set Covering Scheme • Independent-Transmission-Set (IT-Set) Scheme
29.3 Two Broadcast Protocols
    Protocol 1 • Protocol 2
29.4 All-to-All Broadcast Protocols
29.5 Simulation Results
29.6 Conclusions
References

Abstract

Broadcast is an important operation in wireless networks. However, broadcasting by naïve flooding causes severe contention, collision, and congestion, which is called the broadcast storm problem. Many protocols have been proposed to solve this problem: some investigations focus on collision avoidance yet neglect the reduction of redundant rebroadcasts and broadcasting latency, while other studies focus on reducing redundant rebroadcasts yet pay little attention to collision avoidance. Two one-to-all broadcast protocols based on two schemes are proposed herein. The set-covering scheme reduces redundant rebroadcasts, and the independent-transmission-set scheme avoids collisions and reduces latency. Furthermore, an all-to-all broadcast protocol is presented based on the one-to-all protocols. Simulation results show that the novel broadcast protocols are efficient and can achieve high reachability.

29.1 Introduction

Due to advances in wireless communication technology and portable devices, wireless communication systems have recently become increasingly widespread. A wireless network is a collection of hosts that communicate with each other via radio channels. The hosts can be static, such as base stations in packet radio networks, or mobile, such as notebook computers in mobile ad hoc networks (MANETs). If all hosts in a wireless network can communicate with each other directly, the network is single hop; otherwise it is multihop. Broadcast is an important operation in all kinds of networks. However, due to the limited transmission range and bandwidth of a radio channel, the broadcast protocol must be designed carefully to avoid severe contention, collision, and congestion, known as the broadcast storm problem [10].
Broadcast problems in wireless networks have been studied extensively in the literature [1–3,5,7–16]. References [1,5,7–9,12,13,15] attempt to design collision-free neighboring broadcast protocols and model the broadcast scheduling problem as a graph-coloring problem. The colors in the graphs represent the channels assigned to hosts (which could be slots, frequencies, or codes). Since no host can be assigned the same color (channel) as any of its neighbors within two hops, collisions can be avoided. The research goal of the protocols presented in [12,13] is to minimize the number of assigned colors (channels), while the protocols presented in [5,15] aim to increase total throughput. Using a different approach, two collision-free protocols for one-to-all broadcast are proposed in [2], one centralized and the other distributed. In the centralized scheme, the source host schedules the transmission sequence using knowledge of the global network topology. Unlike in the graph-coloring problem, a host can simultaneously use the same channel as its neighbors within two hops, as long as no collision occurs among the receiving hosts. In the distributed scheme, the source host follows the depth-first search tree to pass a token to every host in the network. After receiving the token, a host learns which time slots are collision free and can then decide its transmission sequence based on this information.

The above broadcast protocols aim to alleviate the broadcast storm problem: some works [1,2,5,7–9,12,13,15] focus on avoiding collisions but pay little attention to reducing redundant rebroadcasts and broadcasting latency; other works [10,11,16] try to reduce redundant rebroadcasts but cannot guarantee a collision-free broadcast. Here, we propose two efficient one-to-all broadcast protocols, one for low-mobility packet radio networks and one for high-mobility MANETs. Both protocols use the set-covering scheme to reduce redundant rebroadcasts and the independent-transmission-set (IT-set) scheme to avoid collisions and reduce broadcasting latency. The broadcast protocol for packet radio networks is efficient and collision free but depends on gathering the global network topology. While the broadcast protocol for MANETs is less efficient and cannot guarantee collision-free broadcasts, it requires only information on two-hop neighbors. Additionally, an all-to-all broadcast protocol based on the one-to-all protocol is presented herein. Simulation results show that the broadcast protocols presented herein are more efficient than other broadcast protocols in terms of reachability, rebroadcast ratio, and broadcasting latency.

The rest of this chapter is organized as follows. Section 29.2 describes the system model and introduces the two schemes on which the proposed protocols are based. Section 29.3 then proposes two one-to-all broadcast protocols, while Section 29.4 presents an all-to-all broadcast protocol. Section 29.5 shows simulation results. Conclusions are presented in Section 29.6.

29.2 Preliminary Information

The network is modeled as a graph G = (V, E), where V represents the set of nodes (hosts) in the network and E is the set of links. If u, v ∈ V, then an edge e = (u,v) ∈ E exists if and only if u is in the transmission range of v and vice versa (all links in the graph are assumed to be bidirectional, i.e., if u is in the transmission range of v, then v is also in the transmission range of u).
We make the following assumptions. The length of the broadcast packet is fixed. While broadcasting, no carrier-sense or collision-avoidance procedure is executed before transmission. The network is assumed to be connected; if it is partitioned, each component is treated as an independent network.

29.2.1 Set-Covering Scheme

An example is presented before the set-covering scheme is described formally. In Fig. 29.1, broadcast source s has information on its one-hop neighbors (a, b, c, d, e) and two-hop neighbors (f, g, h, i, j, k, l). When node s broadcasts a packet m, the packet will be received by all of its one-hop neighbors. However, it is not necessary for all one-hop neighbors to forward packet m, since if only nodes a, c, and e forward m, they can still cover all the two-hop neighbors of s. Set covering is designed to choose the minimum set of one-hop neighbors of source s required to cover all the two-hop neighbors of s. The nodes in this minimal set serve as the relay nodes of s, thus reducing redundant rebroadcasts.

FIGURE 29.1 An example of a set-covering problem.
The set-covering problem is defined as follows. Let (s, α, β) represent an instance of the set-covering problem, where s denotes the source node, α represents the set of s's one-hop neighbors, and β is the set of s's two-hop neighbors. The set β can be represented as

β = ∪_{h ∈ α} N(h)

where h ∈ α is a one-hop neighbor of s and N(h) = {n | n ∉ α and e(n, h) ∈ E}. The set-covering scheme aims to find a minimal set C such that the neighbors of the nodes in this set cover all nodes in β. In Fig. 29.1, the minimal set C = {a, c, e} because the neighbors of the nodes in {a, c, e} contain all nodes in β. Since the set-covering problem is NP-hard [4] (the well-known vertex-cover problem reduces to it), a greedy algorithm proposed in [4] is used to solve it. The greedy algorithm works by picking, at each iteration, the node that covers the most remaining uncovered elements of β. The greedy algorithm is shown as follows:

Algorithm: Greedy-Set-Cover(α, β)
Input: α: a set of s's one-hop neighbors, and β: a set of s's two-hop neighbors.
Output: Set_Cover: a set of nodes whose neighbors cover set β.
Begin
  Set_Cover = {};
  while β is not empty
    Select a node h ∈ α that maximizes the size of |N(h) ∩ β|;
    Remove the nodes in N(h) from β;
    Add h to Set_Cover;
  end while
  return Set_Cover;
End
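As a rough illustration, the greedy selection above might be sketched in Python as follows. This is a sketch under our own naming, not code from the chapter; N is assumed to map each node h ∈ α to the set N(h) defined above.

# Greedy set covering, as described above; all names here are illustrative.
def greedy_set_cover(alpha, beta, N):
    uncovered = set(beta)       # two-hop neighbors not yet covered
    candidates = set(alpha)     # one-hop neighbors not yet chosen
    set_cover = []
    while uncovered and candidates:
        # Pick the candidate covering the most remaining uncovered nodes.
        h = max(candidates, key=lambda x: len(N[x] & uncovered))
        candidates.discard(h)
        uncovered -= N[h]
        set_cover.append(h)
    return set_cover

# Hypothetical usage:
# greedy_set_cover({'a', 'b'}, {'f', 'g'}, {'a': {'f'}, 'b': {'f', 'g'}}) returns ['b'].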
29.2.2 Independent-Transmission-Set (IT-Set) Scheme

Consider a three-layer graph as shown in Fig. 29.2, which is derived from Fig. 29.1: the node in layer 0 is the source node s, and the nodes in layers 1 and 2 are the one-hop and two-hop neighbors of the source node, respectively. The set {a, c, e} in layer 1 is a minimal set covering all nodes in layer 2. Since nodes a, c, and e have no common neighbors in layer 2, they can broadcast packets simultaneously without any collision occurring in layer 2. Consider another case in Fig. 29.3. The set {b, a, c, d} is a minimal set covering the nodes in layer 2. However, nodes a and b have a common neighbor g in layer 2, and nodes b and c also have a common neighbor i in layer 2. If nodes a, b, and c transmit a packet simultaneously, collisions will occur at nodes g and i.

FIGURE 29.2 A three-layer graph derived from Figure 29.1.
FIGURE 29.3 An example of the IT-set problem.

Consequently, the independent-transmission-set (IT-set) scheme is used to arrange the transmission sequences of the nodes in layer 1 so that packet collisions in layer 2 are avoided. The IT-set scheme works as follows. Given a three-layer graph, the set-covering scheme is first used to find the minimal covering set, Set_Cover. The nodes in Set_Cover are then taken one at a time to decide their transmission sequence. The first node taken forms a new independent-transmission (IT) set; each subsequent node taken from Set_Cover is compared with the nodes in the existing IT sets. If the taken node has no common neighbor in layer 2 with any node of a compared IT set, it joins that IT set; otherwise, it forms a new IT set. Once a new node joins an IT set, the IT sets are sorted in descending order by the number of layer-2 nodes covered by their members. IT sets with larger coverage numbers are compared earlier and thus have a greater chance of including more layer-1 nodes. Since the nodes in the same IT set share no common neighbors in layer 2, they can broadcast simultaneously. The IT-set scheme therefore aims to minimize the number of IT sets and thereby reduce broadcasting latency.

For example, executing the IT-set scheme on Fig. 29.3 obtains two IT sets, {a, c, d} and {b}. Initially, the first node of Set_Cover, b, is taken; node b forms a new set, {b}. Node a is then compared to node b. Since nodes a and b share a common neighbor g in layer 2, node a forms a new set, {a}. Node c is then taken and compared with the two IT sets, {b} and {a}. Nodes c and b have a common neighbor (i) in layer 2, but nodes c and a have no common neighbor in layer 2. Therefore, node c joins set {a}, and the IT sets are sorted according to their coverage of layer-2 nodes; the two ordered sets are {a, c} and {b}. Next, node d is taken and compared to the nodes in set {a, c}. Since nodes a, c, and d share no common neighbors in layer 2, node d joins set {a, c}. Finally, Set_Cover is empty, and two ordered IT sets are obtained: {a, c, d} and {b}.

Before presenting the formal IT-set algorithm, we define the following notation. Let ITS(i) denote the i-th IT set. A node in ITS(i) delays forwarding the packet for i−1 time_units after it receives the broadcast packet, where a time_unit is the time spent to forward a broadcast packet. Let Coverage_of_ITS(i) denote the number of layer-2 nodes covered by the nodes in ITS(i), and let Num_of_ITS be the number of IT sets.
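Before the formal pseudocode, the grouping and reordering just described might be sketched in Python as follows. The naming is ours and purely illustrative; layer2_nbrs is assumed to map each relay node to its set of layer-2 neighbors.

# A sketch of the IT-set grouping described above; names are illustrative.
def find_its(set_cover, layer2_nbrs):
    its_list = []  # IT sets, kept sorted by layer-2 coverage (descending)
    for node in set_cover:
        placed = False
        for its in its_list:
            # Join the first IT set whose members share no layer-2 neighbor with node.
            if all(not (layer2_nbrs[node] & layer2_nbrs[m]) for m in its):
                its.append(node)
                placed = True
                break
        if not placed:
            its_list.append([node])
        # Re-sort the IT sets by the number of layer-2 nodes they cover.
        its_list.sort(key=lambda s: len(set().union(*(layer2_nbrs[m] for m in s))),
                      reverse=True)
    return its_list

# Nodes in its_list[0] forward immediately; nodes in its_list[i] wait i time_units.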
The scheme is formally presented as follows:

Algorithm: Find-ITS(Set_Cover)
Input: Set_Cover: a set of nodes output from the set-covering scheme.
Output: ITS(): an array of IT sets.
Initial: Num_of_ITS = 0;
Step 1:
  if Set_Cover = {} then
    Return ITS();
  else
    Let f be the first node listed in Set_Cover;
    for i = 1 to Num_of_ITS
      if f and every node in ITS(i) have no common neighbor in layer 2 then
        Add f to ITS(i);
        goto Step 3;
      end if
    end for
  end if
Step 2:
  Num_of_ITS = Num_of_ITS + 1;
  Create a new set ITS(Num_of_ITS);
  Add f to ITS(Num_of_ITS);
Step 3:
  Remove f from Set_Cover;
  for i = 1 to Num_of_ITS
    Calculate Coverage_of_ITS(i);
  end for
  Sort ITS() in descending order according to the value of Coverage_of_ITS();
  goto Step 1;

29.3 Two Broadcast Protocols

This section presents two broadcast protocols designed for different network assumptions. In a packet radio network with low mobility, each node is assumed to know the network topology, and protocol 1 is proposed to schedule the broadcast by assigning each node a collision-free transmission sequence. In a MANET with high mobility, nodes are assumed to have information only about their two-hop neighbors; protocol 2 is proposed to broadcast packets rapidly and efficiently.

29.3.1 Protocol 1

A collision-free one-to-all broadcast protocol (protocol 1) is proposed for a packet radio network in which the network topology is known in advance. Using this global information, each node can run protocol 1 to determine its transmission sequence and calculate its waiting time, and thus each node knows when to forward the received packet to avoid collisions. The assignment of the transmission sequence for each node originates from the source node. The source node is first assigned transmission sequence "0", and its one-hop and two-hop neighbors are then put in sets α and β, respectively. In each round, the set-covering scheme is applied to pick the minimal relay nodes in α, and the IT-set scheme is used to divide these relay nodes into different transmission sets. After applying the IT-set scheme, the nodes in ITS(1), the IT set with the largest coverage number, can transmit immediately upon receiving the broadcast packet. Accordingly, the nodes in ITS(1) are scheduled and assigned transmission sequence "1", while the nodes in the other IT sets are put into set U and considered in the next round. Before executing the next round, the sets α and β are updated as follows: the nodes in U and the unscheduled nodes connected to the scheduled nodes in ITS(1) are put into α, and the unscheduled nodes connected to the nodes in α are put into β. In the next round, the set-covering scheme and IT-set scheme are repeated. Nodes in ITS(1) are assigned to the following transmission sequence,