
Beyond the OSI model

Tom Nolle

The 7-layer OSI model was born in the late 1970s, when an organisation called the CCITT (the International Telegraph and Telephone Consultative Committee, now the ITU-T) published a document called the "Basic Reference Model for Open Systems Interconnection." Over time, the higher layers of the model have been discussed less and less often, and some of the lower layers have been split into sub-layers.

But OSI terminology persists to this day, and many of the foundations of the model are as valid now as they were more than 30 years ago. Foremost among those "still-relevant" ideas is the notion of layered protocols.

The OSI model holds that a "network" is best conceptualised as a set of layers, each of which compartmentalises the issues and standards associated with a specific mission. For example, the Physical layer (Layer 1) addresses the way network data is impressed on physical media such as copper, fiber or RF. Each layer uses the facilities of the layer below it, which ensures that standards build on one another and don't replicate (or contradict) other work.
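To make the layering principle concrete, here is a minimal sketch in Python; the layer objects and header strings are hypothetical, purely for illustration. Each layer wraps the payload it receives in its own header and hands it only to the layer directly below:

class Layer:
    def __init__(self, name, below=None):
        self.name = name
        self.below = below                      # the only layer this one may call

    def send(self, payload):
        frame = f"[{self.name}-hdr]{payload}"   # add this layer's own header
        if self.below:
            return self.below.send(frame)       # delegate downward...
        return frame                            # ...until we reach the physical medium

physical = Layer("L1-Physical")
datalink = Layer("L2-DataLink", below=physical)
network = Layer("L3-Network", below=datalink)

print(network.send("user-data"))
# [L1-Physical-hdr][L2-DataLink-hdr][L3-Network-hdr]user-data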

The model also drew a boundary line. Layers one through three were "the network." Layers four through seven resided in the user domain, and were thus consumers of the network. That boundary is as valid today as it was when the model was defined.

Layered protocols are important to network operators for two basic reasons: they allow operators to offer network services at different layers to match user/application requirements, and they allow for effective partitioning of network technology and planning/management issues. Protocol layers also introduce an interdependence of technology, however, that may not be fully accommodated in the network management process, particularly in fault, configuration and performance management.

Finally, while virtually all networks have Layer 1 through Layer 3 technology deployed, the technology at a given layer may be combined with that of higher layers in a single device, or divided so that each layer has its own independent device complement. The divided approach obviously raises equipment costs, and so requires some justification in service or operations benefits.

As protocols get smarter, where should functions live?

Historically, the lower protocol layers (below Layer 3) have been relatively "dumb," and most network functionality necessarily migrated to the higher layers. The maturation of standards like Carrier Ethernet (a Layer 2 architecture) has raised the question of where specific functions and features of a layered network should reside in the first place.

Provider Backbone Transport (PBT), for example, offers good traffic engineering and route management facilities. These are also offered by MPLS. In a network that implements IP over Ethernet, which layer does the traffic engineering? The answer may lie in the specific benefits each layer can bring, but also in whether a decision to offer network services at a given layer implies that the layer must provide its own traffic management and QoS. A network that expects to offer QoS for Ethernet services cannot easily cede traffic engineering to MPLS, because a layer's features are available to the layers above it but not to the layers below.
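A small Python sketch may help here, with an assumed (and much simplified) IP-over-MPLS-over-Ethernet stack and invented feature names: a layer can draw on its own features and those of the layers beneath it, never those above.

LAYER_FEATURES = {                      # hypothetical stack, bottom to top
    "Ethernet/PBT": {"traffic-engineering", "oam"},
    "MPLS": {"traffic-engineering", "fast-reroute"},
    "IP": {"routing"},
}
STACK = list(LAYER_FEATURES)            # index 0 is the bottom layer

def available_features(layer):
    """Features usable by `layer`: its own plus everything below it."""
    usable = set()
    for name in STACK[: STACK.index(layer) + 1]:
        usable |= LAYER_FEATURES[name]
    return usable

print("fast-reroute" in available_features("IP"))            # True: MPLS is below IP
print("fast-reroute" in available_features("Ethernet/PBT"))  # False: MPLS is above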

Optical enhancements such as reconfigurable optical add-drop multiplexers (ROADMs) and hybrid optical/Ethernet switches have increased the intelligence of what was once a "dumb" set of pipes, to the point where some network operators prefer to do fault recovery at the optical level and even to offer purely optical services.

The question of which services a network is to offer is fundamental to deciding which layers to emphasise when building it and which features should reside at which layers. A rich Layer 2 service set implies that specific Layer 2 features will be needed, and some of those features may then reduce the need for comparable capabilities at Layer 3. However, it is also true that many Layer 2 services can be deployed over Layer 3 networks: IP pseudowires can mimic virtual circuits and leased lines, and VPNs can mimic virtual LANs. Thus, it may be the traffic and revenue balance expected among the layers that makes the final choice on how to deploy services.

The geographic scale of a network may also impact the decision of whether to build robust Layer 1 or 2 networking under a traditional IP (Layer 3) structure. Providing optical redundancy may mean creating ring structures or using ROADMs to create a reconfigurable optical network. This is practical in a confined metro geography, but less so on an international scale.

Layered protocols can complicate problem management of any sort to an almost catastrophic extent unless care is taken from the start, beginning with network planning and focusing especially on management practices. Remember that, according to OSI principles, any layer of the network uses only the services of the layer below. When a problem occurs, the failure necessarily propagates up the stack, but the ability to remedy it may not.

For example, a fiber cut is a failure of the physical medium, Layer 1, but it will break the data link (Layer 2) and also break the network connectivity (Layer 3). Worse, there may be several independent higher-layer connections running over the fiber, and each will be disrupted, generating alarms and possibly undertaking adaptive remediation measures. The resulting "alarm flap" can swamp an operations center.
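The scale of the problem is easy to sketch. In the illustrative Python below (the fiber name and connection list are invented), a single Layer 1 fault fans out into one alarm per dependent higher-layer connection:

FIBER_CARRIES = {                        # hypothetical connections riding one fiber
    "fiber-7": ["eth-trunk-1", "eth-trunk-2", "mpls-lsp-9", "vpn-42"],
}

def alarms_for_fiber_cut(fiber):
    alarms = [f"L1 loss-of-signal on {fiber}"]    # the one real fault
    for conn in FIBER_CARRIES[fiber]:             # every dependent connection alarms too
        alarms.append(f"link/connectivity down: {conn}")
    return alarms

for alarm in alarms_for_fiber_cut("fiber-7"):
    print(alarm)    # one root cause, five alarms at the operations centre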

Avoiding this kind of unhelpful interdependence requires careful design. The general rule is to allow error conditions to be recovered at the layer where they occur, and to escalate the recovery only if that is not possible. This means setting higher-level recovery processes to "time out" over a longer period, allowing the lower layers to respond first. Fault correlation tools are also used to ensure that redundant reports of higher-level failures are suppressed when the lower-layer fault has already been reported.
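The following Python sketch shows both practices in miniature; the hold-off values and alarm format are assumptions, not recommendations:

HOLD_OFF_SECONDS = {1: 0.05, 2: 0.5, 3: 5.0}    # lower layers get the first chance

def recovery_deadline(layer):
    """How long a layer waits before starting its own recovery."""
    return HOLD_OFF_SECONDS[layer]

def correlate(alarms):
    """Keep only the lowest-layer alarm per resource: the likely root cause."""
    root_causes = {}
    for layer, resource in alarms:
        if resource not in root_causes or layer < root_causes[resource]:
            root_causes[resource] = layer
    return [(layer, res) for res, layer in root_causes.items()]

alarms = [(3, "fiber-7"), (2, "fiber-7"), (1, "fiber-7"), (3, "router-2")]
print(recovery_deadline(2))   # 0.5 -- Layer 2 waits for Layer 1 to try first
print(correlate(alarms))      # [(1, 'fiber-7'), (3, 'router-2')]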

Equipment vendors' quest for feature differentiation and value augmentation has added features at virtually every protocol layer in the network, and with them has come increased overlap in the roles of the layers. The benefits of providing services and managing features at multiple protocol layers are sure to increase as a result, but so are the challenges of managing the growing interdependence and complexity.

About the Author: Tom Nolle is president of CIMI Corporation, a strategic consulting firm specialising in telecommunications and data communications since 1982.