News

Is 100G Ethernet for real?

Richard Chirgwin

Back in March, Juniper joined with Infinera, Finisar and Opnext in an interoperability demonstration passing traffic between Juniper’s router and optical modules from multiple vendors.

“The optics is the pointy end of the stick,” Juniper’s Miller told SearchNetworking. “What we’re seeing is the standardisation of the optical packages to conform with the 802.3ba standards.”

In particular, he said, the March demonstration showed that the router, the optical modules, and DWDM systems were able to pass packets at 100 Gbps.

As nearly always happens in the world of Ethernet, vendors have started delivering systems ahead of ratification, currently expected in June. Miller said vendors are increasingly confident of the form the final standard will take.

He said that as the standard has solidified, the industry has formed a better understanding of the two key target markets for 100 Gbps Ethernet – telecommunications carriers and data centres.

Carriers, Miller said, are seeing unprecedented bandwidth consumption on their networks as they enable new services.

“In particular, mobile is going through the roof, especially with the uptake of LTE ... even 3G networks and services are putting a lot of pressure on their backbone networks.” That pressure will only grow with the advent of 4G networks, while on the fixed side, ISPs are increasingly deploying IPTV.

To support the explosion of traffic over interfaces like 10 Gbps Ethernet, a carrier has to deploy a huge number of interfaces and links. Apart from cost, the traffic also suffers if it’s carried on aggregated links.

“Link aggregation adds processing to the forwarding engines, the interface cards – in lots of different ways,” Miller said. “And to support the new services, carriers are adding large numbers of 10 Gbps interfaces very quickly.”

Link aggregation at the 10G level adds both latency (because of the extra processing required) and jitter – both of which are troublesome for services such as video, which are growing in popularity.

“Aggregation is one of those things everyone wants to move away from, to provide a cleaner pipe,” Miller said.
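To see why aggregation is regarded as a stopgap, consider how a link aggregation group typically behaves: each flow is hashed onto one member link, so no single flow can exceed that member’s speed, and the members are rarely loaded evenly. The short Python sketch below is purely illustrative – the flow sizes and hash are made up, and it does not model any particular vendor’s forwarding engine.

import random
import zlib

MEMBERS = 10            # ten aggregated 10 Gbps links
MEMBER_CAPACITY = 10.0  # Gbps per member link

random.seed(1)
# Hypothetical per-flow demands in Gbps, invented for illustration.
flows = [round(random.uniform(0.5, 3.5), 1) for _ in range(40)]

# Per-flow hashing pins each flow to one member, so a single flow can never use
# more than 10 Gbps, and the members rarely end up evenly loaded.
load = [0.0] * MEMBERS
for i, demand in enumerate(flows):
    member = zlib.crc32(f"flow-{i}".encode()) % MEMBERS
    load[member] += demand

print("Total demand:        %.1f Gbps" % sum(flows))
print("Busiest LAG member:  %.1f Gbps (capacity %.0f Gbps)" % (max(load), MEMBER_CAPACITY))
print("Quietest LAG member: %.1f Gbps" % min(load))
# A single 100 Gbps interface carries the same demand in one queue: no per-flow
# 10 Gbps ceiling and no hash-induced imbalance, which is the "cleaner pipe" Miller describes.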

The technology is also attractive to the data centre market, particularly as service owners try to solve the problems posed by huge traffic volumes that need to be moved on and off the storage farms.

Miller said that Fibre Channel over Ethernet (FCoE) is sub-optimal on 10 Gbps interfaces, since Fibre Channel interfaces run at 2 Gbps, 4 Gbps or 8 Gbps, and native Fibre Channel is cheaper than a 10 Gbps Ethernet network.

“So at those speeds, why would you cannibalise the Ethernet network for Fibre Channel? At 40 Gbps and 100 Gbps, FCoE becomes viable,” he explained.
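A bit of back-of-the-envelope arithmetic, sketched in Python below, makes the headroom argument concrete. The figures are nominal line rates only and ignore encoding and protocol overheads.

# Nominal Fibre Channel and Ethernet line rates; overheads are ignored for simplicity.
fc_rates_gbps = [2, 4, 8]
ethernet_rates_gbps = [10, 40, 100]

for eth in ethernet_rates_gbps:
    for fc in fc_rates_gbps:
        share = fc / eth * 100
        print(f"{fc} Gbps FC over {eth} Gbps Ethernet occupies roughly {share:.0f}% of the link")
    print()
# An 8 Gbps flow takes most of a 10 Gbps converged link, leaving little for LAN traffic;
# at 40 or 100 Gbps the same flow is a small slice, which is the sense in which
# FCoE "becomes viable" at the higher speeds.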

The other key aspect of the demonstration, Miller said, was that it showed 100 Gbps systems can be made to run at line rate even for traffic with a huge variety of packet sizes.

“In the data centre, you have a huge mix of packet sizes – all the way up to jumbo packets. Small things like that are very important. If you can’t do line rate at every packet size, you introduce latency, you put bottlenecks into the network.”
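The packet-size point comes down to packets-per-second arithmetic, which the short Python sketch below works through. The 20 bytes added per frame (preamble, start-of-frame delimiter and inter-frame gap) are standard Ethernet framing overhead; the frame sizes are common examples rather than figures from the demonstration.

LINK_BPS = 100e9      # 100 Gbps interface
OVERHEAD_BYTES = 20   # preamble (7) + start-of-frame delimiter (1) + inter-frame gap (12)

for frame_bytes in (64, 512, 1518, 9000):   # minimum, mid-size, full-size, jumbo frames
    bits_on_wire = (frame_bytes + OVERHEAD_BYTES) * 8
    frames_per_second = LINK_BPS / bits_on_wire
    print(f"{frame_bytes:>5}-byte frames: {frames_per_second / 1e6:7.2f} million packets per second")
# Roughly 149 Mpps at 64-byte frames versus about 1.4 Mpps at jumbo frames: a forwarding
# engine that only sustains line rate with large frames will queue or drop small packets,
# which is the latency and bottleneck problem Miller describes.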

Juniper’s play in the 100 Gbps Ethernet market focuses on its T1600 core router, which Miller said “has a good incumbency in carriers, who are now seeing 100 Gbps as an increasingly viable step.”