Integrated OTN Switching Virtualizes Optical Networks

A big upgrade of any system requires a full reboot; the
changes are so extensive that a rebuild from the ground up is unavoidable.
Optical networks are no different, and such a change is imminent.
The broader equipment supply chain and investment community are now aware of the transformational event that will take place
in 2013 – 2014 – what we first dubbed "the optical reboot."

The introduction of coherent 100Gbps (100 Gigabits per
second) optical transport is a key catalyst; the technology offers massive performance gains over incumbent technologies to extract
more fiber capacity, but requires that carriers undertake greenfield builds to realize maximum return. Beyond achieving cost-per-bit
gains with 100Gbps, carriers are seizing this opportunity to roll out new architectures that will further improve the total cost of
ownership of their networks. The most important of these is a more sophisticated and efficient architecture for switching and managing
these optical networks, which will use 100Gbps wavelengths efficiently and
allow carriers to meet future operational cost targets.

The ITU G.709 standard for OTN (optical transport
network) is the protocol of choice; it will bring the efficiency, reliability,
and predictability of a transport-based approach and meet the infrastructure
requirements of packet networks. OTN offers many features that will help facilitate the transparent transport of tomorrow’s
growing packet traffic and support services such as private line Ethernet or
wavelength services requiring hard service level agreements. OTN is
just as important to the optical reboot as 100Gbps and coherent optics.

Some in the industry have talked about eliminating the
OTN switching layer and deploying a pure IP or MPLS solution with colored
optics feeding into a simple WDM line system. These ideas mimic
IPoDWDM architectures proposed 10 years ago that were never deployed in material volume. Our discussions with service providers
indicate that the transport layer – specifically with OTN switching – is and will continue to be a fundamental part of future networks.

Service providers may desire to deploy a hybrid MPLS and
OTN solution in the future, but today all 100Gbps WDM networks are being built using OTN transport, and as service providers build
out these networks, most plan to employ OTN switching, which eventually will underpin the efficient convergence of packet and optical
transport functionality into a single network layer.

Virtualization for optical networks

Virtualization is the practice of abstracting a pool of
common resources for use by multiple services – virtualization has revolutionized computing in enterprise and large datacenter
applications. Virtual machines (VMs) in the data center allowed computing
resources (hardware) to be decoupled from applications/services,
improving hardware utilization and efficiency to save capex. VMs also reduce
opex by bringing centralized management of these hardware
resources and services. Services with different priorities can be quickly
shifted and cloned on top of a homogeneous hardware resource pool.

These concepts have been so successful inside the data
center that carriers would like to extend them into the network, allowing them to deploy any service into a pool of optical resources,
where those optical resources can be quickly increased and decreased on demand.

We need a transport protocol that is service-independent
to realize this vision of a virtualized optical network – something able to carry everything from legacy SONET/SDH to Ethernet, MPLS, IP,
and other protocols such as Fibre Channel. 
There must also be a method for partitioning the massive bandwidth of
100Gbps channels or 1Tbps (1 Terabit per second) superchannels to efficiently pack a mix of protocols and speeds from
multiple customers.

OTN transport provides an elegant solution for carrying
and managing traffic in such a virtualized optical network by providing a standardized digital wrapper that can carry a wide range
of services transparently across the network. OTN switching adds the key features for partitioning and grooming bandwidth among
services with varying capacity, latency, and reliability requirements, and most importantly enables the multiplexing of many low-medium
data rate protocols into efficiently filled 100Gbps wavelengths. Put together they form the optical transport technology that is the
bedrock for future networks.

OTN transport

There is some confusion about OTN, as the term is often
generically used to describe three functions: OTN transport, OTN multiplexing, and OTN switching. They are not interchangeable; for
example, switching is a superset of transport functions, and it is worth
examining the differences.

OTN transport has been in use for a decade, even earlier
if one considers the early implementations of G.975 framing in SLTE
applications in the 1990s. G.709 OTN was originally defined as a
point-to-point protocol, designed to provide a protocol-independent wrapper of client data. The objective was to use a single homogeneous
protocol to wrap/containerize various clients, providing 100% transparent transport, something SONET/SDH was incapable of for
Gigabit-speed services such as Ethernet, Fibre Channel, and wavelength services.

Transparency is an essential element of transport
networks; it means any protocol can be delivered with guaranteed bit rate and
no change to the payload whatsoever. OTN is prized as a way
to carry someone else’s bits without modification.

Wrapping the source signal in OTN also allows operators
to add extra bits – for operations, administration, management, and provisioning (OAM&P) – to the container and cleanly remove them at
the destination. G.709 transport added these needed management features for today’s networks where signals might transit multiple
operator networks during transport. OTN transport provides a perfect service-independent, network-to-network interface for transiting the multiple
transport domains within a large service provider or among multiple providers. As more operators adopt OTN, particularly the
switching features, it will become the common currency for transport handoffs between different carriers.


Muxponders: the good and the bad




Not long after G.709 gained traction, companies extended
the protocol to wrap not just a single client, but multiple clients, allowing a single 10Gbps OTN container (OTU2) to carry multiple
sub-containers with a mix of clients such as OC-48, Gigabit Ethernet, and Fibre Channel. This is OTN multiplexing, and it allowed OTN
hardware to move beyond a point-to-point transponder role to multiplex data between two points (hence the term "muxponder").

Muxponders provided a vital function in the last decade
as 10Gbps costs dropped and these wavelengths could be repurposed to carry the majority of clients between 1Gbps and 2.5Gbps. Our
research indicates 85% of all deployed 40Gbps links use 4x10Gbps muxponders.
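The capacity arithmetic behind muxponders can be made concrete. The sketch below checks whether a mix of clients fits into a single 10Gbps OTN container (OTU2), assuming G.709's 1.25Gbps tributary slots: an ODU2 offers 8 slots, an ODU0 (e.g. Gigabit Ethernet) occupies 1, and an ODU1 (e.g. OC-48) occupies 2. The client-to-ODU mappings shown are illustrative, not exhaustive, and the 2G Fibre Channel mapping via ODUflex is an assumption.

```python
# Sketch: does a client mix fit into one 10Gbps OTN container (OTU2)?
# Assumes G.709 1.25Gbps tributary slots: ODU2 = 8 slots.
ODU2_SLOTS = 8

SLOTS_PER_CLIENT = {
    "GigE": 1,    # mapped into ODU0 (1 slot)
    "OC-48": 2,   # mapped into ODU1 (2 slots)
    "FC-200": 2,  # 2G Fibre Channel via ODUflex (assumed mapping)
}

def fits_in_otu2(clients):
    """Return (fits, slots_used) for a list of client names."""
    used = sum(SLOTS_PER_CLIENT[c] for c in clients)
    return used <= ODU2_SLOTS, used

# A typical muxponder load: one OC-48, two GigE, one 2G Fibre Channel
ok, used = fits_in_otu2(["OC-48", "GigE", "GigE", "FC-200"])
```

In a muxponder this packing decision is frozen into fiber patches at installation time; the point of OTN switching, discussed below, is to make it a software decision.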

But muxponders are a static solution; to make any changes
to incoming and outgoing OTN containers, a human must change the fiber connections. To interface multiple WDM links, more
muxponders are placed back to back and more complex cross-cabling is required.



The shortcomings of the muxponder approach become visible
as more are deployed. What was once a cost-saving solution to squeeze maximum efficiency out of a point-to-point link becomes
unwieldy at scale, and an increasingly inefficient way to transport services as
WDM data rates scale from 10Gbps to 40Gbps, 100Gbps, and beyond. A node with multiple muxponders must be manually patched, adding capex and opex, and the architecture can't handle
a dynamic mix of <=10Gbps services with 10Gbps, 40Gbps, and 100Gbps wavelengths in concurrent use.

Worse, muxponders provide only point-to-point transport
capability, and even when paired with ROADMs, they do not efficiently allow the grooming of services within wavelengths or between
wavelengths. Optical switching via ROADM only allows wavelengths to be switched and cannot add, drop, switch, or even monitor the
multiple clients that might be sharing the capacity of a 100Gbps wavelength.



Muxponders also lock operators into the long legacy of
manual provisioning, an error-prone process that cannot adapt to the
efficiencies required of future meshed networks. As optical transport
networks become more meshed in an increasingly connected world, large deployments of static, inflexible muxponders perpetuate
an architecture that wastes capacity, is difficult to manage, and cannot evolve into what carriers want – a virtualized, mesh-based optical network.

Integrated OTN switching to the rescue

Muxponders and G.709 provided the technology foundation
for OTN switching by defining a protocol that allowed multiple clients to be transparently bundled into uniform containers and sent on
a single wavelength. OTN switches represent a quantum leap in architecture by providing a common electrical switch fabric – OTN switching – that allows
hundreds or even thousands of wavelengths and clients to be cross-connected at a particular node.

The first advantage of evolving from transponders and
muxponders to an OTN switching platform is the decoupling of clients from the
wavelength transport interface. Rather than having a bundle of 4 or
10 client ports hard-wired to a single WDM line interface, each of the clients
can be individually routed to any WDM interface or client port
on the system. Even the few operators with no intention of using network-wide
OTN switching find this capability valuable, as clients can
be remotely provisioned without sending engineers to patch fiber cables.



Decoupling the clients from the transport interface also
increases network efficiency. We live in a world where the dominant service interfaces sold by carriers are 10Gbps and lower and will
be for many years, because the economics of 10Gbps clients remain tough to beat. 100Gbps is just too big a pipe for all but the
largest enterprises and data centers. OTN switching allows multiple 10Gbps signals from disparate locations to be efficiently packed
into these higher speed wavelengths, enabling fiber capacity scaling while simultaneously maximizing bandwidth utilization and
efficiency. This is important for 100Gbps networks, but becomes vital if
flexible coherent schemes are used to implement superchannels at even
higher data rates such as 200Gbps, 500Gbps, and 1Tbps.

The scaling issues of muxponders become apparent if, for
example, one hundred 10Gbps interfaces had to be hard-wired into a 1Tbps superchannel. The reality is this superchannel – comprising
a hierarchy of sub-containers and spanning multiple optical carriers – would carry a mix of everything from Gigabit
Ethernet and Fibre Channel (ODU0), OC-48 (ODU1), 10Gbps (ODU2), perhaps a stray 40Gbps client (ODU3), and several 100Gbps trunk
lines (ODU4). This 1Tbps of traffic would originate at multiple client and line WDM interfaces – an integrated OTN switch fabric on the
node with the 1Tbps superchannel is the only way to realistically bundle 1Tbps worth of smaller ODU containers. Many of these services
may have different origination and termination points, and thus if they were assigned to muxponders there would be no way to
efficiently groom them onto a common superchannel, forcing the
overbuild of more, inefficiently filled WDM capacity.
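The grooming problem described above is essentially bin packing of ODU containers into trunk wavelengths. A minimal sketch, assuming G.709's 1.25Gbps tributary-slot sizes (ODU4 has 80 slots; ODU0/ODU1/ODU2/ODU3 occupy 1/2/8/31 slots respectively) and a naive first-fit heuristic rather than any real grooming algorithm:

```python
# Sketch: first-fit grooming of a mix of ODU containers into 100Gbps
# (ODU4) trunks. Slot counts follow G.709's 1.25Gbps tributary slots.
TS = {"ODU0": 1, "ODU1": 2, "ODU2": 8, "ODU3": 31}
ODU4_SLOTS = 80

def groom(demands):
    """Pack demands (a list of ODU type names) into ODU4 trunks,
    largest first. Returns the free-slot count left on each trunk."""
    trunks = []  # free slots remaining per trunk
    for d in sorted(demands, key=lambda d: -TS[d]):
        for i, free in enumerate(trunks):
            if TS[d] <= free:
                trunks[i] -= TS[d]
                break
        else:
            trunks.append(ODU4_SLOTS - TS[d])  # open a new trunk
    return trunks

# Two 40G clients, two 10G clients, two GigE clients fill one ODU4 exactly
leftover = groom(["ODU3", "ODU3", "ODU2", "ODU2", "ODU0", "ODU0"])
```

With fixed muxponders, each demand is nailed to one line interface and this pooling across trunks is impossible; an integrated switch fabric lets the packing be recomputed as demands come and go.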

Benefits of virtualized optical networks

Networks built with integrated OTN switching bring the
economic and operational benefits of virtualization to optical networking.
These benefits are found in four areas: capacity, service
velocity, provisioning, and failure restoration.

Capacity: As illustrated earlier, OTN switching decouples
the clients from the WDM line interfaces, allowing greater network efficiency by ensuring that the more costly WDM links are running as
hot as possible and that no stranded bandwidth remains. A network of OTN switches takes this concept further, allowing traffic to
be aggregated at intermediate nodes and directed towards underutilized routes.

Though the nonstop flight from Boston to San Francisco
may be full, you might find unused capacity on the multitude of flights with connections. OTN switching allows traffic to move through
intermediate network nodes transparently and puts unused capacity to work.
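The connecting-flight idea above amounts to a capacity-feasibility search over the mesh: when the direct wavelength is full, an OTN switch at an intermediate node can groom traffic onto under-utilized links. A minimal sketch with a hypothetical topology (node names and free-slot counts are invented for illustration):

```python
# Sketch: find any path whose links all have enough free capacity.
# Topology and free-slot numbers are hypothetical.
from collections import deque

def route(free_slots, src, dst, need):
    """Breadth-first search for a path where every link has
    at least `need` free slots; returns the path or None."""
    q, seen = deque([(src, [src])]), {src}
    while q:
        node, path = q.popleft()
        if node == dst:
            return path
        for nxt, free in free_slots.get(node, {}).items():
            if free >= need and nxt not in seen:
                seen.add(nxt)
                q.append((nxt, path + [nxt]))
    return None

# The direct BOS->SFO wavelength is full, but capacity exists via ORD:
links = {"BOS": {"SFO": 0, "ORD": 40}, "ORD": {"SFO": 25}}
```

Here `route(links, "BOS", "SFO", 8)` detours through the intermediate node, which is exactly the grooming that a ROADM-only network cannot perform below the wavelength level.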

Service velocity: Virtualization allows new services to
be quickly added in the data center, and OTN switching does the same for
transport networks. The presence of switches at each node makes it
easy for new clients to be attached without concern for which muxponder card is being addressed. Service changes can be processed just as
easily, and released capacity is returned to the pool of virtualized optical resources.


Provisioning: There may be special requirements for
clients requesting capacity from a virtualized optical network. As new services
are turned up, latency, protection, and policies must be
considered when the connection path is computed. Integrated OTN switching
allows a mesh-based approach to provisioning, allowing multiple
clients sharing the same virtual optical network to take paths that meet each customer's specific requirements. Financial traffic can
be provisioned on the lowest latency route, for a premium. Intra-data-center
traffic can be provisioned with deterministic latency, so that
multiple available paths fall within specified constraints. Government or
military traffic can be constrained to avoid certain areas and
identical paths through the network. The presence of OTN switching throughout
the network creates a very meshy resource, which gives the
control plane more options to meet specific customer requirements.
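The premium lowest-latency provisioning described above is a shortest-path computation with per-link latency as the cost metric. A minimal sketch using Dijkstra's algorithm over a hypothetical topology (node names and millisecond figures are invented for illustration):

```python
# Sketch: latency-aware provisioning over a meshy OTN network.
# Dijkstra on per-link latency; topology and values are hypothetical.
import heapq

def lowest_latency_path(latency, src, dst):
    """Return (total_latency_ms, path) minimizing summed link latency."""
    pq, settled = [(0, src, [src])], {}
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if settled.get(node, float("inf")) <= cost:
            continue  # already reached this node more cheaply
        settled[node] = cost
        for nxt, ms in latency.get(node, {}).items():
            heapq.heappush(pq, (cost + ms, nxt, path + [nxt]))
    return None

# Direct NYC->CHI link is slow; the two-hop route via DC wins
net = {"NYC": {"CHI": 15, "DC": 3}, "DC": {"CHI": 8}, "CHI": {}}
```

A real control plane would layer protection, capacity, and route-diversity constraints on top of this search; the meshier the OTN switching fabric, the larger the feasible set it has to choose from.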

Restoration: Network connections inevitably fail, and the
same control plane that originally provisioned the service must re-route
traffic upon the loss of a connection. Some or all of the same provisioning
requirements must be met again, but with fewer transport resources available this time. Again, having a meshy network with ubiquitous
OTN switching on all available resources will allow the best solution
to be generated. OTN switches can be rapidly reconfigured, much
faster than ROADMs, allowing restoration to take place as quickly as possible.



The ideal platform for today: integrated OTN

New integrated systems that combine WDM optics and OTN
electrical switching without any system density compromise are changing long-held architectural assumptions in favor of
integration – first for OTN switching and WDM, and later for MPLS.

We’ve already discussed the benefits of decoupling client
and line optics and moving away from the inflexibility and cost of muxponder architectures. But how much cross-connect switching is
required in a network?

Ideally OTN switching should be integrated into every
node in a core or regional transport network. This enables maximum transport efficiency by making it possible to completely
fill the 100Gbps, 200Gbps, or 1Tbps trunk lines and eliminate as much stranded bandwidth as possible, especially when compared to
muxponders. Such a configuration also creates the required connection paths between nodes, giving the control plane
meshiness – the fewest constraints to meet connection requirements.

Historically most hardware manufacturers divided the transport
functions and switching functions into two separate hardware platforms.

This creates additional cost to connect the two adjacent
systems with short reach optics, significant duplication of hardware functions, and additional opex due to the use of multiple platforms
along with the additional space and power these consume. These costs tended to drive solutions that took advantage of economies of
scale, resulting in very large switches in a handful of nodes. This kept costs
down, but limited the nodes where the benefits of switching are
available, and tended to shift costs to higher layers of the network.

Putting both transport and switching into the same
hardware would result in the best solution but combining the two functions
historically resulted in compromise. The biggest problem has been the
mismatch between short reach gray client optics density and long haul WDM optics density. Traditional discrete WDM optics use more
board space and burn more power than compact XFP or SFP client optics. A chassis capable of multi-Terabit switching when populated
entirely with gray client optics might see its effective total capacity cut 30 – 50% when WDM optics are deployed instead. Because of this, carriers such as
AT&T took the expected route – they bought SONET switches with 100% short
reach optics to achieve maximum density and pushed the WDM
transport functions externally. These same problems were evident again with the introduction of OTN switching in the core and
regional networks.

An ideal platform integrates WDM transport and switching
functions, connected via a more cost-effective electrical backplane, and eliminates the short reach optics and duplicative
hardware and opex. If DWDM transport and switching functions can be combined without compromise at the right cost, integrated OTN
switching will be more ubiquitous and touch more traffic. OTN switch solutions with the highest WDM line card density – specifically those
that have no density penalty versus short reach client line cards – resolve the mismatch issue that forces the equipment to be separated.
Matching client and line WDM density is a critical milestone that would allow carriers to rethink the traditional hardware partitioning
of transport and switching.



One could build a combined OTN and MPLS platform, but the
density and cost would be constrained by the components and power required for
MPLS functions. OTN line cards can be smaller and cheaper than
an MPLS-capable card, since MPLS header processing requires buffering and
specialized NPUs and memories. The resulting combined machine would
lack the density of pure OTN switching platforms and cost more, resulting in
carriers taking the historical route of separating these functions.

Within a few years, silicon density improvements will
allow the combination of WDM, OTN switching, and MPLS label switching in
large-scale platforms designed for core and regional networking. In
the meantime, some vendors have adopted new hardware architectures based on
switch fabrics that can eventually support both OTN and MPLS
once the line card density and cost issues are conquered. We believe that
transport teams that deploy these forward-looking architectures will be
better prepared for deploying MPLS transport/switching alongside OTN.

The optical reboot triggers integrated OTN switching

Carriers view the transition to 100Gbps coherent
networking as a once-a-decade opportunity to reboot network architectures.
Operators are using this opportunity to introduce meshed networks via
integrated OTN switching, and with it, new provisioning, restoration, and
utilization efficiencies.

OTN switching provides an automated way to manage the
mismatch of client services that are predominantly 10Gbps and below, with the
rapidly scaling WDM line side capacity that is moving to 100Gbps
and beyond. Integrated OTN switching also can bring the popular concept of
virtualization from the data center to optical transport networking by
separating the client interfaces from a common WDM transport resource pool. In
doing so, it allows more efficient use of these resources by
maximizing utilization and moving beyond the architectural constraints of
muxponder architectures.

Our interaction with carriers illustrates that most
operators plan to deploy OTN switching, and those carriers that do account for
almost all of global service provider capex. Though a few carriers plan to
eschew OTN switching, we believe they have unique networks not representative
of the mean.

Though the benefits of mesh based networking are clear,
hardware limitations previously forced compromises and resulted in sub-optimal
network architectures to reduce first year capex costs. New
integrated systems that combine WDM optics and OTN switching without any
compromise in terms of system density are changing long-held architectural
assumptions in favor of integration – first for OTN switching and WDM, and later
for MPLS.

Integration without compromise will allow OTN switching
to be pervasively deployed throughout the network, dramatically increasing
optical network efficiency and providing an optimal foundation for
virtualized transport networks with the lowest lifetime cost of ownership.



Andrew Schmitt, Principal Analyst, Optical, Infonetics Research

[email protected]