Blog: Data Center Interconnect

By Patrick MeLampy

Many CIOs are now using third-party data centers to augment and extend their current capacity. As enterprises struggle with power, cooling, knowledge workers, and physical security, outsourcing to third-party owned and operated data centers just makes sense. Full-featured providers like Amazon Web Services offer complete application lifecycle management, while others simply provide power, cooling, rack space, and networking.

The big trend now for application owners is to be closer to the customer, and in more locations. It’s not uncommon for customers to use as many as 20 data centers for a single cloud-based application. Being in more places reduces the risk of failure at any one location and improves performance and scale at nearly zero incremental cost. These applications use common databases and share information between instances at different locations, which creates a very large amount of data that must be networked between data centers.
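
To make the scale concrete, here is a rough back-of-the-envelope sketch. The write volume and site counts are assumptions chosen purely for illustration, not figures from this post; the point is only how quickly inter-site replication traffic grows as an application is placed in more locations.

```python
# Illustrative only: if each site's database writes are replicated to every
# other site, interconnect traffic grows with the square of the site count.
WRITE_GB_PER_SITE_PER_DAY = 500  # assumed write volume generated at each site

for sites in (2, 5, 10, 20):
    # each of `sites` locations ships its writes to the other (sites - 1) locations
    interconnect_gb = sites * (sites - 1) * WRITE_GB_PER_SITE_PER_DAY
    print(f"{sites:>2} data centers: ~{interconnect_gb:,} GB/day between data centers")
```

Under these assumed numbers, going from 2 to 20 data centers multiplies the interconnect volume by nearly 200x, which is why the links between sites quickly become the dominant concern.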

Conventional wisdom holds that the volume of traffic between users and servers (so-called “north/south traffic”) is a tiny fraction of the volume of data exchanged between the servers that collectively comprise an application (“east/west traffic”). As applications are distributed to more and more data centers, the volume of data exchanged between servers at different data centers grows correspondingly. East/west traffic is nearly always encrypted by the application owner and sent on private network addresses to minimize the attack surface. Private network addresses are not routable between data centers, so tunnels or private networking connections are used. Most large data center operators sell dark fiber between their locations, which can be provisioned as a private network connection between application servers.
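
As a quick illustration of why tunnels or private circuits come into play, the snippet below is a minimal sketch using Python’s standard ipaddress module (the addresses are made up). It simply checks whether a destination falls in RFC 1918 private space, which cannot be routed across the public Internet between data centers.

```python
import ipaddress

# East/west replication endpoints typically sit on RFC 1918 private addresses,
# which are not routable across the public Internet between data centers.
destinations = ["10.20.1.15", "172.16.4.2", "192.168.30.7", "8.8.8.8"]

for dest in destinations:
    addr = ipaddress.ip_address(dest)
    if addr.is_private:
        print(f"{dest}: private address, needs a tunnel or private circuit between sites")
    else:
        print(f"{dest}: publicly routable, reachable without a dedicated interconnect")
```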

With all this information in hand, we can say that Data Center Interconnect (DCI) is a growing business. Today’s solutions include lighting up dark fiber, purchasing an Ethernet pseudowire, purchasing an MPLS wide-area connection, or using a networking service offered by the data center operator itself. All of these essentially create a private network space that extends into every data center location hosting the application, stretching the broadcast domain for a specific customer over a large geographic area.

At 128 Technology, we believe a less complex, more secure alternative exists. We believe that by using a new architecture called Secure Vector Routing, IT managers can join any number of servers on private networks across any routable IP network. We believe that IP routing can and should be used between data centers to connect clusters of servers, rather than stretching VLANs, deploying ARP proxies, or building Ethernet pseudowires. We also believe that with semantic routing, policies are written in words that can be distributed universally across all data centers, providing excellent segmentation control.
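
To illustrate the idea of policies “written in words,” here is a minimal sketch of a name-based policy model. This is not 128 Technology’s actual configuration syntax, and the service and tenant names are hypothetical; the point is that the same policy text can be distributed to every data center unchanged, because it names services and tenants rather than the per-site IP addresses each location happens to use.

```python
# Hypothetical name-based policy table: keyed by service name, not by the
# IP addresses a particular data center assigns.
policies = {
    "orders-db": {"allowed_tenants": {"orders-app"}},
    "analytics": {"allowed_tenants": {"orders-app", "reporting"}},
}

def admit(tenant: str, service: str) -> bool:
    """Return True if the named tenant may reach the named service."""
    policy = policies.get(service)
    return policy is not None and tenant in policy["allowed_tenants"]

# The same words apply at every location that hosts the application.
print(admit("orders-app", "orders-db"))   # True in every data center
print(admit("reporting", "orders-db"))    # False everywhere: segmentation holds
```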

128 Technology is rethinking routing and how application servers communicate across multiple data centers. And we’ve developed a new network architecture that is dramatically simpler than what’s come before.
