The 128T Networking Platform (128T) is designed around modeling the applications that your users consume. We call this service-centric networking, a top-down approach to configuring your routing infrastructure. Rather than using IGPs to exchange routes and ACLs to restrict access, you describe the services within the network and the groups of users allowed to access each one.
We’ve talked before about the 128T platform data model’s service-centric approach to network design. But we didn’t design the data model this way simply to be contrarians; we believe that organizing your network around the services it hosts offers many advantages to network architects, security professionals, and network operations personnel.
Modeling your network’s services can include describing the service policies that apply to them. For example, your voice service needs a low-latency, low-jitter link to your provider or PBX; your nightly customer database backup, on the other hand, while mission critical, is tolerant of jitter and latency but needs high bandwidth. Both of these services, voice and database backup, use the same topology but ask very different things of the network.
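To make the contrast concrete, here is a minimal sketch (not 128T configuration syntax — the class and field names are illustrative) of how per-service requirements like these might be expressed as data:

```python
from dataclasses import dataclass

@dataclass
class ServiceRequirements:
    """Illustrative per-service network requirements."""
    max_latency_ms: float = float("inf")   # no latency bound by default
    max_jitter_ms: float = float("inf")    # no jitter bound by default
    min_bandwidth_kbps: int = 0            # bandwidth floor the service needs

# Voice: sensitive to latency and jitter, modest bandwidth per call.
voice = ServiceRequirements(max_latency_ms=150, max_jitter_ms=30,
                            min_bandwidth_kbps=64)

# Nightly database backup: tolerant of latency and jitter, hungry for bandwidth.
db_backup = ServiceRequirements(min_bandwidth_kbps=100_000)
```

Both services ride the same topology; only the constraints attached to each differ, which is what lets the network treat their sessions differently.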
Within a 128T network design, each service, with its own requirements, can leverage the network infrastructure uniquely. And not just one hop at a time, either: a collection of 128T routers can exchange topology and network conditions with one another to ensure they make informed decisions on how to organize the sessions holistically.
An important attribute of a service is its network resource requirements. That voice service mentioned earlier uses a predictable amount of bandwidth for each call (typically 64kbps in each direction), but not every network service consumes resources symmetrically. The database backup between your datacenters uses a lot more bandwidth in one direction than it does in the reverse. Service-centricity accounts for the asymmetric nature of bandwidth consumption; there may be ample bandwidth for you to browse the titles available for viewing on Netflix, but is there bandwidth available for that 4K video to stream to your home without stuttering and video artifacts?
When your network is service-centric, it anticipates the types of flows that comprise a single session of a service, and chooses an appropriate path based on current network conditions for both the forward path and the reverse path. The service state retained by the network elements monitors active bandwidth utilization across their links for all sessions, and can factor that into future routing decisions. A link that has enough bandwidth for another audio call may not have enough for a two-way videoconference, or it may cause your database backup to take much longer than necessary.
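The decision described above can be sketched as a small selection function. This is a simplified illustration under assumed names (it is not the 128T implementation): each link carries measured conditions plus the bandwidth already committed to active sessions, and the router picks a link that still satisfies the service’s constraints.

```python
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    latency_ms: float
    jitter_ms: float
    capacity_kbps: int
    used_kbps: int  # bandwidth already committed to active sessions

    @property
    def free_kbps(self) -> int:
        return self.capacity_kbps - self.used_kbps

def pick_link(links, need_kbps,
              max_latency_ms=float("inf"), max_jitter_ms=float("inf")):
    """Return the lowest-latency link that still satisfies the service,
    or None if no link currently has the headroom."""
    candidates = [l for l in links
                  if l.free_kbps >= need_kbps
                  and l.latency_ms <= max_latency_ms
                  and l.jitter_ms <= max_jitter_ms]
    return min(candidates, key=lambda l: l.latency_ms) if candidates else None
```

A nearly full low-latency link can still admit one more 64 kbps voice call while rejecting a 2 Mbps videoconference, which would instead land on a higher-capacity path — the behavior the paragraph above describes.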
The 128T Networking Platform uses simple rules for associating inbound traffic with services: the association can be configured by a system administrator (by mapping addresses or prefixes to services) or learned by the 128T platform (through passive monitoring of network traffic, such as X.509 certificates and DNS). Once traffic arrives at the system and is associated with a service, the system uses that service’s name for making routing decisions. This name frees you from specific topology restrictions (in particular, private network boundaries), and lets the network route the session to the nearest or lowest-cost instance of that service.
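The administrator-configured half of that association — mapping prefixes to service names — amounts to a longest-prefix match that yields a name instead of a next hop. A minimal sketch (the prefix table and service names are invented for illustration):

```python
import ipaddress

# Hypothetical administrator-supplied mapping of prefixes to service names.
SERVICE_PREFIXES = {
    ipaddress.ip_network("10.1.0.0/16"): "mailserver",
    ipaddress.ip_network("10.1.5.0/24"): "voice",  # more specific prefix
}

def classify(dst_ip):
    """Return the service name for a destination address, preferring the
    most specific (longest) matching prefix; None if nothing matches."""
    addr = ipaddress.ip_address(dst_ip)
    matches = [(net, name) for net, name in SERVICE_PREFIXES.items()
               if addr in net]
    if not matches:
        return None
    return max(matches, key=lambda m: m[0].prefixlen)[1]
```

From this point on, routing decisions can be keyed on the returned name ("voice", "mailserver") rather than on the destination address itself.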
Let’s say you run two large data centers, one on either coast of the United States. Packets arriving at a 128T router are associated with a service named “mailserver,” which you host in both data centers. The network will use this name to construct paths to the servers that terminate this service. The destination address supplied by the client need not be tied to the west coast data center or the east coast data center specifically; the 128T network will deliver the traffic to the closest location, even if the servers have the same address.
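The “nearest instance” step reduces to choosing the lowest-cost instance registered under the service name. A toy sketch (sites and costs are made up; in practice cost would reflect measured path conditions from the router doing the choosing):

```python
# Hypothetical registry: each service name maps to the instances that
# terminate it, with a path cost as seen from this router.
INSTANCES = {
    "mailserver": [
        {"site": "east-dc", "cost": 10},
        {"site": "west-dc", "cost": 40},
    ],
}

def nearest_instance(service):
    """Route by service name: pick the instance with the lowest path cost."""
    return min(INSTANCES[service], key=lambda inst: inst["cost"])
```

A router on the east coast would steer “mailserver” sessions to the east data center; a west coast router, seeing different costs, would steer them west — even if both server farms present the same address.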
As a network administrator, your primary mission is to keep things running so that your users can access the network resources they need to do their jobs. But managing network access typically involves a cryptic morass of route statements, firewall rules, various VLAN tags, and the like.
Grouping network resources into services lets you see at a glance how information is passing over your network. You can make informed decisions on how to prioritize your network traffic based on a rich visualization of the way your users exercise your infrastructure.
When your network model is organized around the applications that users are accessing, you can gain significant advantages in how your infrastructure is utilized: identifying and prioritizing pathways through the network to optimize the delivery of user traffic. Modeling tenants and services, and associating them to one another to coordinate how traffic is allowed to traverse the network, lets you view, control, and prioritize your network in a new way. Break free of the “default route everything, and let firewalls sift through it” mindset, and organize your routed network with a new set of tools.