What Is Interconnection And Why Is It So Important To Enterprises?

By Alex Hawkes|25 January, 2021

Enterprise network connectivity has evolved in line with changing business needs over the last few decades, and, as the sudden shift to remote working in 2020 showed, that evolution is accelerating in response to external change.

This makes interconnection more important than ever to the modern enterprise.


What is interconnection? 

With the adoption of public cloud and hybrid cloud network infrastructure models, organisations need to connect their private clouds or data centres to each other and to their public cloud instances, or to connect multiple public cloud instances together.

This interconnection allows businesses to optimise the sharing of data and resources from multiple sources, including processing power, storage, and data archives.

Interconnection provides low latency, high availability connections that enable companies to reliably transfer data between these assets.

  • Dedicated or direct interconnect provides a physical connection between assets, maximising the security and performance of the network while increasing cost.
  • Virtual interconnection is more cost effective and faster to deploy but typically doesn’t offer the same level of performance and security as it may combine multiple underlying network infrastructures.
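The trade-off between the two models can be sketched as a simple selection helper. This is a toy decision model with illustrative thresholds, not any vendor's product logic:

```python
# Toy model of the dedicated-vs-virtual interconnect trade-off described
# above. The thresholds are illustrative assumptions, not vendor figures.

from dataclasses import dataclass

@dataclass
class Requirements:
    max_latency_ms: float           # worst tolerable round-trip latency
    needs_physical_isolation: bool  # e.g. a regulatory requirement
    deploy_within_days: int         # how soon the link must be live

def choose_interconnect(req: Requirements) -> str:
    """Pick an interconnect type from coarse requirements."""
    # A dedicated (physical) connection maximises performance and
    # isolation, but typically costs more and takes weeks to provision.
    if req.needs_physical_isolation or req.max_latency_ms < 5:
        return "dedicated"
    # A virtual interconnect rides shared infrastructure: cheaper and
    # live in days, at some cost in performance and security guarantees.
    if req.deploy_within_days < 14:
        return "virtual"
    return "dedicated"

print(choose_interconnect(Requirements(50, False, 3)))  # virtual
```

In practice the decision also weighs traffic volume and contract terms, but the shape of the trade-off (isolation and performance versus cost and speed of deployment) is as above.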

Networks in the 1990s and early 2000s were built on predictability.

  • An organisation had a pretty good idea of how many employees it needed to serve and which locations those people would be working from.
  • It knew where the bulk of the business activity and data generation would be taking place geographically and who would need access to that business data.
  • It knew which applications the network would have to service, where those applications lived physically, and roughly how much data they pushed and pulled over the corporate LAN and WAN.

Long-distance and international transit capacity was orders of magnitude smaller than it is today, and the primary challenge was a lack of site connectivity. The most efficient way of transporting large amounts of data from one location to another - typically for backup purposes - was a truck full of magnetic tapes.

The network, and by extension the internet, was both the cause and the symptom of a slower pace of business. Enterprise applications were custom-built and lived on custom hardware in the ‘server room’ behind that locked door that always radiated heat in the business HQ or one of the key office locations.

The people that used and maintained those applications were in the same building or at least on the same campus. IP telephony was rudimentary and Unified Communications, chat and collaboration tools were not even considerations.

Core applications such as ERP or CRM were hosted in these private data centres and interconnection then meant ensuring servers in different closets were connected to each other if required and that key teams had access to the applications on those machines.

Road warriors existed in small numbers, and while they could dial into the corporate network over a VPN, they were limited by the access technology of the time. Residential 'broadband' meant something different, wireless was still in its second generation, and ERP and CRM tools were too clunky to access over dialup.

The rise of MPLS

MPLS became the de facto workhorse for enterprise networking and interconnection.

It was reliable and predictable in terms of performance, but expensive and lacking in agility. Cost is always a problem, of course, but the lack of agility only really became an issue when the cloud crossed the horizon, applications were hosted in more disparate environments, and interconnection of those environments became a strategic necessity.

By this point the LAN and WAN had become a veritable spiderweb of application-specific boxes sitting on costly dedicated links that had just about enough capacity. Operational costs skyrocketed as network managers spent all their time 'managing' by way of manually configuring routers and appliances, firewalls, WAN optimisers, packet capture and inspection, and network analysis tools.

MPLS: solid but lacking agility

MPLS guaranteed predictable performance, low latency and minimal packet loss, but it was expensive, which made it capacity constrained, and very slow to deploy.

Connecting another site, or two office locations together, could take several months, as your chosen network provider(s) would have to physically run cable between your spoke endpoints and the hub site that housed the data centre.

The story wasn’t much different if you already had interconnection in place and just wanted to upgrade your bandwidth.

This hub site or data centre location was also most likely to be where the Internet Gateway resided, so all internet-bound traffic from any location was tromboned from the spoke sites to the data centre and back again over the expensive MPLS network, using up precious bandwidth on non-critical traffic.
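A quick back-of-envelope calculation shows why tromboning hurts. The figures below are illustrative assumptions, not measurements:

```python
# Back-of-envelope cost of "tromboning": internet-bound traffic from a
# branch is hauled to the hub's Internet Gateway and back out, instead
# of breaking out locally. All latency figures here are illustrative.

def backhaul_latency_ms(branch_to_hub_ms: float, hub_to_site_ms: float) -> float:
    """Round-trip latency when internet traffic is backhauled via the hub."""
    # branch -> hub -> internet site, then the same path in reverse.
    return 2 * (branch_to_hub_ms + hub_to_site_ms)

def local_breakout_ms(branch_to_site_ms: float) -> float:
    """Round-trip latency with direct local internet breakout."""
    return 2 * branch_to_site_ms

# Example: the branch is 30 ms from the hub and only 10 ms from the web
# service; the hub is 12 ms from the same service.
via_hub = backhaul_latency_ms(30, 12)  # 84 ms round trip
direct = local_breakout_ms(10)         # 20 ms round trip
print(f"backhaul penalty: {via_hub - direct} ms")
```

Every one of those backhauled round trips also consumes capacity on the MPLS links in both directions, which is the bandwidth cost the paragraph above describes.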

There was, of course, the option of using the public internet to connect remote locations, but that was a no-go if you needed reliability, security or low latency.

Then came cloud

As the cloud boomed in the first two decades of the 21st century, it did so in parallel with network operators' investments in their infrastructure and a corresponding boom in data centre and co-location facilities.

One school of thought predicted a wholesale migration to the cloud: storage was so cheap, and accessibility good enough, that companies were expected to simply lift and shift their data and applications into public clouds running on commodity hardware and let someone else worry about maintaining the equipment.

Although this happened in some cases, mainly with brand-new companies carrying no legacy technical debt, the digital transformation story for most enterprises saw them adopt a hybrid network model: slowly shifting the applications they could into the cloud while maintaining their on-premises data centres for apps they did not want to move, or simply could not. This shaped the requirements for interconnection.

A further challenge is that MPLS struggles to adequately support the highly accessible nature of the public cloud, because it needs a pre-configured termination point and end-to-end bandwidth management of the connection.

This is easily done in the corporate data centre, and it sufficed when heavy applications were all backhauled through a managed network to a central site. But as the public cloud is owned and operated by other organisations, deploying appliances there is not an option, nor is managing the bandwidth on other organisations' networks.


From hub and spoke to a cloud mesh

The traditional hub and spoke model also doesn’t lend itself well to the more nimble nature of the cloud.

There is little point in adopting SaaS applications to help your business’ agility if you end up hauling the traffic to and from those clouds through your private data centre. The variable latency and possible congestion may well wipe out any benefits.

The arrival of the cloud brought with it a demand for direct connectivity and interconnectivity that is still growing year on year, with no sign of slowing down.

For sites in key business areas, there is healthy competition among infrastructure suppliers and affordable high-bandwidth services are available. For sites outside those centres, there is far less competition and upgrading connectivity remains costly.

Software-Defined Networking changes the game

Over the last several years, Software-Defined Networking (SDN) has gone from a much-anticipated technology promise to a set of commercially viable services that are being widely adopted.

When it comes to connecting sites, including those remote locations, SD-WAN boosts business agility by enabling organisations to expand their branch sites more quickly and manage their WAN more flexibly and in real time using public internet connectivity. The promise of SD-WAN is that it complements the existing high-quality but high-cost MPLS connection with a high-bandwidth, lower-cost public internet service.

By binding together different connections - whether public internet, dedicated link, or even 3G/4G mobile - it's possible to cost-effectively increase the capacity of the WAN. One of the key benefits is that non-critical traffic can be sent over the internet in an encrypted tunnel, freeing up capacity on the more resilient (and expensive) MPLS connection for mission-critical traffic.
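The traffic-steering idea can be sketched as a simple policy function. This is a deliberate simplification - real SD-WAN controllers classify per flow and use live path metrics - and the application names here are illustrative assumptions:

```python
# Minimal sketch of SD-WAN traffic steering: mission-critical traffic is
# pinned to the MPLS path while everything else rides an encrypted
# internet tunnel. Application names and the 90% threshold are
# illustrative assumptions, not a real controller's policy.

CRITICAL_APPS = {"erp", "voip", "storage-replication"}

def select_path(app: str, mpls_utilisation: float) -> str:
    """Return the WAN path for a flow belonging to `app`.

    mpls_utilisation is the current load on the MPLS link (0.0 to 1.0).
    """
    if app in CRITICAL_APPS and mpls_utilisation < 0.9:
        return "mpls"
    # Non-critical traffic (or overflow from a saturated MPLS link) goes
    # over the internet inside an encrypted tunnel, freeing MPLS capacity.
    return "internet-vpn"

print(select_path("erp", 0.4))           # mpls
print(select_path("web-browsing", 0.4))  # internet-vpn
```

The same function shape also captures the bonding benefit: when the MPLS link is near saturation, even critical traffic can spill over to the tunnel rather than being dropped.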

Associated developments in Network Functions Virtualization (NFV) have also helped reduce the management headache. In the past, for each functional component the customer required, there may well have been a separate CPE appliance: a router from vendor A, a firewall from vendor B and a load balancer from vendor C, for example.

Of course, these were all proprietary devices that didn’t talk to each other. Now a general purpose x86 server at the customer site is capable of running all network component functionality as a software application - a routing application, a firewall and a load balancer all running on this same server.

This saves on hardware maintenance costs, but more importantly unlocks the benefits of much deeper automation and integration, giving network managers the opportunity for portal-based changing of firewall rules, modifying load balancing rules, or changing settings in real time.
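The consolidation described above can be sketched in a few lines. The class names, rule format and API shape here are hypothetical, purely to illustrate the idea of network functions as software on one box:

```python
# Sketch of the NFV idea above: router, firewall and load balancer run
# as software on one x86 server, so a rule change becomes an API call
# rather than a visit to three proprietary appliances. The classes and
# rule format are hypothetical illustrations.

class Firewall:
    """A virtual network function (VNF) replacing a hardware firewall."""
    def __init__(self):
        self.rules = []

    def add_rule(self, action: str, port: int):
        self.rules.append({"action": action, "port": port})

class UniversalCPE:
    """One white-box server hosting several VNFs side by side."""
    def __init__(self):
        self.firewall = Firewall()
        # Router and load balancer would be further VNF objects here.

    def update_firewall(self, action: str, port: int):
        # In a real deployment this would be triggered from a portal or
        # orchestrator API and applied in real time.
        self.firewall.add_rule(action, port)

cpe = UniversalCPE()
cpe.update_firewall("allow", 443)
print(cpe.firewall.rules)  # [{'action': 'allow', 'port': 443}]
```

The point is not the toy classes themselves, but that once every function is software behind one interface, automation and portal-driven change become straightforward.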

Cloud adoption driving interconnection

With a mixture of assets now residing in public clouds and private data centres, there is an increasing need for these cloud sites to connect to each other. Often, organisations will want to connect to specific as-a-service offerings, such as:

  • Infrastructure-as-a-Service (IaaS)
  • Platform-as-a-Service (PaaS)
  • Software-as-a-Service (SaaS)

This is where dedicated connectivity comes into its own. Network managers need to connect enterprise data centres, link headquarters to regional centres and branch offices, and deliver private access to public clouds and other available SaaS, IaaS and PaaS providers.

There may also be requirements to privately connect to apps and services such as video conferencing, unified communications and service desks.

Due to the nature of the traffic moving between data centres and/or public clouds, a dedicated, low latency, high bandwidth connection is necessary. Data Centre Interconnect is the fabric that connects and protects traffic across and between multiple data centres.

Meanwhile, data centre federation helps content owners manage a distributed topology and serve their applications and content closer to consumers and delivery networks, ensuring consumers and enterprises have highly available, secure access to content, data and services.


Hybrid model is here to stay

The arrival of Software Defined Interconnection® tools, such as PCCW Global's Console Connect, gives businesses increased agility, thanks to greater automation and portal-based consumption, while making higher bandwidths affordable not just in key business areas but also at more remote sites.

This is where Console Connect comes into its own, allowing your network interconnectivity to match the agility of a cloud network or cloud servers by scaling up and down around different workloads, on what is effectively your own closed-off, dedicated MPLS network.

MPLS still has a critical role here, however. There is plenty of evidence that while organisations are happy to forward traffic from secure enterprise web applications over the internet, big data applications, storage replication traffic and enterprise resource planning (ERP) applications are still favoured for the reliable MPLS network.

Then of course, for secure, high performance connectivity to the cloud and for connecting multiple private and public data centres together, dedicated connections are a key part of the arsenal. Ultimately, these best-of-breed components are all essential for different reasons.

Putting them together can really simplify and complete the global enterprise network, improving performance and efficiency for the business.

The beauty of Console Connect is that this connectivity runs on an MPLS network: you get the vast majority of the advantages of a dedicated MPLS network, but with more agile capabilities.

This dedicated connectivity is available on demand, delivering automated interconnection provisioning and routing, removing the complexity of configuration so network professionals can focus on their core business instead of managing the network.

Console Connect delivers a simple-to-deploy, flexible and affordable way to connect to cloud-based applications, partners, IT infrastructure and the world’s major cloud hosting services.

The global reach of the network spans over 50 countries and interconnects over 500 data centres, leveraging the worldwide PCCW Global MPLS network, which is physically separate from the public internet and features an uncontended, highly resilient and redundant core network with multiple low-latency paths between countries.

The remote workforce

Although most enterprises had some road warriors or remote employees, enterprise networks were typically built to carry the lion's share of traffic at known hotspots - offices or site locations.

When the pandemic hit in 2020, a majority of the workforce suddenly began working remotely, and that once-small demographic became a much bigger presence on the network.

As more employees use a VPN to get onto the access network, more traffic is pushed to the network edge. And as more users access the network via the edge, more traffic is backhauled across the corporate network.

In other words, the majority of network traffic is now originating from outside of the office, and needs to travel to the enterprise data centre then back out to the edge again. Not only does that increase demand for edge accessibility - where more capacity is now required - but it also increases the load on the VPN concentrators.

Console Connect allows you to self-provision redundant network links - this could be for a day, week, or even several months. You can fire up these connections when you need additional bandwidth, for network maintenance or application updates for example, and tear the circuit down when done.
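The lifecycle of such an on-demand circuit can be sketched as follows. This is a hypothetical model for illustration, not the Console Connect API:

```python
# Sketch of on-demand interconnect provisioning: a circuit is created
# for a bounded window (a day, a week, several months) and torn down
# when done. This is a hypothetical model, not a real provisioning API.

from datetime import datetime, timedelta, timezone

class Circuit:
    def __init__(self, a_end: str, z_end: str, mbps: int, days: int):
        self.a_end, self.z_end, self.mbps = a_end, z_end, mbps
        # The circuit is requested for a fixed window rather than a
        # multi-year contract term.
        self.expires = datetime.now(timezone.utc) + timedelta(days=days)
        self.active = True

    def teardown(self):
        """Release the circuit (and stop paying for it)."""
        self.active = False

# Burst capacity for a week of maintenance, then tear it down.
link = Circuit("LON-DC1", "AWS-eu-west-1", mbps=1000, days=7)
link.teardown()
print(link.active)  # False
```

The contrast with the MPLS provisioning cycle described earlier is the point: the unit of commitment shrinks from a multi-month physical build to a software object with a start, a bandwidth and an end.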

You can dynamically scale network bandwidth to accommodate spikes of activity driven by employees' new working patterns. Establishing direct, private connections to your cloud provider can also help alleviate some of the new demands that remote working places on your corporate network.

Using Console Connect, businesses can directly connect to major cloud platforms, including Amazon Web Services, Microsoft Azure, Google Cloud and IBM Cloud, from any of a growing number of data centres located in 50+ countries around the world.

A single platform such as Console Connect can provision these connections with high levels of automation, enabling businesses to string together an ecosystem of cloud-centric data centres with direct access to the cloud.

Furthermore, the simplified user experience offers real-time visibility into network performance across the entire network ecosystem. This means you can continually adapt and optimise the network connectivity with granular and real-time control over bandwidth to meet changing needs on an intuitive and intelligent network.
