What is Cloud Connect?

IP connect, cloud connect, direct connect, dedicated interconnect, direct link, fast connect… what does it all mean?

Introduction to cloud connectivity

Many Network Service Providers (NSPs) offer a range of cloud connectivity options, but a lack of industry standards and confusing terminology can make them difficult to compare.

Do you know the difference between IP connect, cloud connect, direct connect, dedicated interconnect, direct link and fast connect? Is there a difference?

We enlisted the help of our cloud architects and product managers to help you cut through the noise and avoid the confusion.

The history of cloud connect

Not so long ago, the only option available to connect to a public Cloud Service Provider (CSP) was over the public Internet. However, with the rapid shift to cloud computing, customers quickly began to demand more – better security, lower latency, higher throughput and increased reliability.

CSPs soon realised better end-to-end cloud performance wasn’t going to be possible over the public Internet. They also understood that they didn’t have the expertise or the infrastructure to manage interconnectivity with dozens of NSPs and colocation racks in their own data centres.

CSPs quickly realised the answer lay in the hundreds of carrier-neutral data centres spread all over the world. Many companies were already co-located in these facilities, and most NSPs were also present, so CSPs could extend their backbone connectivity to meet them there. This created the potential for a direct physical link between the NSP network and the CSP network, bypassing the public Internet and providing a pseudo-private network.

This interconnectivity, known as direct cloud connect or private connectivity, enabled direct, end-to-end fibre connectivity and brought with it a whole range of security, latency and performance improvements, as well as cost efficiencies for customers moving high volumes of data out of cloud environments to their own locations.

Nowadays, many cloud connect offerings are also available in automated form over digital infrastructure platforms, enabling near-instant delivery to the cloud. These On Demand platforms offer a number of benefits, including online ordering via a portal or API, real-time delivery of new services, and bandwidth that can be scaled in minutes with flexible commercial options.
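As an illustration of what ordering and flexing "via API" can look like in practice, here is a minimal sketch that changes the bandwidth of an existing cloud connect service through a provider's On Demand REST API. The endpoint, payload fields and service ID are hypothetical placeholders – each NSP publishes its own API.

```python
# Minimal sketch of bandwidth flexing via a hypothetical On Demand API.
# The base URL, path and payload fields below are illustrative placeholders,
# not any specific provider's API.
import requests

API_BASE = "https://api.example-nsp.com/v1"   # hypothetical endpoint
TOKEN = "..."                                  # obtained out of band

def flex_bandwidth(service_id: str, new_mbps: int) -> dict:
    """Request a real-time bandwidth change on an existing cloud connect service."""
    response = requests.patch(
        f"{API_BASE}/services/{service_id}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"bandwidth_mbps": new_mbps},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # Scale an existing 1 Gbps service up to 2 Gbps for a large data transfer.
    print(flex_bandwidth("svc-12345", 2000))
```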

Today, cloud connectivity falls into two buckets: one that relies on the public Internet, and another that uses private, dedicated connectivity. Within these two buckets there are typically six connectivity options available:

Public Internet based:
  • Public Internet
  • Public Internet with cloud prioritisation

Private, dedicated:
  • Ethernet
  • Optical (Wavelengths)
  • MPLS IP VPN
  • SD WAN

We’ll walk you through these six cloud connectivity options and explain the pros and cons of each, so that you can choose the most suitable cloud access solution for your needs.

Cloud connectivity using the public internet

Arguably the cheapest and easiest way to connect to the cloud is through your standard Internet connection over the public Internet, sometimes referred to as IP access or IP transit.

Using your public Internet access is easy to set up and versatile, as accessing the cloud is just one of many use cases for a standard Internet access connection. It provides a cost-efficient access method when you don’t have specific performance needs and don’t need to move high volumes of data from cloud environments to your location. Certain NSPs now also offer this in automated form over digital infrastructure platforms, allowing customers to benefit from real-time ordering, provisioning and bandwidth flexing.

However, accessing cloud applications via the public Internet can also result in performance inconsistencies and increased security risks. Think of public Internet routes like a highway – they are dynamic and shared, which can result in congestion at times. When the most direct link is not available, data is routed through the next best option, over which you have no control, resulting in packet loss and increased latency (delays). Additionally, multiple hand-offs between ISPs create instability in the connection and increased risk.

Essentially, the more PoPs (points of presence) and routers involved in delivering your data to its final destination, the more potential points of failure and the wider the surface area for security attacks. Despite this, the growth of cloud connectivity via the public Internet (nowadays with automation capabilities) has shown no sign of slowing down. The public Internet remains by far the most common way to access the cloud.

Pros:
  • Best for single locations
  • Cost-effective for low and medium data transfer volumes
  • Suitable for most topologies (premise/WAN to single cloud, premise/WAN to multi-cloud)
  • Uses your existing business-as-usual Internet connection
  • Easy to get up and running, no need for a dedicated circuit
  • On demand delivery and scaling typically available

Cons:
  • A best-effort service, not suited to critical applications
  • Shared and dynamic routes mean no performance optimisation or guaranteed performance
  • Not suitable for cloud-to-cloud connectivity
  • Becomes expensive at higher data transfer volumes due to per-Gigabyte egress billing
  • Exposed to security risks, such as DoS and DDoS attacks against routers and links
  • The least secure connectivity option
  • Multiple ISPs mean more potential points of failure

Cloud connectivity using public internet and cloud prioritisation

Internet connectivity with cloud prioritisation enables you to dynamically reserve a portion of your normal Internet bandwidth for select cloud applications. Traffic prioritisation is effective for both incoming and outgoing traffic enabling a consistent, SLA-backed user experience specifically for your traffic to the cloud.

Cloud prioritisation is offered by NSPs that have direct peering services with cloud providers such as Microsoft. For example, Microsoft Azure Peering Service (MAPS) gives end-users direct access to Microsoft cloud services through certified network providers. Once in place, your cloud traffic stays entirely on your provider’s network, bypassing the public Internet and avoiding any intermediary Internet Service Providers (ISPs).

The service also enables cloud prioritisation for Microsoft Teams, Office 365, Azure, or any other Microsoft SaaS application, ensuring traffic destined for these services takes the shortest possible path and therefore the lowest possible latency.

Cloud prioritisation combines the benefits of optimised routing and direct peering infrastructure with traffic prioritisation over the last mile, between the customer router and the provider edge.

Pros:
  • An add-on to standard Internet access services
  • Consistent, guaranteed, SLA-backed performance to the closest peering point
  • Dynamically reserved bandwidth for cloud applications
  • Optimised routing selects the shortest path to the cloud network edge
  • Avoids network contention and unpredictable routing changes
  • 30 millisecond Round Trip Delay (RTD)
  • Traffic congestion control *

Cons:
  • Offerings are dependent on your connectivity and cloud providers
  • Layer 3 access only
  • No dedicated connection

* only available from some MAPS providers

Direct Ethernet cloud connect

Dedicated connectivity through Ethernet services is the fastest and safest route for cloud connectivity, and the first of the Internet-bypass solutions. Direct cloud connectivity provides the secure, high-performance, end-to-end connectivity needed to run critical applications – something the Internet alone cannot match. It is the result of CSPs like AWS, Microsoft, Google, Oracle and IBM working together with NSPs to enhance end-to-end cloud connectivity and automation capabilities, without customer traffic touching the Internet. End-users are probably already familiar with the names of these CSPs’ direct interconnect programs – such as AWS Direct Connect, Microsoft ExpressRoute and Google Cloud Interconnect – which enable private, secure, end-to-end connectivity through an NSP to the customer location.
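To make the idea of a direct interconnect program concrete, here is a minimal sketch using the AWS SDK for Python (boto3) to order an AWS Direct Connect port at an on-ramp location and attach a private virtual interface, which is where the customer-side BGP session is defined. The location code, VLAN, ASN and gateway ID are placeholder values, and in practice a dedicated port also requires an LOA-CFA and a physical cross-connect before the virtual interface comes up; ExpressRoute and Google Cloud Interconnect have equivalent but different APIs.

```python
# Sketch: ordering an AWS Direct Connect port and a private virtual interface
# with boto3. Location, VLAN, ASN and gateway values are placeholders.
import boto3

dx = boto3.client("directconnect", region_name="eu-west-1")

# 1. Order a dedicated 1 Gbps port at a Direct Connect location (cloud on-ramp).
connection = dx.create_connection(
    location="EqLD5",              # placeholder on-ramp location code
    bandwidth="1Gbps",
    connectionName="colo-to-aws-primary",
)

# 2. Once the port is physically connected and up, attach a private virtual
#    interface. This is where the customer-managed BGP peering is configured.
vif = dx.create_private_virtual_interface(
    connectionId=connection["connectionId"],
    newPrivateVirtualInterface={
        "virtualInterfaceName": "prod-vpc-vif",
        "vlan": 101,               # 802.1Q tag agreed with the NSP
        "asn": 65001,              # customer-side private BGP ASN
        "virtualGatewayId": "vgw-0123456789abcdef0",  # placeholder VGW
    },
)
print(vif["virtualInterfaceState"])
```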

Direct Ethernet connectivity to the cloud addresses these performance and security problems, giving customers reliable, low-latency, consistent and high-throughput access to the cloud. It is provided via cloud on-ramps at carrier-neutral data centres where the public CSPs are present, connecting your premises or facilities through an NSP to the cloud provider via a direct Layer 2 link. It is also increasingly available in automated form over digital infrastructure platforms, with the same On Demand benefits described earlier: online ordering via a portal or API, real-time delivery of new services and bandwidth that can be scaled in minutes with flexible commercial options.

CSPs typically charge data transfer fees, and these differ when connecting to the cloud through direct Ethernet connectivity versus through the Internet. Direct connectivity can therefore be particularly cost-effective if you are likely to be transferring large volumes of data out of your cloud environment (known as ‘egress’) towards your location. As a rough illustration, below is a comparison of connecting to AWS via a dedicated offering (AWS Direct Connect) versus via the Internet.
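The rates used here are assumptions for illustration only (roughly $0.09 per GB for Internet egress, $0.02 per GB for Direct Connect egress, and around $220 per month for the dedicated port and circuit); always check the current CSP and NSP price lists. A minimal sketch of the break-even calculation:

```python
# Illustrative egress cost comparison: Internet vs. AWS Direct Connect.
# The per-GB rates and port charge below are assumptions for the sake of
# the example; always check current CSP and NSP price lists.
INTERNET_EGRESS_PER_GB = 0.09      # assumed $/GB out over the public Internet
DX_EGRESS_PER_GB = 0.02            # assumed $/GB out over Direct Connect
DX_FIXED_MONTHLY = 220.0           # assumed monthly port + circuit charge ($)

def monthly_cost(egress_gb: float) -> tuple[float, float]:
    """Return (internet_cost, direct_connect_cost) for a month's egress."""
    internet = egress_gb * INTERNET_EGRESS_PER_GB
    direct = DX_FIXED_MONTHLY + egress_gb * DX_EGRESS_PER_GB
    return internet, direct

for gb in (1_000, 5_000, 50_000):
    internet, direct = monthly_cost(gb)
    print(f"{gb:>7} GB/month: Internet ${internet:>9.2f}  Direct Connect ${direct:>9.2f}")

# With these assumed rates the break-even point is roughly
# 220 / (0.09 - 0.02) ≈ 3,150 GB of egress per month.
```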

Direct Ethernet Connect

Pros:
  • Supports all topologies (premise to cloud, premise to multi-cloud and cloud to cloud)
  • Bandwidth services up to 40Gbps available
  • Bandwidth is fully dedicated and guaranteed end-to-end
  • On demand delivery and scaling typically available
  • End-to-end connectivity SLA with deterministic latency and performance
  • Very well suited and cost-efficient for higher data transfer volumes, due to a lower price per Gigabyte of egress than via the Internet
  • Not subject to DDoS attacks, as traffic bypasses the public Internet

Cons:
  • Only suitable for a single customer site (not multisite/WAN connectivity)
  • Requires a dedicated circuit
  • Customer has to handle BGP peering
  • By default a Layer 2 service; some NSPs provide a managed router (L3)

Wave cloud connect

Together with the increased demand for cloud connect, the requirement for higher bandwidths is growing. Optical cloud connect (also known as Wavelengths or Layer 1 connectivity) refers mainly to extremely high bandwidth connections to the cloud. These services are delivered over optical Layer 1 platforms and can provide 10G or 100G connectivity towards a Cloud Service Provider.

Optical Wave services are known in the market for end-to-end transparency of data transmission, for being fully managed, and for important features such as an end-to-end routing diagram (KMZ), zero frame loss and jitter, and fixed latency.
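The "fixed latency" of an optical wave is largely determined by the physical route length, since light propagates through fibre at roughly 200,000 km/s – about 5 microseconds per kilometre, one way. A quick sketch of the deterministic round-trip delay for a given route length (figures are approximate and ignore equipment latency):

```python
# Estimate the deterministic round-trip delay (RTD) of an optical wave
# from the fibre route length. Propagation in glass is ~200,000 km/s,
# i.e. roughly 5 microseconds per kilometre each way.
US_PER_KM = 5.0   # approximate one-way propagation delay per km of fibre

def round_trip_delay_ms(route_km: float) -> float:
    """Out-and-back propagation delay in milliseconds, equipment excluded."""
    return 2 * route_km * US_PER_KM / 1000.0

for route_km in (100, 600, 2000):
    print(f"{route_km:>5} km route ≈ {round_trip_delay_ms(route_km):.1f} ms RTD")
# 100 km ≈ 1.0 ms, 600 km ≈ 6.0 ms, 2000 km ≈ 20.0 ms
```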

Pros:
  • High bandwidth of 10G and 100G
  • Customer-defined route or 'hard' diversity end-to-end with KMZ diagram
  • Customer-dedicated bandwidth (fixed latency, zero frame loss & jitter)
  • Secure L1 transparent optical connectivity
  • End-to-end connectivity SLA with deterministic latency and performance
  • Offering includes diversity options and encryption feature

Cons:
  • Only dedicated cloud port connection options
  • Only available in point-to-point topology
  • Not cost efficient for low bandwidth

MPLS IP VPN cloud connect

Integrating cloud connectivity into an IP-VPN (also known as IP-VPN cloud connect or MPLS-WAN technology) is a scalable and cost-effective way to access cloud services within a network.

MPLS IP-VPN provides direct, high-bandwidth and secure cloud connectivity to CSPs. It is suited to customers that require secure access to the cloud across multiple sites, and has traditionally been a common way for businesses to connect to cloud providers.

The cloud connection is directly integrated into the IP-VPN, so that it is completely private, with no reliance on the Internet. The cloud locations are integrated into the private WAN and effectively seen as another site (or sites) on the IP-VPN, meaning there is no need to redesign large corporate networks. Different customer locations in the IP-VPN then share the connectivity to access their resources in the cloud.

Pros:
  • Very suitable for integration into existing and new MPLS IP-VPN networks
  • Highly secure, part of the private IP-VPN
  • No need to redesign large corporate networks
  • Fully integrated into the IP-VPN (any-to-any), avoiding the need to backhaul traffic
  • Cost-effective, as multiple locations on the IP-VPN share the connectivity towards the cloud
  • Supports different topologies: single cloud, multi-cloud and cloud-to-cloud

Cons:
  • MPLS only, no Internet branch sites
  • Layer 3 connectivity
  • Dedicated connection required
  • Can increase latency, depending on where branch sites are located

SD WAN cloud connect

SD WAN (sometimes called SDWAN, SD WAN Cloud Access or SD WAN Multi-Cloud) can connect your software-defined WAN infrastructure to multiple cloud service providers (such as AWS, Microsoft Azure and Google Cloud) to enable direct, high performance and secure multi-cloud connectivity. Each branch office benefits from seamless end-to-end connectivity to your public cloud providers.

For cost-effective, direct connectivity into multiple cloud environments, SD WAN is likely the optimal solution.

SD WAN offers sophisticated and comprehensive connectivity capabilities, with features including prioritisation, optimisation, security, analytics, and automated provisioning and deployment. It provides a single, cohesive view of the enterprise network, tying together WAN sites, IaaS/SaaS cloud and branch site connectivity, typically within a single online portal. Coupled with on-demand capabilities such as zero touch site provisioning and real-time bandwidth upgrades, SD WAN is an extremely powerful solution.
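As a simplified, hypothetical illustration of how an SD WAN policy steers traffic, the sketch below models dynamic path selection: each application class has latency and loss thresholds, and traffic is placed on the best available underlay (MPLS, Internet or direct cloud connect) that meets them. Real SD WAN platforms express this as vendor-specific policies rather than Python; the thresholds and path names here are illustrative only.

```python
# Hypothetical sketch of SD WAN dynamic path selection: pick the best
# underlay path that currently meets an application's latency/loss thresholds.
from dataclasses import dataclass

@dataclass
class PathStats:
    name: str
    latency_ms: float
    loss_pct: float

# Per-application SLA thresholds (illustrative values).
APP_POLICY = {
    "voice":  {"max_latency_ms": 30,  "max_loss_pct": 0.5},
    "o365":   {"max_latency_ms": 60,  "max_loss_pct": 1.0},
    "backup": {"max_latency_ms": 200, "max_loss_pct": 2.0},
}

def select_path(app: str, paths: list[PathStats]) -> str:
    policy = APP_POLICY[app]
    # Keep only paths that currently meet the app's thresholds.
    eligible = [p for p in paths
                if p.latency_ms <= policy["max_latency_ms"]
                and p.loss_pct <= policy["max_loss_pct"]]
    # Prefer the lowest-latency eligible path; fall back to best effort.
    best = min(eligible or paths, key=lambda p: p.latency_ms)
    return best.name

paths = [
    PathStats("mpls", latency_ms=22, loss_pct=0.1),
    PathStats("internet", latency_ms=48, loss_pct=0.8),
    PathStats("cloud-connect", latency_ms=12, loss_pct=0.0),
]
print(select_path("voice", paths))   # -> cloud-connect
print(select_path("backup", paths))  # -> cloud-connect (all eligible, lowest latency)
```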

Prior to SD WAN, traffic was typically backhauled to a central site or regional hub, where a physical hardware stack provided functionality that was cost-prohibitive to deploy at satellite sites (such as security and analytics). SD WAN enables this functionality to be deployed in software on a common hardware platform. These software stacks comprise various software functions that can be dynamically loaded and deployed in a modular fashion, including:

  • Networking & routing
  • Analytics
  • Security
  • Traffic optimisation
  • Remote access
  • and more

By tying together WAN sites and cloud infrastructure, SD WAN can deliver end-to-end security, performance and visibility.

Building on the MPLS IP VPN option above, SD WAN offers private connectivity into multiple cloud providers in a single solution, combined with end-to-end performance backed by an SLA, end-to-end security and end-to-end analytics.

Pros:
  • The best way to manage multi-cloud infrastructures (MPLS and Internet branch sites)
  • Completely avoids the need to backhaul traffic from a branch site to a CSP or data centre
  • Bandwidth is fully dedicated and guaranteed end-to-end
  • Automatic provisioning and deployment
  • Dynamic path selection – intelligent and dynamic routing to the best available path
  • Additional security features such as firewall/NAT to support the CSP public domain
  • End-to-end visibility and management of the entire enterprise network
  • Supports all topologies – WAN to cloud, WAN to multi-cloud and cloud to cloud
  • Also supports Internet-only branch sites connecting directly to the CSP through SD WAN

Cons:
  • Can require significant network changes and redesign to leverage all the benefits
  • Newer services, such as on demand capabilities, may be limited
  • Check support for your specific cloud provider (CSP) requirements
  • Check support and roadmap for features such as application optimisation, analytics, SASE and more
  • Can increase latency, depending on where branch sites are located

Questions to ask your cloud connect provider

There is no ‘one-size-fits-all’ solution for enterprises connecting to the cloud. Here are some things to consider.

Top 10 questions and considerations to ensure a new provider keeps you future-proofed:

  1. What level of partnership do you have with the major cloud providers?
  2. How many public cloud points of presence do you have?
  3. How many data centres are currently connected to your network?
  4. How many offices are currently connected to your network?
  5. Do you provide on demand capabilities via a self-serve software portal?
  6. Are you data centre and cloud service provider neutral?
  7. Who owns your fibre network - is it privately owned or leased from a 3rd party?
  8. Do you provide end-to-end connectivity, including the last mile?
  9. Do you provide guaranteed SLAs including for latency, packet loss and throughput?
  10. What bandwidths are supported for cloud connectivity?

Cloud decision tree

Below is a high-level decision tree for the cloud connectivity options in Colt's portfolio:

Looking for some help or advice? Click here to chat with our team or view our cloud connect products.

With thanks to - Stuart Brameld (Marketing Manager), Marc Heijnen (Product Marketing & Management, Cloud Connectivity Services), Mohit Manral (Product Manager, SD WAN), Yusaku Tanaka (Product Manager, IP Access).

Version 1.2, updated 8th March 2023

What is Network Function Virtualisation?

Network Function Virtualisation is a new way to add, distribute and run networking services. It decouples physical network functions from their dedicated hardware devices so that they can run on standardised hardware – think of them as apps on the Google Play Store, all made by different developers, running on the same device. These functions, such as a firewall or intrusion prevention, become Virtual Network Functions (VNFs).

uCPE uses these Virtual Network Functions to consolidate many specialised devices into one general-purpose box. It is the next step in the evolution of the smart network, and it puts far more control than ever before in the hands of the customer.

CPEs are having a smartphone moment

The introduction of uCPE has done for network functions what smartphones did for our day-to-day lives. It works (roughly) in three steps:

  1. Hardware services are converted into software applications (VNFs) – so, just as there is an app for streaming music, there is a VNF for a firewall
  2. These run on a vendor-agnostic platform known as the virtualisation layer (much like the Google Play Store)
  3. These apps can then run on the virtualisation layer simultaneously – just as Spotify runs in the background while you order an Uber on the same phone, a business can run its chosen router and firewall platforms in a virtualised manner, on shared hardware

What are the key benefits of uCPE?

Moving to virtualisation is a key part of an organisation’s digital transformation, and brings a wide range of benefits, including:

  • Reducing capital expenditure by removing the need to buy purpose-built, vendor-specific hardware
  • Reducing operational expenditure through lower running costs (less space needed to house equipment, less power needed to run it, and so on)
  • Saving time on long procurement processes with many different vendors
  • Lowering the risk of rolling out new services by allowing providers to trial and roll back services as the customer needs them
  • Removing the need for engineer site visits through Zero Touch Provisioning, where the uCPE auto-installs and configures itself at power-on and configuration updates can be made remotely (a simplified flow is sketched below)
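For illustration, here is a heavily simplified, hypothetical sketch of what a Zero Touch Provisioning flow can look like: at power-on the uCPE identifies itself to a provisioning service, downloads its intended configuration and VNF list, and applies them without an engineer on site. The URL, payload fields and helper functions are placeholders, not any specific vendor's API.

```python
# Simplified, hypothetical Zero Touch Provisioning flow for a uCPE device:
# on boot, fetch the device's intended configuration and VNFs, then apply them.
import requests

PROVISIONING_URL = "https://ztp.example-nsp.com/api/devices"  # hypothetical

def zero_touch_provision(serial_number: str) -> None:
    # 1. Phone home with the device identity (serial number / certificate).
    resp = requests.get(f"{PROVISIONING_URL}/{serial_number}/config", timeout=30)
    resp.raise_for_status()
    config = resp.json()

    # 2. Apply the base networking configuration (placeholder apply logic).
    apply_network_config(config["network"])

    # 3. Pull and start each VNF assigned to this site (e.g. SD WAN, firewall).
    for vnf in config["vnfs"]:
        download_image(vnf["image_url"])
        start_vnf(vnf["name"], vnf["parameters"])

def apply_network_config(network: dict) -> None:
    print("applying network config:", network)

def download_image(url: str) -> None:
    print("downloading VNF image:", url)

def start_vnf(name: str, parameters: dict) -> None:
    print("starting VNF:", name, parameters)

if __name__ == "__main__":
    zero_touch_provision("UCPE-SN-000123")
```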

The impact of uCPE

Today people expect a lot from their network, both in terms of performance and bandwidth, and in terms of flexibility and responsiveness. Cloud computing ushered in an era of virtualisation for enterprise IT and Local Area Network (LAN) infrastructure, bringing a major change in how IT services were deployed and used. At the time, however, this was barely felt in the Wide Area Network (WAN) world until the deployment of SD WAN, and even that was limited in scope.

If virtualisation is being introduced for enterprises in the enterprise IT and LAN infrastructure, why not for the WAN?

uCPE is bringing the power of the cloud to the traditional telecommunications network. With lower costs, faster rollout and less overall maintenance, uCPE makes operational improvement, experimentation and innovation easier. The software-defined approach also provides greater visibility into data and encryption analytics and application usage, making it quicker and easier for businesses to optimise their WAN.

uCPE will evolve to the Edge

Colt uCPE already offers flexibility and agility, with an expanding list of VNFs available to customers. Similar to Google Play, network service providers are developing their own version of a “Play Store” or VNF library so that customers can choose, buy, download and update functions at will.

uCPE is a crucial step in the transition to Multi-access Edge Computing. Network service providers have started carving out space (including compute, storage and memory) for customers to run their own applications, offering a “micro-cloud” at the customer premises where businesses will be able to run whatever applications they need. For example, if a customer wants to run a security monitoring solution, instead of running it on proprietary hardware they can run it on this edge device.

Find out more about uCPE and how it can help your business.

Key takeaways of uCPE

Reducing upfront & running costs

When using Universal Customer Premises Equipment, the same server can be used for multiple network functions. This cuts down initial capital expenditure and overall operational expenditure over the long term.

A catalyst for innovation & services on-demand

uCPE brings the power of the cloud to the telco network, and having an open, programmable platform drives innovation. As network functions are software-based (rather than hardware-based), it is easier to introduce new functions. Such a software-centric uCPE solution means services can be turned up on demand.

Automate and simplify operations

Standardised protocols in the data, control and management planes of uCPE can streamline and simplify network integration and operation, and drive automation.

Software-defined networking & infrastructure

Software-defined Wide Area Networks (SD-WAN) go hand in hand with uCPE. Providers can deploy their virtualised services on a low-cost platform that enables the deployment of a wide variety of VNFs. Customers can scale bandwidth, speeds and additional applications as they see fit, on demand.

No dedicated appliances for WAN services

Taking advantage of virtualisation, the uCPE platform avoids the need for dedicated appliances for WAN services such as SD WAN, firewalls and WAN optimisers, replacing them with equivalent software-based VNFs. Combined with orchestration capabilities, the uCPE platform provides software-based dynamic control, allowing service providers to deliver on-demand WAN services.

Key terms demystified

Network Function Virtualization (NFV)

NFV is the generic term for the process of separating network functions from dedicated hardware appliances so that they can run as software on standardised hardware. It is the initiative to convert hardware-based network functions into software applications; these applications are VNFs (see below).

Virtual Network Functions (VNF)

VNFs are virtualised tasks formerly carried out by proprietary, dedicated hardware. VNFs move individual network functions, such as a firewall or SD WAN, out of dedicated hardware devices and into software that runs on commodity hardware.

Universal Customer Premise Equipment (uCPE)

uCPE is a general-purpose platform that integrates compute, storage and networking on a commodity, off-the-shelf server, allowing it to provide multiple VNFs at the customer location. VNFs run on uCPE.

Network Functions Virtualization Infrastructure (NFVI)

NFVI encompasses all the hardware and software components needed to enable and support Virtual Network Functions. This includes operating systems, servers, hypervisors and any other physical or virtual assets that form the platform for supporting NFV and hosting VNFs. NFVI can sit at any location in the network: at the customer premises, at the edge, or in the network core.

Edge computing

Edge computing optimises internet devices and web applications by bringing computing power closer to the source of the original data. This minimises the need for long-distance communication between a client and a server, which reduces latency and the overall bandwidth needed.

SD WAN

SD-WAN is a software-defined approach to managing the wide-area network, or WAN. Through a centralised interface, a cloud-delivered SD-WAN architecture allows companies to scale their services to meet their own specifications. Find out more about SD WAN here.

Zero Touch Provisioning

Zero-Touch Provisioning is an advantage of software-defined applications: devices can be provisioned and configured automatically, without the need for an engineer to visit the site and install them manually.

Our uCPE solution

We have partnered with market-leading vendors to roll out our uCPE solution globally. The Colt uCPE solution consists of a generic hardware platform, a vendor-agnostic virtualisation layer and a host of different virtual applications, or VNFs.

VNFs for SD-WAN and Firewall services are already available. Our uCPE technology portfolio is continuing to develop, with VNFs for new network services and vendors added frequently. Please contact your Colt account executive for the latest catalogue of applications supported by Colt uCPE.

Find out more about what our uCPE solution can do for you and your business.