"We cannot afford to be narrow minded about AI infrastructure. As industry leaders, we must embrace a broader, bolder vision that goes far beyond connectivity.

Enabling true AI readiness for enterprises also demands security, sovereignty, capacity, scalability, low-latency, capillarity, responsibility and simplicity."
Keri Gilder
Colt Technology Services

Executive Summary

Reinvention is driving our industry forward as traditional telecommunications fall short of the expanding digital capabilities demanded by our customers in the age of AI. Basic connectivity and standalone network services are no longer fit for purpose; enterprises seek greater value from their technology providers as they transform their organisations for the Inference Age. As an industry of digital infrastructure leaders, we set the standard for truly AI-ready infrastructure and have a significant opportunity to transform how enterprise customers connect to the technology that enables their AI.

why this matters now
  • Enterprise expectations have shifted: 80% of B2B telecoms customers believe we have a “right-to-play” beyond connectivity;1
  • The Inference Age has arrived: One in five global firms already spend US$750,000+ annually on AI,2 and inference workloads will dominate by 2030;3
  • Risk of staying narrow: Core connectivity spend is forecast to grow only 3.2%, while the growth of adjacent services exceeds 6%.4 Without reinvention, telcos risk losing ground to neocloud providers, hyperscalers and other fast-moving incumbents.

Enabling AI can be seen as the first opportunity to flex our broader, more integrated and innovative capabilities as intelligent digital infrastructure leaders. This is more than an “AI-ready” rebrand of our existing connectivity and network services; it is our chance to set the standard for digital infrastructure in the Inference Age. Security, sovereignty, capacity, latency, scalability, capillarity, responsibility and simplicity are the eight pillars that form Colt’s leadership framework for this era.


eight imperatives we must deliver

1. Security

Redefine security for the Inference Age by embedding modular defences such as zero trust and AI-driven threat intelligence into every layer of your network – ensuring security and resilience scale as fast as innovation.

2. Sovereignty

Build sovereign AI ecosystems that guarantee compliance and trust across fragmented jurisdictions – working with regional partners to enable agility and reach while keeping data under local control.

3. Capacity

Deliver the bandwidth and compute muscle AI demands without inflating cost or carbon emissions – leveraging an intent-driven backbone, subsea capacity and GPU-centric architectures.

4. Scalability

Make scaling infrastructure instant and effortless through automation and consumption-based models – enabling enterprises to innovate without procurement delays or overprovisioning.

5. Latency

Own the edge and leverage an intent-driven backbone to engineer sub-five-millisecond latency for real-time applications.

6. Capillarity

Expand network reach and build globally distributed edge infrastructure to position AI workloads closer to users – ensuring low latency, compliance and resilience for real-time innovation anywhere.

7. Responsibility

Set the standard for responsible AI by driving a people-first strategy and embedding fairness, transparency and sustainability into AI and infrastructure design and deployment.

8. Simplicity

Create the "easy-button" for AI-ready infrastructure with intelligent platforms and as-a-service models - removing complexity and making innovation effortless.

Enterprises’ AI initiatives will fail without modern digital infrastructure.
As an industry, a broader vision encompassing security, sovereignty, capacity, scalability, latency, capillarity, responsibility and simplicity is the key to claiming our stake as pioneers of new-age technology and digital infrastructure.

introduction

Telecommunications have been the backbone of global connectivity for decades. As an industry, our networks and infrastructure connect almost six billion people to the Internet, three billion devices to 5G, and more than 470 million businesses online, generating US$1.5 trillion in annual revenue. However, what the world needs from us is changing; traditional telecoms offerings are no longer fit for purpose. Businesses face mounting pressure to generate positive outcomes from their technology providers and partnerships. The rapid acceleration of technology and AI, and the dawn of the Inference Age, have redefined customers’ expectations of our capabilities and offerings.

Connectivity is no longer enough, so reinvention is driving our industry forward.

Those of us focused on the future have set a major transformation in motion, relinquishing the legacy of traditional telecoms to stride forward as intelligent digital infrastructure leaders. This transformation is not just a marketing exercise; it is a fundamental reinvention of business models, operations, capabilities and culture that evolves with the needs of our customers in the age of AI. It reflects the convergence of telecoms and technology as organisations strive to unlock AI’s full potential.

Compared with traditional telcos, future-focused telcos are more likely to work with partners to deliver on the customer promise; build a customer-centric organisation and culture; create intelligent and agile services, technologies and platforms; and design seamless, intentional experiences for customers, employees and partners, according to KPMG.5 We are solutions-focused, offering more than connectivity to meet equally critical demands for security, sovereignty, capacity, low-latency, scalability, capillarity, responsibility and simplicity.

Future-focused telcos vs traditional telcos
  1. 3.1x more likely to engage, integrate and manage third parties to help increase speed-to-market, reduce costs, mitigate risk, and close capability gaps to deliver on the customer promise
  2. 2.6x more likely to build a customer-centric organisation and culture that inspires people to deliver on the customer promise and drive up business performance
  3. 2.3x more likely to create intelligent and agile services, technologies and platforms, enabling the customer agenda with solutions that are secure, scalable and cost-effective
  4. 2.1x more likely to design seamless, intentional experiences for customers, employees and partners to support customer value propositions and deliver business objectives.

Why now?

Enterprises are no longer satisfied with the industry’s provisioning of basic, high-bandwidth connectivity or standalone network services as they face mounting pressure to maximise value from their technology providers and partnerships. According to McKinsey, almost 80% of B2B telecoms customers affirm that telcos have a “right-to-play” beyond traditional connectivity.6 As they modernise and transform their own businesses for the AI era, organisations seek comprehensive solutions that integrate seamlessly with their operations, deliver advanced automation, and enable data-driven decision-making. They want partners who can help them achieve positive outcomes such as navigating complexity, accelerating innovation and unlocking AI's full potential.

31% of European technology leaders say their tech partners and other providers do not currently offer what they need

Transformation into intelligent digital infrastructure leaders enables us to capture the breadth of enterprises’ technology needs. Importantly, it addresses the reality that many IT professionals struggle with our industry’s current offerings; a 2025 IDC survey revealed that 31% of European technology leaders say their tech partners and other providers do not currently offer what they need.7 Colt research found that a large proportion of CIOs are also re-evaluating their suppliers due to the demands of AI.8 Many providers remain anchored in legacy models which are focused on infrastructure and network operations, with limited emphasis on evolving enterprise needs or customer experience. A KPMG report revealed a 3% decline in customer experience scores across the industry in 2023 compared to the previous year.9

These results are a warning that traditional approaches are not fit for purpose in the new digital and AI era; reinvention is crucial for us to adapt and thrive in an evolving and increasingly competitive global market.

What it takes

Reinvention as something bigger, bolder and better requires us to embrace a broader vision.

Compared to telcos that are bogged down by legacy, intelligent digital infrastructure leaders prioritise agility, digital transformation and innovation to create value for customers in new ways. We build new offerings with customer AI propositions in mind, meaning that our solutions are secure, sovereign, high-capacity, low-latency, scalable, responsible and simple by design. We drive modernisation across our network and operational architectures, overhauling general purpose setups in favour of flexible, scalable environments that are optimised for diverse AI workloads and more expansive enterprise applications and use cases.

The AI Opportunity: enabling AI readiness

Our AI Opportunity

Enabling AI can be seen as the first opportunity to flex our broader, more integrated and innovative capabilities as intelligent digital infrastructure leaders. Currently, one in five global firms are spending US$750,000 or more annually on AI, prioritising AI-driven innovation and product development as well as generative AI for content development, according to Colt research.10

By 2030, however, McKinsey expects AI inference to account for a majority of AI workloads.11 Colt predicts that the next 12 months will see AI inferencing reach the next stage of maturity, shifting from experimentation to integration into the enterprise IT environment, where it will be used to extract insight, make predictions and enable smarter, context-aware decisions in real time.12 As an industry, our opportunity lies in capturing the breadth of enterprise IT needs for the Inference Age.

"Enabling AI can be seen as the first major opportunity to flex our broader, more integrated and innovative capabilities as intelligent digital infrastructure leaders."
Where some fall short

Some in our industry adopted the mindset early on that moving quickly was the key to claiming the AI prize.13 Swept up in an AI arms race, they moved with haste to get ahead of the curve but lost sight of the prospect of reinvention as intelligent digital infrastructure companies. Too many providers retreated to narrower visions and offerings, simply serving customers repackaged, rebranded versions of existing connectivity and digital infrastructure. Moving beyond connectivity, we must also deliver security, sovereignty, capacity, low-latency, scalability, capillarity, responsibility and simplicity for enterprises integrating AI. With many CIOs feeling challenged by our current offerings and re-evaluating their suppliers, our industry has clearly yet to capture the full breadth of enterprise IT needs for the Inference Age.

“Moving beyond connectivity, we must also deliver security, sovereignty, capacity, low-latency, scalability, capillarity, responsibility and simplicity for enterprises integrating AI.”

We cannot afford to be narrow-minded about AI infrastructure. As industry leaders, we must embrace a broader, bolder vision that goes far beyond connectivity. Enabling true AI readiness for enterprises also demands security, sovereignty, capacity, scalability, low-latency, capillarity, responsibility and simplicity. With this broader vision, enabling AI becomes about creating value for customers, being agile and innovative, and providing seamless digital experiences. It has never been more important to realign with the needs of our customers, whose IT demands are varied, dynamic and increasingly complex in the Inference Age. Those of us who focus on architecting solutions that support customer value propositions and deliver businesses’ AI objectives are the ones who will race ahead.

beyond connectivity: eight pillars for ai-ready infrastructure

Core connectivity remains critical to enterprise infrastructure, but it is insufficient on its own to drive business and serve AI-driven enterprises in the Inference Age. McKinsey expects only 3.2% growth in core connectivity spend over the next 12 months, while growth expectations for spending on telecom-related areas beyond the core surpass 6%.14 To meet customers’ broader ambitions and set the standard for AI-ready digital infrastructure, we must all move beyond basic connectivity offerings and embrace eight pillars: security, sovereignty, capacity, latency, scalability, capillarity, responsibility and simplicity. These pillars form Colt’s leadership framework for the Inference Age.

1. security

Cybersecurity is the primary telco and tech need among B2B organisations as AI growth drives data to be more distributed across networks, cloud environments and edge devices.15 Emerging risks and advanced threat capabilities demand that we prioritise security alongside innovation and agility; next-generation solutions are inherently resilient and secure by design. To be AI-ready, enterprise infrastructure must enable the secure exchange of proprietary datasets for AI workloads and proactively safeguard critical systems against increasingly sophisticated threats. We also have an opportunity to enable and deploy AI itself as a defence mechanism.

CASE STUDY SNIPPET

The U.S. Army Cyber Command developed Panoptic Junction (PJ), an AI platform that detects malicious traffic and anomalous activity across complex networks. Using advanced ML, PJ continuously analyses behaviour to spot subtle signs of emerging threats. After a successful prototype showing high detection accuracy, PJ is entering a 12‑month U.S. Cyber Command pilot—marking a major step in AI-enabled defence and enabling the Department of Defense to anticipate threats, protect critical infrastructure, and speed response through real-time anomaly detection.

Modern enterprise networks integrate advanced, modular defences – such as zero trust WAN segments, hybrid mesh firewalls and unified AI gateways – directly into the network. These controls protect enterprises against conventional and AI-specific threats such as prompt injection and agent-based attacks. Zero trust architectures should soon become the minimum acceptable security standard, while we focus on integrating solutions that pre-empt and actively block threats before they can materialise. Gartner® forecasts that “pre-emptive cybersecurity solutions will account for 50% of IT security spending by 2030, up from less than 5% in 2024, replacing standalone detection and response (DR) solutions as the preferred approach to defend against cyberthreats.”16

It also expects that “By 2029, technology products lacking pre-emptive cybersecurity will lose market relevance as buyers prioritise proactive defense over traditional detection and response.”17 Colt is considering how to add more proactive solutions to its robust security portfolio, which spans managed firewall with IP VPN, Advanced Threat Protection (ATP) and Intrusion Detection and Prevention (IDP); DDoS mitigation; and a Secure Web Gateway to deliver security-as-a-service.

"Zero Trust architectures should soon become the minimum acceptable security standard, while we focus on integrating solutions that pre-empt and actively block threats before they can materialise."

Edge-based solutions must also be prioritised to deliver security and data privacy. Crucially, they enable sensitive enterprise data to bypass centralised servers for processing, which reduces exposure to cyberthreats as data remains closer to its source and under local control. Integrating capabilities such as zero trust, AI-driven threat intelligence and pre-emptive cybersecurity into edge infrastructure will give enterprises the resilience and agility they need to innovate securely and confidently in the AI era.
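
To make the zero trust principle concrete, the minimal sketch below (in Python, with hypothetical roles, segments and posture signals) illustrates the kind of policy decision a zero trust control point makes before any request reaches an AI workload: identity, device posture and least-privilege segment access are all verified, and the default is to deny.

```python
from dataclasses import dataclass

# Hypothetical sketch of a zero trust policy decision point: every request is
# evaluated on identity, device posture and context -- nothing is trusted by default.

@dataclass
class AccessRequest:
    user_role: str          # e.g. "data-scientist"
    device_compliant: bool  # posture signal from an endpoint-management feed (assumed)
    mfa_verified: bool
    source_region: str      # where the request originates
    target_segment: str     # WAN segment hosting the AI workload

# Illustrative policy: which roles may reach which zero-trust WAN segments.
SEGMENT_POLICY = {
    "inference-edge": {"data-scientist", "ml-engineer"},
    "training-cluster": {"ml-engineer"},
}

def evaluate(req: AccessRequest) -> bool:
    """Grant access only when every zero trust signal passes."""
    if not (req.device_compliant and req.mfa_verified):
        return False                       # fail closed on posture or identity
    allowed_roles = SEGMENT_POLICY.get(req.target_segment, set())
    return req.user_role in allowed_roles  # least-privilege segment access

# Example: a compliant, MFA-verified engineer reaching the training cluster.
print(evaluate(AccessRequest("ml-engineer", True, True, "EU", "training-cluster")))  # True
```

In a production network this decision logic sits in the control plane and is evaluated continuously, not just at login; the sketch only shows the shape of the check.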

Beyond AI: Quantum reshaping the digital trust landscape

AI is not the only force pressuring enterprise IT leaders to protect their data and digital infrastructure. Attention and investment are also turning to quantum security as CIOs develop a deeper understanding of quantum’s power and potential. It is critical that we consider quantum, and its interplay with AI, to serve the full breadth of enterprise IT security needs. Forrester forecasts that quantum security spending will exceed 5% of enterprises’ overall IT security budgets in 2026,18 while a report from The Quantum Insider estimates the quantum security market to grow at over 50% CAGR to 2030, reaching US$10 billion.19 With traditional data cryptography methods at risk of being deciphered by quantum computers, latest estimates suggest that the point at which this happens – known as Q Day – could come as soon as 2030.20

Technologies such as post-quantum cryptography (PQC) and quantum key distribution (QKD) protect traffic from this risk as it travels across a network. In 2025, Colt and technology partners successfully trialled quantum-secured encryption across its optical wave network. Our industry must lead further trials, development and innovation to protect data from quantum and AI risk, as we remain committed to delivering solutions for the broad spectrum of enterprise IT needs.

2. Sovereignty

Enterprise IT leaders are navigating an increasingly fragmented regulatory landscape as they build and deploy AI systems using their own data, infrastructure, people and policies. According to Gartner, “By 2027, fragmented AI regulation will grow to cover 50% of the world’s economies, driving US$5 billion in compliance investment.”21

It also expects that “By 2027, 35% of countries will be locked into region-specific AI platforms using proprietary contextual data."22

Meanwhile, regulatory frameworks such as the EU AI Act are evolving to keep pace with the technology, and businesses face mounting pressure to meet stricter compliance standards while remaining agile and competitive.

Without regulatory knowledge and effective governance, enterprises risk fragmented operations and limited access to locally governed AI services. As a result, we must prioritise digital sovereignty as a key pillar of AI-ready infrastructure. Sovereignty, in this context, refers to the authority and control an organisation or nation exercises over its AI data, infrastructure and operations, ensuring they comply with local laws and regulations while maintaining independence from external influence.

Geopolitical uncertainty is amplifying the pressure on governments and IT leaders to establish sovereign AI stacks. As per a Gartner report, “By 2029, committed countries will need to spend at least 1% of their GDP on AI infrastructure.”23 Enterprises are also taking responsibility into their own hands through geopatriation – an emerging phenomenon where companies strategically relocate their data and applications from global public clouds to sovereign or regional cloud providers, or even on-premises data centres, to mitigate geopolitical risk. Gartner estimates that, “By 2030, more than 75% of European and Middle Eastern enterprises will geopatriate their virtual workloads into solutions that are designed to reduce geopolitical risk, up from less than 5% in 2025.”24

"By 2030, more than 75% of European and Middle Eastern enterprises will geopatriate their virtual workloads...to reduce geopolitical risk."

True AI readiness requires us to guarantee data sovereignty by embedding controls that govern where and how data moves across multi-cloud and hybrid environments.

Data must be routed in alignment with organisational and jurisdictional boundaries, without compromising performance. Intent-driven geo-routing and zero trust WAN segmentation enable organisations to comply with regional data residency requirements and maintain control over sensitive information. Through quantum-safe communication, traceability and auditability, we can achieve tamper-resistant data flows, ensuring that data in motion remains unaltered and fully compliant. We must also integrate observability, auditability and reporting to provide transparency and traceability. As we serve global enterprises and organisations operating across multiple jurisdictions, it is also important to build strong relationships with local cloud providers, large language model (LLM) vendors and leaders in sovereign-by-design AI stacks. Enabling compliant, resilient and sovereign AI ecosystems builds trust with customers and enables us to unlock growth in regulated markets.
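
As a simple illustration of intent-driven geo-routing, the sketch below (with invented region codes and residency rules, not any specific regulation or product) checks a proposed network path against a data-residency policy and rejects any route that would carry regulated data through a non-compliant region.

```python
# Illustrative sketch of intent-driven geo-routing: before a flow is admitted,
# the intended path is checked against data-residency policy for that dataset.
# Region codes and rules below are assumptions for illustration only.

RESIDENCY_POLICY = {
    "eu-personal-data": {"allowed_regions": {"EU"}},          # must stay within the EU
    "public-model-weights": {"allowed_regions": {"EU", "US", "APAC"}},
}

def path_is_compliant(data_class: str, path_regions: list[str]) -> bool:
    """True only if every hop on the proposed path sits in an allowed region."""
    policy = RESIDENCY_POLICY.get(data_class)
    if policy is None:
        return False                      # unknown data class: fail closed
    return all(region in policy["allowed_regions"] for region in path_regions)

# A route via a US transit hop would be rejected for EU personal data ...
print(path_is_compliant("eu-personal-data", ["EU", "US", "EU"]))   # False
# ... while an EU-only path satisfies the residency intent.
print(path_is_compliant("eu-personal-data", ["EU", "EU"]))         # True
```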

3. capacity

AI-powered innovation is driving demand for robust, on-demand network capacity. Enterprises across industries are piloting AI to personalise customer experiences, enable real-time decision making and accelerate product development through large-scale simulations and high-fidelity modelling run by AI.

In 2025, 58% of the 1,500 CIOs questioned in Colt research added more capacity to their networks due to growing AI demands.25 More scalable bandwidth is critical to support the proliferating and unpredictable demands of enterprise AI workloads while controlling the cost and power consumption of our supporting infrastructure.

Strategic bandwidth allocation is imperative. Networks designed for AI must leverage a deterministic, intent-driven backbone that can dynamically allocate bandwidth and resources where they are needed most – for data ingestion, model training, inference and other intensive AI applications. This elastic approach ensures high throughput of 10G to 100G and beyond, supporting exponential traffic growth across clouds, data centres and remote sites. Importantly, it also reduces the need for costly overprovisioning and limits carbon impact.
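
The sketch below illustrates the principle of intent-driven bandwidth allocation in miniature: the workload classes, priorities and the 100G link are invented for illustration, but the logic – serve the most latency-critical AI traffic first and let elastic workloads absorb whatever capacity remains – is the behaviour an intent-driven backbone automates at network scale.

```python
# Minimal sketch of intent-driven bandwidth allocation: capacity is assigned to
# AI workload classes by declared intent (priority) rather than overprovisioned.
# Workload names, priorities and the 100G link size are illustrative assumptions.

LINK_CAPACITY_GBPS = 100

# (workload, requested Gbps, priority -- lower number = more critical)
DEMANDS = [
    ("inference-traffic", 40, 1),   # latency-sensitive, served first
    ("training-sync", 60, 2),
    ("bulk-data-ingest", 50, 3),    # elastic, takes whatever is left
]

def allocate(demands, capacity):
    """Greedy allocation by priority; leftovers go to lower-priority classes."""
    allocation, remaining = {}, capacity
    for name, requested, _priority in sorted(demands, key=lambda d: d[2]):
        granted = min(requested, remaining)
        allocation[name] = granted
        remaining -= granted
    return allocation

print(allocate(DEMANDS, LINK_CAPACITY_GBPS))
# {'inference-traffic': 40, 'training-sync': 60, 'bulk-data-ingest': 0}
```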

To sustain global AI scale, our industry must utilise subsea cable capacity and the computational power of GPU-centric architectures. Subsea systems can deliver multi-terabit throughput and low latency to move AI training data and synchronise inference across continental data centre hotspots. AI workloads transmitted over transatlantic cables are projected to surge from just 8% of total capacity in 2025 to 30% by 2035,26 which underscores the importance of subsea infrastructure.

Meanwhile, GPUs offer massive parallel processing capabilities, enabling faster model training, lower latency inference and improved scalability for enterprise applications. The transition from CPU-based systems – which cannot keep pace with the computational intensity of model training and other advanced AI workloads – requires optimised interconnects, high-bandwidth memory and software frameworks that fully leverage GPU acceleration. The adoption of GPU-based systems will enable enterprises to unlock the performance needed to deliver real-time insights and support agentic and inference AI at scale. Colt and technology partners are leading innovative trials to pioneer technologies that unlock greater capacity and better performance without increasing energy consumption or carbon emissions.

CASE STUDY SNIPPET
Transatlantic terabit network delivers ai content seamlessly
To support content providers now consuming 74% of global used international bandwidth, Colt and Ciena launched a new transatlantic and terrestrial terabit network in October 2025. Powered by Ciena’s WaveLogic 6 Extreme (WL6e) transponder, the upgrade boosts single‑fibre capacity by 20% and cuts space, power use and emissions by 50% versus previous models. With WL6e, Colt’s transatlantic capacity per wave rises from 450 Gbps to 1.2 Tbps—a 140% increase over ~6,500 km—expanding support from roughly 18,000 to 44,000 simultaneous cloud‑gaming sessions. Terrestrial capacity between Lisbon and Madrid also jumps 140%, from 600 Gbps to 1.5 Tbps. Colt can now generate up to 1.5× the capacity of traditional C‑band spectrum. This expanded network enables hyperscalers to meet growing demand from AI content, gaming and streaming, while significantly reducing power use and carbon emissions.

4. Scalability

Businesses that are experimenting, modelling and innovating with AI need infrastructure that flexes and scales on demand. As AI programmes graduate from pilot to fully integrated deployments, enterprises will depend on the scalability of their infrastructure to dial up capacity quickly and efficiently. More than handling steady growth, however, we must be prepared to enable sudden spikes in bandwidth demand without disruption or delay.

Imagine: a financial trading company needs to temporarily scale its infrastructure to run a high-intensity synthetic data experiment, which requires it to generate millions of diverse datasets for model training and validation. The company relies on short-burst, compute-heavy workloads to stress-test AI systems under varied conditions and accelerate its product innovation. Meanwhile, a global retailer demands varying levels of network capacity and connectivity throughout the year as it anticipates fluctuations in demand driven by seasonal and consumer behaviour patterns.

Without scalable infrastructure, these organisations are forced to endure lengthy procurement cycles and wasteful overprovisioning which inflates costs and carbon emissions and prolongs time-to-market.

CASE STUDY SNIPPET
UK retailers scale ai for the festive season
Scalable infrastructure proved essential for the 87% of UK retail decision-makers who deployed AI to manage Christmas and Black Friday peaks in 2025.27 Retailers relied on elastic, cloud-native networks to handle sudden surges in compute and bandwidth demand, enabling real-time personalisation, predictive inventory management and fraud detection without complexity or costly overprovisioning. More than half (56%) started using AI months ahead of the festive season, highlighting its central role in seasonal supply-chain planning and forecasting. By leveraging the on-demand scalability of their AI, retail businesses can reduce stockouts, accelerate time-to-market and deliver seamless customer experiences during the year’s most critical trading period.

Scalability turns networks into programmable growth platforms that can add capacity, locations and workloads at the speed of AI initiatives. When enterprises can scale seamlessly across clouds, data centres and remote sites, they can experiment, deploy and optimise with AI effortlessly. They must be able to dial network services up or down at the touch of a button – as sketched below – to foster agility, accelerate time-to-market and drive their business ahead as AI capabilities evolve.

"Scalability turns networks into programmable growth platforms which can add capacity, locations and workloads at the speed of AI initiatives."

5. latency

Real-time AI applications demand predictable, ultra-low latency. Milliseconds matter in the Inference Age when complex models and terabits of data must be processed at scale. Our traditional “low latency” networks simply cannot deliver the seamless digital experiences enterprises have come to expect.

A deterministic, intent-driven backbone is crucial to ensure that AI workloads – including training, inference and real-time data exchange – are prioritised and routed along optimal network paths. Networks with this foundation ensure responsive, reliable and efficient operations across data centres, multi-cloud and hybrid environments, which translates into seamless, near-real-time AI for customers.

Edge inference is another critical enabler of low latency that moves processing closer to the data source – on embedded systems or nearby edge servers – allowing enterprises to accelerate the generation of insights and deliver results in real time. Today, many AI applications rely on cloud-based inference, where delays occur as data travels to the cloud for processing and bandwidth constraints slow the transfer and processing of large datasets. Inference at the network edge eliminates these constraints by keeping computation local, ensuring faster response times and improved performance for latency-sensitive AI workloads. Practically, this can mean a GPU-powered server on a factory floor orchestrating robotic cells, or an IoT device in a vehicle or retail store performing immediate analysis on sensor feeds to drive instantaneous actions. Other high-impact use cases include emergency response (see case study below), real-time object detection for self-driving cars and autonomous robots, predictive maintenance for large mechanical assets, time-series anomaly detection for operational resilience, and automated financial trading where microseconds translate to competitive advantage.
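
A toy comparison makes the latency argument tangible. In the sketch below, both the edge and cloud paths are simulated with placeholder timings (roughly 2 ms of compute, plus an assumed 40 ms WAN round trip for the cloud path); only the locally served request fits inside a five-millisecond control-loop budget.

```python
import time

# Illustrative sketch of why edge inference matters for a latency budget: the
# same request served locally versus via a round trip to a distant cloud region.
# The model stubs and timings are assumptions for illustration only.

LATENCY_BUDGET_MS = 5.0          # e.g. a robotic-cell control loop

def local_edge_inference(frame: bytes) -> str:
    """Stand-in for a lightweight model running on an on-site GPU server."""
    time.sleep(0.002)            # ~2 ms compute, no WAN hop
    return "actuate"

def cloud_inference(frame: bytes) -> str:
    """Stand-in for the same model behind a cloud endpoint far from the site."""
    time.sleep(0.002 + 0.040)    # ~2 ms compute + ~40 ms round trip (assumed)
    return "actuate"

for name, infer in [("edge", local_edge_inference), ("cloud", cloud_inference)]:
    start = time.perf_counter()
    infer(b"sensor-frame")
    elapsed_ms = (time.perf_counter() - start) * 1000
    verdict = "within" if elapsed_ms <= LATENCY_BUDGET_MS else "exceeds"
    print(f"{name}: {elapsed_ms:.1f} ms ({verdict} the {LATENCY_BUDGET_MS} ms budget)")
```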

As the total cost of ownership for edge-based inferencing decreases, and demand for real-time analytics and hyperautomation continues to climb, we must evolve our solutions to meet the performance expectations of customers. As an industry, we have a strategic opportunity to design, build and integrate high-performance edge solutions. Lightweight architectures and inference efficiency optimisation strategies will enable us to address the resource constraints of typical edge environments – such as limited memory, bandwidth, energy supply and computational capacity – while ensuring low-latency performance for next-generation applications.29

CASE STUDY SNIPPET
edge-based inferencing delivers life-saving speed
Edge-based inferencing is redefining what’s possible in high-stakes environments, where every millisecond can mean the difference between success and failure. In January 2025, University of Michigan researchers showcased this potential when a paralysed participant piloted a virtual drone using a brain-computer interface (BCI), decoding finger-intention signals with millisecond precision.28 Now, imagine a first responder commanding a swarm of drones to locate survivors in a collapsed building and deliver critical medical supplies. Neural signals are captured, interpreted by an AI model and translated into swarm commands – all at the network edge. Any delay could cause drones to misalign or collide with hazards, jeopardising lives and mission success. By combining multi-access edge computing with 5G/6G network slicing, inference happens locally, cutting round-trip latency to under five milliseconds. As a result, precise, synchronised drone behaviour and real-time operator feedback is realised, even in chaotic and unpredictable environments.

6. capillarity

Enterprises are under huge pressure to deliver instant insights and seamless digital experiences across geographies to remain competitive in the Inference Age. AI is rapidly shifting from centralised models to federated architectures, where data and workloads are distributed across multiple regions to meet performance, compliance and resilience demands. This evolution makes network reach and geographic diversity a strategic necessity for AI-driven enterprises – and we must deliver by ensuring capillarity of our networks.

Capillarity is delivered through the extensive reach of global networks and enables the deployment of AI workloads across various environments including multi-cloud platforms, data centres and private sites. Networks with enhanced capillary-like coverage are able to position data and applications closer to users and ensure that model training and inference workloads are routed optimally based on proximity, traffic patterns and resource availability. These capabilities enhance network performance which translates into faster innovation cycles and better customer experiences for end-users.
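
The placement logic that capillarity enables can be sketched very simply. In the illustrative example below, with invented sites and figures, a workload is placed on the nearest edge location that satisfies both its residency constraint and its capacity requirement.

```python
# Minimal sketch of capillarity-aware workload placement: pick the edge site
# that satisfies the data-residency constraint and has spare capacity, then
# prefer the lowest-latency option. Site data below is purely illustrative.

EDGE_SITES = [
    {"name": "frankfurt-edge", "region": "EU", "rtt_ms": 4,  "free_gpus": 2},
    {"name": "london-edge",    "region": "EU", "rtt_ms": 9,  "free_gpus": 8},
    {"name": "virginia-edge",  "region": "US", "rtt_ms": 78, "free_gpus": 16},
]

def place_workload(required_region: str, gpus_needed: int):
    """Return the closest site that meets residency and capacity requirements."""
    candidates = [
        site for site in EDGE_SITES
        if site["region"] == required_region and site["free_gpus"] >= gpus_needed
    ]
    return min(candidates, key=lambda site: site["rtt_ms"], default=None)

# An EU-bound inference job needing 4 GPUs lands on london-edge:
# frankfurt is closer but lacks capacity, virginia violates residency.
print(place_workload("EU", 4))
```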

To capture performance demands and avoid being sidelined by hyperscalers and cloud providers that already offer distributed architectures, we must own the edge – expanding global network coverage and deploying infrastructure that enables localised, low-latency processing for the Inference Age. With 250 cloud on-ramps, 31,000 buildings, 1,100 data centres and SaaS peering points spanning multiple continents, Colt’s global network has the reach required for AI workloads to be delivered locally and seamlessly.

CASE STUDY SNIPPET
Walmart's ambient IoT delivers supply chain intelligence
Walmart and technology company Wiliot have leveraged capillarity – distributing intelligence across every node of a nationwide network – to transform the supermarket giant’s supply chain at scale. By embedding millions of ambient-powered IoT Pixels into pallets across its US operations, Walmart is creating a real-time data fabric that feeds directly into AI systems for automated decision making. Battery-free sensors harvest energy from ambient sources and continuously stream data on location, temperature, humidity and dwell time. Walmart’s AI models ingest this pervasive, item-level data to predict spoilage risk, optimise cold-chain compliance and trigger proactive replenishment – all without human intervention. Capillarity will ensure that real-time intelligence is woven into 90 million pallets that are spread across 4,600 stores and more than 40 distribution centres.

7. responsibility

Responsible AI is becoming a driver of business value, enabling innovation and differentiated customer experiences which spur growth in the Inference Age. Nearly 60% of executives say responsible AI boosts ROI and efficiency, and 55% report improvement in customer experience and innovation, according to a 2025 PwC survey.30

Responsibility in AI refers to the practices and processes that ensure AI is designed and deployed responsibly, builds trust and aligns with business goals. It demands fairness, transparency and accountability, alongside a commitment to security, sustainability, and core values to ensure the technology delivers a net benefit to people and the planet. As the enablers of AI, we have a significant role to play in ensuring AI and its supporting technology are designed and deployed responsibly. Our industry has an opportunity to claim responsible AI leadership and set the standard for customers, partners, our supply chain and other industries as we integrate AI.

"Our industry has an opportunity to claim responsible AI leadership and set the standard for customers, partners, our supply chain and other industries as we integrate AI."

Responsible AI leadership hinges on a people-first strategy. The interests of individuals must guide the ethical principles and governance we choose to embed across our organisations’ processes, technology and culture. Prioritising people at every level, and considering all possible impacts before development begins, will ensure AI benefits everyone. Without responsible AI principles and a people-first approach, we risk perpetuating inequality and the harmful biases present in AI's foundational datasets, which can slow progress towards a more productive, sustainable and equitable future. In its 2025 AI Inclusion report,31 Colt outlines five recommendations for embedding AI in a fair and inclusive way:

  1. Put people first and be clear
  2. Involve different people from the start
  3. Help employees learn and feel ready
  4. Use data fairly and watch for bias
  5. Plan for risks and be responsible.

CASE STUDY SNIPPET
Denmark's ai-powered welfare system risks social exclusion
The Danish government used AI-driven fraud detection algorithms to guide social security benefit distribution through Udbetaling Danmark (UDK). In 2024, Amnesty International found that these systems violated recipients’ rights to privacy, equality and social security, creating obstacles for marginalised groups such as women, older people and people with disabilities. UDK combined extensive personal data – including residency, citizenship, family circumstances, housing, employment, taxes, health and education – to profile residents and flag those deemed at high risk of benefits fraud. This approach likely meets the EU AI Act’s definition of prohibited “social scoring,” where individuals are evaluated in ways that can lead to harmful or unequal treatment. Weak oversight meant the algorithms risked reinforcing bias and excluding marginalised people from welfare access. To ensure equitable use of AI in Denmark and beyond, future AI systems must follow responsible, people‑centred design principles.

Environmental sustainability is a core component of responsible AI leadership. It is our responsibility to manage proliferating power consumption as next-generation model training, inference and intelligent agent deployment surge. Alarmingly, training OpenAI’s GPT-4 consumed 50 times more energy than its predecessor GPT-3, equating to 0.02% of the electricity California generates in one year.33
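
As a rough order-of-magnitude check on that proportion, the sketch below uses a commonly cited estimate of roughly 1,300 MWh for GPT-3 training and annual Californian electricity generation of roughly 280 TWh; both figures are approximate assumptions used only to show the scale of the arithmetic.

```python
# Rough sanity check of the proportion quoted above, using approximate figures:
# a commonly cited estimate of ~1,300 MWh for GPT-3 training and Californian
# electricity generation of roughly 280 TWh per year. Both are assumptions
# intended only to illustrate the order of magnitude.

gpt3_training_mwh = 1_300                     # approximate published estimate
gpt4_training_mwh = 50 * gpt3_training_mwh    # "50 times more energy"
california_annual_mwh = 280_000_000           # ~280 TWh expressed in MWh

share = gpt4_training_mwh / california_annual_mwh
print(f"{share:.4%}")                         # ~0.02%, consistent with the figure above
```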

As soon as 2028, AI models will account for 50% of IT greenhouse gas (GHG) emissions, up from approximately 10% in 2025. Sustainability of our infrastructure must be a priority as customers leverage even more power-hungry applications while we simultaneously strive towards ambitious Scope 1, 2 and 3 emissions reduction targets.

AI itself can be a powerful enabler of environmental sustainability, helping optimise network operations to reduce energy consumption and enhance resource efficiency. A compelling example is Colt’s efficient building management which saw more than a 26% reduction in operational energy used to regulate indoor climate conditions (see below). Another example is DeepMind’s machine learning system, which cut Google’s energy use for data centre cooling by 40%.34

Beyond networks, our industry can also leverage AI and data analytics to drive sustainability across the entire value chain, streamlining logistics to lower energy demand and support circular economy practices that minimise waste and maximise resource recovery.

CASE STUDY SNIPPET
Energy-efficient building management at Colt House
Colt uses AI to manage buildings more efficiently, cutting energy waste and improving indoor climate. Its Smart Building project aims for a fully autonomous, AI‑driven system that could reduce energy use and CO₂ emissions by up to 30%, while halving manual management work. Working with Nuuka, Colt created a platform combining its connectivity tech with Nuuka’s AI. Indoor Air Quality sensors track temperature, humidity, pressure and CO₂, while an AI/ML model sets real‑time heating, cooling and ventilation levels to maximise air quality with minimal energy use. A pilot in Colt’s London headquarters saved 8,400 kWh in six weeks, projecting 72,800 kWh annually—a 26.6% electricity reduction—while improving temperature, pressure and CO₂ stability. The next phase will extend AI control to heating and cooling, aiming to surpass the 30% energy‑ and emissions‑reduction target.

Crucially, responsible AI leadership also involves embracing AI for Good, which calls us to advance use cases that deliver positive net benefits for people and the planet. Examples include AI-driven climate modelling to accelerate decarbonisation, predictive healthcare systems that improve patient outcomes, accessibility tools that empower individuals with disabilities, fraud detection to protect consumers, and energy efficiency algorithms that reduce environmental impact.

Colt’s Smart Building Project (see above) leveraged AI to eliminate unnecessary operational energy waste at its head office in London, generating building electricity savings of over 26%. Our industry has also supported the use of AI to detect active wildfires,35 prevent blood poisoning,36 regenerate rainforests37 and restore coral reefs.38

To uphold AI for Good, we must continue to actively prioritise applications that create measurable social and environmental value – and incentivise partners, customers and suppliers to follow our example.

"AI for good vs good ai": omdia spotlights responsible ai leadership at colt

Colt is claiming responsible AI leadership with a focus on ‘AI for Good’. Omdia’s Senior Principal Analyst Roz Roseboro highlights Colt’s industry-leading approach to AI responsibility in an independent case study published in November 2025, ‘Using AI to Address Strategic Opportunities at Colt’.39 It is an approach our industry can look to emulate and build on to ensure the AI we enable delivers a net benefit to people and the planet. An excerpt of the case study reads:

“Colt views responsible AI through two critical lenses. ‘AI for Good’ encompasses use cases that generate positive social and environmental impact, such as building AI systems and infrastructure that help people in remote regions access medical treatments or developing solutions that address climate change challenges. ‘Good AI’ focuses on embedding best practices throughout AI lifecycles to manage risks that technology presents for social and environmental sustainability. This includes monitoring AI outputs for bias, selecting energy-efficient AI models and hardware, and implementing comprehensive oversight mechanisms throughout development and deployment phases."

Two key ‘AI for Good’ initiative areas at the company are as follows:

  • AI serves as a tool for environmental sustainability. Projects such as energy efficiency via AI-driven smart buildings and network resource optimization through AI wide area network (WAN) technologies use AI to reduce energy consumption in digital infrastructures.
  • AI enables scalable and safe digital infrastructure advancement. Particularly supporting functionality and safety requirements for future infrastructure needs, including increased resource demands and quantum technology integration.

8. simplicity

As we pursue a broader vision which centres customer experience, our mission is to make the provisioning and deployment of AI infrastructure effortless. Enterprises expect intuitive, intelligent networks that they can tap into seamlessly and scale and adapt to their requirements without complexity.

“Simplicity is about delivering everything – security, sovereignty, capacity, latency, scalability, capillarity and responsibility – at the touch of a button.”

Platformisation and the growth of as-a-service models demonstrate our industry’s commitment to simplifying enterprises’ network experience. By shifting from traditional infrastructure to flexible, consumption-based services and platforms – such as Network as a Service (NaaS) – we enable enterprises to effortlessly scale, innovate and integrate digital capabilities. Intelligent platforms, such as Colt’s award-winning On Demand NaaS platform, allow enterprise customers to buy, monitor and manage network resources in real-time, enabling businesses to scale dynamically without complex procurement processes.

Colt research found that 58% of the 1,500 CIOs it questioned said they were increasing their use of NaaS features due to growing AI demands.40

In 2026 and beyond, we will drive the next generation of NaaS to be intelligent, automated and outcome-focused – designed to deliver real-time performance, adaptability and autonomy for AI-driven enterprises.

The extraordinary, every day.

By tackling everyday complexities and frustrations, we remove stress and make way for the extraordinary.