How the cloud took over the cables



When the cloud emerged as a concept, it referred to computing services delivered remotely. Bandwidth was cheap, and it made sense to outsource and centralise computing. Physical location, and the physical network, seemed to have become irrelevant. That has turned out not to be true.

Developments like streaming content, the Internet of Things and (soon) driverless cars are pushing the development of “edge computing”, which links end users and devices to the Internet, and will require masses of cellular network bandwidth.

Meanwhile, those cloud services have burgeoned, and webscale giants like Microsoft, Google and Facebook run systems so large they need their own network infrastructure.

The number of users and the amount of data consumed are growing so fast that more than 100,000 km of new cable is laid each year. The cloud giants want to make sure they have their share of that, so they are footing the bill.

Early in 2018, two international cable players, Aqua Comms and TE SubCom, teamed up to propose Havfrue, a cable going across the Atlantic from New Jersey to Ireland and Denmark. The funding comes from a number of companies, including Facebook and Google.

In 2016, Google and Facebook both invested in the Pacific Light Cable Network (PLCN), a cable stretching across the Pacific from Hong Kong to Los Angeles. Built by TE SubCom, it should be operational this summer.

Meanwhile, the Marea cable was completed in 2017, from Bilbao in Spain to Virginia Beach, USA. Microsoft and Facebook each own 25 percent of the cable, the rest belonging to Telxius, a subsidiary of Spain’s Telefonica.

When the cloud players started out, they simply rented capacity on existing cables. Why the change? Because their sheer scale means the giants now have to build the infrastructure themselves.

A new giant data center often needs more electrical power than the local grid can readily provide, so the giants often pay for new generation capacity - usually renewable. For the same reason, webscale players have to foot the bill for new cables. TeleGeography reckons that the world will have spent $9.2bn on international cable projects between 2016 and 2018. These companies are among the few that can pay that kind of money.

It’s also a simple necessity. Amazon, Google and the rest have done a great job of delivering on the promises of cloud computing, but our expectations have risen. We expect near-instant responses from services. We don’t care if they are in Sweden, the UK or Northern Virginia.

The only way the big players can cater to our demands is to own ever more long-distance capacity, so they can be sure the packets carrying their services are handled more quickly than their rivals’.
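
To see why distance still matters, a rough back-of-the-envelope calculation helps: light in optical fibre travels at roughly two-thirds of its vacuum speed, so even a dedicated transoceanic path adds tens of milliseconds of round-trip delay before any switching or congestion is counted. The short Python sketch below illustrates the point; the route lengths are assumed approximations for illustration, not figures from the cable operators.

```python
# Back-of-envelope propagation delay over a subsea fibre route.
# Route lengths below are rough assumptions for illustration only.

SPEED_OF_LIGHT_KM_S = 299_792        # speed of light in vacuum, km/s
FIBRE_REFRACTIVE_INDEX = 1.47        # typical value for silica fibre
fibre_speed_km_s = SPEED_OF_LIGHT_KM_S / FIBRE_REFRACTIVE_INDEX  # ~204,000 km/s

routes_km = {
    "Marea (Bilbao - Virginia Beach)": 6_600,    # assumed approximate length
    "PLCN (Hong Kong - Los Angeles)": 12_800,    # assumed approximate length
}

for name, length_km in routes_km.items():
    one_way_ms = length_km / fibre_speed_km_s * 1_000
    print(f"{name}: ~{one_way_ms:.0f} ms one way, ~{2 * one_way_ms:.0f} ms round trip")
```

Even under these idealised assumptions, a transpacific round trip costs on the order of 100 ms, which is why owning (rather than sharing) the straightest, least congested path is worth billions to a latency-sensitive cloud provider.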


Written by Peter Judge (Guest)


Peter Judge is the Global Editor at Datacenter Dynamics. His main interests are networking, security, mobility and cloud. You can follow Peter at: @judgecorp

