Azure capacity improvements during COVID-19

Since the COVID-19 outbreak there has been increased use of both Office 365 services (particularly Teams) and, of course, customer use of Azure.



This has led to certain regions having capacity issues. I have personally experienced this in both European and US regions over the last few months.

Today, officials said Microsoft datacenter employees have been working in round-the-clock shifts to install new servers (while staying at least six feet apart). Microsoft added new servers first to the hardest-hit regions and installed new hardware racks 24 hours a day.

They also said Microsoft doubled capacity on one of its own undersea cables which carry data across the Atlantic, and “negotiated with owners of another to open up additional capacity.” Engineers tripled deployed capacity on the America Europe Connect cable in two weeks, they added.

At the same time, product teams looked across all of Microsoft’s services running on Azure to free up more capacity for high-demand services like Teams, Office, Windows Virtual Desktop, Azure Active Directory’s Application Proxy, and Xbox, officials said. And in some cases, engineers rewrote code to improve efficiency, as they did in the case of video-stream processing, which officials said they made 10 times more efficient over a weekend-long push.

Teams was made to spread its reserved capacity across additional datacenter regions within a week, rather than the multiple-month-long process that such a strategy would entail, officials said. In addition, Microsoft’s Azure Wide Area Network team added 110 terabits of capacity in two months to the fiberoptic network that carries Microsoft data, along with 12 new edge-computing sites to connect the network to infrastructure owned by local Internet providers to help reduce network congestion.

Microsoft also moved its own internal Azure workloads to avoid demand peaks worldwide and to divert traffic from regions experiencing high demand, officials said. On the consumer side, Microsoft also moved gaming workloads out of high-demand data centers in the UK and Asia and worked to decrease bandwidth usage during peak times of the day.

Microsoft added new routing strategies to leverage idle capacity. Calling and meeting traffic was routed across multiple regions to handle surges, and time-of-day load balancing helped Microsoft avoid wide-area network throttling, officials said. Using Azure Front Door, Microsoft was able to route traffic at a country level. And it made a number of cache and storage improvements, which ultimately helped achieve a 65% reduction in payload size, a 40% reduction in deserialization time, and a 20% reduction in serialization time.
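To make the country-level routing idea concrete, here is a minimal sketch in the spirit of what Azure Front Door enables: map each request to a preferred backend region by the caller's country, and divert to a less-loaded region when the preferred one is near capacity. The country-to-region mapping, region names, load figures, and threshold below are all illustrative assumptions, not Microsoft's actual configuration or the Front Door API.

```python
# Preferred backend region per country (illustrative mapping).
COUNTRY_TO_REGION = {
    "GB": "uksouth",
    "DE": "westeurope",
    "US": "eastus",
}

# Current load per region as a fraction of capacity (illustrative numbers).
REGION_LOAD = {
    "uksouth": 0.95,      # over the threshold -> traffic gets diverted
    "westeurope": 0.60,
    "eastus": 0.40,
}

FALLBACK_ORDER = ["eastus", "westeurope", "uksouth"]
LOAD_THRESHOLD = 0.90


def route(country: str) -> str:
    """Return the backend region for a request from `country`.

    Prefer the country's mapped region; if that region is above the
    load threshold, divert to the least-loaded fallback region.
    """
    preferred = COUNTRY_TO_REGION.get(country, FALLBACK_ORDER[0])
    if REGION_LOAD[preferred] < LOAD_THRESHOLD:
        return preferred
    return min(FALLBACK_ORDER, key=lambda r: REGION_LOAD[r])


print(route("DE"))  # westeurope - under the threshold, stays local
print(route("GB"))  # uksouth is over the threshold, so traffic is diverted
```

In a real deployment the equivalent decisions would be made by Front Door's edge nodes using health probes and routing rules rather than an in-process lookup, but the shape of the policy (prefer local, divert under load) is the same.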

Quite an amazing operation!

Gill Gross

About the Author

Gill Gross - Azure Lead | Microsoft Azure P-TSP at U-BTech Solutions

Top Specialist in Azure, Cloud Tech, Microsoft Solutions and more.
