Welcome to KubeCon North America! It seems only yesterday that we were together in San Diego. Though we’re farther apart physically this year, the Kubernetes community continues to go strong. Here in Azure, we’re thrilled to have seen how both our open-source efforts and the Azure Kubernetes Service have enabled people and companies like Finxact, Mars Petcare, and Mercedes-Benz to scale and transform in response to the COVID-19 pandemic.
In today’s environment, customers are looking to Azure and Kubernetes to enable application platforms and patterns that make it faster to build new applications and easier to iterate on the applications they’ve already built. Kubernetes on Azure is a reliable and secure foundation for this cloud-native application development. At the same time, the pressures of the current environment mean that it is also critical to be as efficient as possible, and we are excited to see the ways that the Azure Kubernetes Service has empowered people to improve their operational and resource efficiency. Over the last few months, our Microsoft teams have built amazing technology that enables our customers to be more efficient, and I am excited to share some of that with you today.
Empowering people with HPC and AI
Customers around the world rely on Microsoft Azure to drive innovations related to our environment, public health, energy sustainability, weather modeling, economic growth, and more. Finding solutions to these important challenges requires huge amounts of focused computing power. Customers are increasingly finding that the best way to access such high-performance computing (HPC) is through the agility, scale, security, and leading-edge performance of Azure’s purpose-built HPC and AI cloud services.
Azure’s market-leading vision for HPC and AI is based on a core of genuine and recognized HPC expertise, using proven HPC technology and design principles, enhanced with the best features of the cloud. The result is a capability that delivers performance, scale, and value unlike any other cloud. This means applications scaling 12 times higher than other public clouds. It means higher application performance per node. It means powering AI workloads for one customer with a supercomputer fit to be among the top five in the world. And it means delivering massive compute power into the hands of medical researchers over a weekend to prove out life-saving innovations in the fight against COVID-19.
Big moments for Azure HPC and AI Supercomputing in 2020
OpenAI
Source: http://azure.microsoft.com/blog/connecting-urban-environments-with-iot-and-digital-twins/
As urbanization continues to take hold and cities face challenges to become more sustainable and livable, urban planning and operations strategies must adapt. The current pandemic has changed the way we live, accelerating cities’ future vision as a necessity of the present and what it means to live in a connected and resilient urban environment. Now more than ever, public and private organizations are coming together to push transformative solutions and change the way we plan and operate infrastructure and urban environments for all.
Microsoft, along with its partner ecosystem, continues to be deeply engaged with cities and communities around the world by providing capabilities and solutions that span the intelligent cloud and edge, advancing AI driven by ethical principles, and continuing its commitment to trust and security. Earlier this year, IDC MarketScape recognized Microsoft as the leading worldwide IoT application platform for Smart Cities, highlighting its secure, mature, and capable Azure IoT, AI, and Digital Twins services. In addition to IDC, Guidehouse Insights also recognized Microsoft as the leader in its leaderboard for Smart Cities platform suppliers, highlighting Azure’s ability to support a broad portfolio of smart city solutions using common platform technologies. Last year we also shared
“Microsoft’s global network connects over 60 Azure regions, over 220 Azure data centers, over 170 edge sites, and spans the globe with more than 165,000 miles of terrestrial and subsea fiber. The global network connects to the rest of the internet via peering at our strategically placed edge points of presence (PoPs) around the world. Every day, millions of people around the globe access Microsoft Azure, Office 365, Dynamics 365, Xbox, Bing and many other Microsoft cloud services. This translates to trillions of requests per day and terabytes of data transferred each second on our global network. It goes without saying that the reliability of this global network is critical, so I’ve asked Principal Program Manager Mahesh Nayak and Principal Software Engineer Umesh Krishnaswamy to write this two-part post in our Advancing Reliability series. They explain how we’ve approached our network design, and how we’re constantly working to improve both reliability and performance.”—Mark Russinovich, CTO, Azure
In part one of this networking post, we presented the key design principles of our global network and explored how we emulate changes, how we run zero-touch operations and change automation, and how we plan capacity.
A year ago during SC19, Azure unveiled the HBv2 series of virtual machines (VMs) for high-performance computing (HPC). At the time, we characterized this uniquely powerful and scalable VM as “rivaling the most advanced supercomputers on the planet.” A bold claim for a cloud provider, to be sure. Since then, we’ve endeavored to deliver on this promise. What we’ve been delighted to find is just how much our commitment to scalable HPC as a driver of innovation and creativity has resonated with customers and partners. Better still, they have inspired us to set the bar even higher. Most importantly, in this uniquely challenging year, we have been privileged to support our customers’ and partners’ most mission-critical and impactful work.
As Supercomputing 2020 (SC20) kicks off, we’d like to share some significant updates about Azure’s continued delivery of new supercomputing capabilities on our Azure H-series products. We’d also like to provide a sneak peek at a forthcoming addition to our Azure HPC portfolio.
86,400 cores for critical disease research
Azure is excited to announce that it has achieved a new record for Message Passing Interface (MPI)-based HPC scaling on the public cloud, running Nanoscale Molecular Dynamics (NAMD) across 86,400 central processing unit (CPU) cores.
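NAMD manages its own parallel decomposition, but for readers new to MPI, the following sketch shows the basic pattern behind this kind of scaling: each rank computes its own slice of the work, and a collective operation combines the results. It uses the open-source mpi4py library and a hypothetical numerical workload (not NAMD), so treat it as an illustration of MPI scaling rather than anything from the record run.

```python
# Minimal sketch of MPI-style work decomposition with mpi4py (illustrative only;
# NAMD performs its own, far more sophisticated decomposition internally).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's index, 0..size-1
size = comm.Get_size()   # total number of MPI ranks, e.g. one per CPU core

# Hypothetical workload: approximate the integral of x^2 over [0, 1]
# by splitting the sample points across all ranks.
n_samples = 1_000_000
local = sum((i / n_samples) ** 2 for i in range(rank, n_samples, size)) / n_samples

# Combine the partial sums on rank 0.
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print(f"Approximate integral across {size} ranks: {total:.6f}")
```

Launched with, for example, `mpiexec -n 8 python integrate.py`, the same script runs unchanged whether it is given eight ranks on a laptop or tens of thousands of cores on an HPC cluster.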
Managing IT costs is critical during this time of economic uncertainty. The global pandemic is challenging organizations around the world to reinvent business strategies and make operations more effective and productive. You’ll need to find ways, faster than ever, to increase efficiency and optimize costs across your IT organization.
When it comes to cloud cost optimization, organizations typically divide responsibilities between central IT departments and distributed workload teams. Central IT departments manage overall cloud strategy and governance, setting and auditing corporate policies for cost management. In compliance with central IT policy, workload teams across the organization assume end-to-end ownership for cloud applications they’ve built, including cost management.
In this new normal, if you’re a workload owner, it’s doubly challenging for you and your teams as you take on new cost responsibilities daily, all while continuously adapting to working in a cloud environment. We created the Microsoft Azure Well-Architected Framework to help you design, build, deploy, and manage successful cloud workloads across five key pillars: security, reliability, performance efficiency, operational excellence, and cost optimization. While we’re focusing on cost optimization here, we’ll soon be addressing best practices on how to balance the priorities of your organization against the other four pillars.
The Linux and open-source landscapes are changing rapidly. With so many companies embracing remote work and operations this year, we’re seeing more organizations running large-scale, mission-critical Linux and open-source workloads than ever before.
IT teams need technical resources that can keep up—now. Even organizations that had already begun moving to the cloud are now finding that they need to accelerate and expand their cloud adoption. They need secure, scalable, and reliable cloud solutions that they can trust, but they also need to keep their costs down. Microsoft Azure checks all those boxes, and you can learn more at the Open Azure Day digital event.
We know that today’s Linux and open-source professionals want customizable cloud solutions they can use with the tools they already love. We know they need meaningful support from software vendors to streamline their cloud adoption while keeping their operations running smoothly as they make the move. And we know they don’t want to be locked into a single vendor.
Working in close collaboration with Linux partners, Azure provides this much-needed choice and flexibility in hybrid cloud deployments. In fact, Linux is the fastest-growing platform on Azure and accounts for
In May, we announced a groundbreaking partnership with Redis Labs to bring their Redis Enterprise software to Azure as a part of Azure Cache for Redis. We were humbled by the level of excitement and interest we received. We are announcing that you can now use Redis to tackle new challenges while making your caches larger and more resilient than ever before.
There has never been a more critical time for a technology like Redis. With billions of people working from home globally, web-based applications must be more responsive than ever, and enterprises both large and small need to be able to scale rapidly to meet unexpected demand. Solutions like Redis empower developers to optimize their data architectures and solve these problems. We’ve seen tremendous adoption of Azure Cache for Redis, our managed solution built on Open Source Redis, as Azure customers have used Redis as a distributed cache, session store, and message broker. We’re excited to incorporate Redis Enterprise technology and make this solution even more powerful and available while also unlocking important new use cases for developers like search, deduplication, and time series analysis.
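To make the cache and message-broker patterns above concrete, here is a minimal sketch using the open-source redis-py client. The host name, access key, key, and channel names are placeholders; you would substitute the connection details of your own Azure Cache for Redis instance.

```python
# Sketch: Azure Cache for Redis as a distributed cache and a message broker,
# via the open-source redis-py client. Host name and access key are placeholders.
import redis

r = redis.Redis(
    host="contoso.redis.cache.windows.net",  # hypothetical cache host name
    port=6380,                               # Azure Cache for Redis uses TLS on 6380
    password="<access-key>",
    ssl=True,
)

# Distributed cache: store a computed value with a 60-second expiry.
r.set("profile:1001", '{"name": "Ada"}', ex=60)
print(r.get("profile:1001"))

# Message broker: publish an event; any subscribers on the "orders" channel receive it.
r.publish("orders", "order-created:42")
```

The same connection can back a session store as well, typically by keeping one key per session with a sliding expiry.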
What is Redis Enterprise on Azure?
Microsoft and Redis Labs have partnered closely to
This post was co-authored by Adam Stuart, Technical Specialist, Azure Networking
Custom DNS, DNS proxy, and FQDN filtering in network rules (for non-HTTP/S and non-MSSQL protocols) in Azure Firewall are now generally available. In this blog, we also share an example use case of DNS proxy with Private Link.
Azure Firewall is a cloud-native firewall as a service (FWaaS) offering that allows you to centrally govern and log all your traffic flows using a DevOps approach. The service supports application-level, NAT, and network-level filtering, and it is integrated with the Microsoft Threat Intelligence feed to filter known malicious IP addresses and domains. Azure Firewall is highly available with built-in autoscaling.
Custom DNS support is now generally available
Since its launch in September 2018, Azure Firewall has been hardcoded to use Azure DNS to ensure the service can reliably resolve its outbound dependencies. Custom DNS allows you to configure Azure Firewall to use your own DNS server, while ensuring the firewall outbound dependencies are still resolved with Azure DNS. You may configure a single DNS server or multiple servers in Azure Firewall and Firewall Policy DNS settings.
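To illustrate the idea that FQDN filtering in network rules builds on, the conceptual sketch below resolves an allowed FQDN to its current IP addresses and checks a flow’s destination against that set. This is not Azure Firewall’s implementation or API; the FQDN, destination IP, and function names are made up for the example. In practice, enabling DNS proxy ensures that clients and the firewall resolve names to the same addresses, which is what makes this style of rule dependable.

```python
# Conceptual sketch of FQDN-based filtering for non-HTTP/S traffic (illustration
# only, not Azure Firewall's implementation): resolve an allowed FQDN to its
# current addresses, then check a flow's destination IP against that set.
import socket

ALLOWED_FQDNS = ["time.windows.com"]  # example allow-list entry, e.g. NTP over UDP 123


def resolve(fqdn):
    """Return the set of IP addresses the FQDN currently resolves to."""
    return {info[4][0] for info in socket.getaddrinfo(fqdn, None)}


def is_destination_allowed(dest_ip):
    """Allow the flow only if dest_ip matches an address of an allowed FQDN."""
    return any(dest_ip in resolve(fqdn) for fqdn in ALLOWED_FQDNS)


print(is_destination_allowed("20.101.57.9"))  # hypothetical destination IP
```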
Azure Firewall can also resolve names using Azure Private DNS. The Virtual Network within