Earlier this year, we made a commitment to shift to 100 percent renewable energy supply in our buildings and datacenters by 2025. On this journey, we recognize that how we track our progress is just as important as how we get there.
Today, we are announcing that Microsoft will be the first hyperscale cloud provider to track hourly energy consumption and renewable energy matching in a commercial product using the Vattenfall 24/7 Matching solution for our new datacenter regions in Sweden, which will be available in 2021.
Vattenfall and Microsoft are also announcing that the 24/7 hourly matching solution—the first commercial product of its kind—is now generally available. Vattenfall is a leading European energy company with a strong commitment to make fossil-free living possible within one generation. The solution is built using Microsoft’s Azure services, including Azure IoT Central and Microsoft Power BI.
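To make the idea of hourly matching concrete, here is a toy sketch of what a 24/7 matching calculation computes: for each hour, consumption counts as matched only up to the renewable energy supplied in that same hour. The numbers and function name are illustrative assumptions, not the actual Vattenfall solution, which meters real consumption via Azure IoT Central.

```python
# Toy illustration of hourly (24/7) matching. In each hour, consumption
# is matched only up to the renewable production of that same hour;
# surplus renewables in one hour cannot cover a deficit in another.
def hourly_match_pct(consumption, renewable):
    matched = sum(min(c, r) for c, r in zip(consumption, renewable))
    return 100 * matched / sum(consumption)

consumption = [10, 12, 12, 10]   # kWh consumed per hour (made-up data)
renewable   = [10, 9, 12, 14]    # kWh of renewable supply per hour
assert round(hourly_match_pct(consumption, renewable)) == 93
```

This is what distinguishes 24/7 matching from annual matching, where a yearly renewable total equal to yearly consumption would count as 100 percent even if individual hours were unmatched.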
Today’s announcement builds on last year’s partnership announcement with Vattenfall when the 24/7 Matching solution was first introduced. Since then, the solution has been in pilot in Vattenfall headquarters in Solna and the new Microsoft headquarters in Stockholm, which has seen 94 percent of the total office building energy consumption matched with Swedish wind and 6
Source: http://azure.microsoft.com/blog/achieving-100-percent-renewable-energy-with-247-monitoring-in-microsoft-sweden/
A need for hybrid and multicloud strategies for financial services
The financial services industry is a dynamic space that constantly tests and pushes novel uses of information technology. Many of its members must balance immense demands, from the pressure to unlock continuous innovation in a landscape of cloud-native entrants, to responding to unexpected surges in demand and extending services to new regions, all while managing risk and combatting financial crime.
At the same time, financial regulations are constantly evolving. In the face of the current pandemic, we have seen our customers accelerate their adoption of new technologies, including public cloud services, to keep up with evolving regulations and industry demands. Hand in hand with growing cloud adoption, we’ve also seen growing regulatory concern over concentration risk (check out our recent whitepaper on this), which has resulted in new recommendations for customers to increase their overall operational resiliency, address vendor lock-in risks, and maintain effective exit plans.
Further complicating matters, many financial services firms oversee portfolios of services that include legacy apps that have been in use for many years. These apps often cannot support the implementation of newer capabilities that accommodate mobile application support, business intelligence, and
It is imperative to monitor the health of your virtual machines. But how much time do you spend reviewing each metric and alert to monitor the health of a virtual machine?
We are announcing the preview of the Azure Monitor for virtual machines guest health feature, which monitors the health of your virtual machines and fires an alert when any monitored parameter is outside the acceptable range. This feature provides:
- A simple experience to monitor the overall health of your virtual machine.
- Out-of-the-box health monitors based on key VM metrics to track the health of your virtual machine.
- Out-of-the-box alerts to notify you if the virtual machine is unhealthy.
The virtual machine guest health feature has a parent-child hierarchical model. It monitors the health state of CPU, disks, and memory for a virtual machine and notifies the customer about changes. The three states (healthy, warning, and critical) are defined by the thresholds the customer sets for each child monitor. Each monitor measures the health of a particular component, and the overall health of the virtual machine is determined by the health of its individual monitors. The top-level monitor on the VM groups the health state of all the
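The parent-child roll-up described above can be sketched as follows. The monitor names, thresholds, and sample values are illustrative assumptions, not the actual Azure Monitor configuration schema.

```python
# Hedged sketch of the parent-child health model: each child monitor
# compares its metric against customer-set thresholds, and the top-level
# VM monitor rolls up to the worst child state.
HEALTH_ORDER = {"healthy": 0, "warning": 1, "critical": 2}

def child_state(value, warning_threshold, critical_threshold):
    # One child monitor (e.g. CPU utilization) evaluated against thresholds.
    if value >= critical_threshold:
        return "critical"
    if value >= warning_threshold:
        return "warning"
    return "healthy"

def vm_health(monitors):
    # monitors: {name: (current_value, warning_threshold, critical_threshold)}
    states = [child_state(v, w, c) for v, w, c in monitors.values()]
    return max(states, key=HEALTH_ORDER.get)

# Illustrative values: memory crosses its warning threshold, so the VM
# as a whole reports "warning".
vm = {"cpu_pct": (45, 80, 95), "memory_pct": (88, 85, 95), "disk_pct": (40, 80, 90)}
assert vm_health(vm) == "warning"
```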
If you have opted for Azure Database for PostgreSQL server, you are probably looking for a fully managed, intelligent, and flexible cloud database service that enables you to focus on building applications while offloading critical management tasks such as availability, scalability, and data protection to the service provider. However, some of these tasks—backup being a case in point—may have additional requirements pertaining to your organization’s compliance and business needs that call for a specialized, end-to-end solution.
Azure Backup and Azure Databases have come together to build an enterprise-scale backup solution for Azure Database for PostgreSQL that facilitates flexible and granular backups and restores while supporting retention for up to 10 years. It is an elastic-scale, zero-infrastructure solution that does not require you to deploy or manage backup infrastructure, agents, or storage accounts while providing a simple and consistent experience to centrally manage and monitor the backups.
Enhanced capabilities from Azure Backup and Azure Databases
Long-term retention in standard or archive tier
Retain backups for up to 10 years in the standard or archive tier, according to your compliance and audit needs. The built-in lifecycle management capability automatically prunes recovery points beyond the specified retention duration.
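The pruning behavior amounts to dropping recovery points that have aged past the retention window. Here is a minimal sketch of that logic; the function name and dates are illustrative, not the service's actual implementation.

```python
# Minimal sketch of lifecycle pruning: keep only recovery points that
# are still inside the configured retention window.
from datetime import date, timedelta

def prune(recovery_points, today, retention_days):
    cutoff = today - timedelta(days=retention_days)
    return [rp for rp in recovery_points if rp >= cutoff]

# Illustrative dates: the 2010 point has aged past a ~10-year window.
points = [date(2010, 11, 1), date(2019, 11, 1), date(2020, 11, 1)]
kept = prune(points, today=date(2020, 11, 12), retention_days=3650)
assert kept == [date(2019, 11, 1), date(2020, 11, 1)]
```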
Customer-controlled, granular backup and restore
Lower your deployment cost while improving client performance with Server Message Block (SMB) Multichannel on the premium tier.
Today, we are announcing the preview of Azure Files SMB Multichannel on the premium tier. SMB 3.0 introduced the SMB Multichannel technology in Windows Server 2012 and the Windows 8 client. This feature allows SMB 3.x clients to establish multiple network connections to SMB 3.x servers for greater performance, whether over multiple network adapters or over a single network adapter with Receive Side Scaling (RSS) enabled. With this preview release, Azure Files SMB clients can now take advantage of SMB Multichannel technology with premium file shares in the cloud.
SMB Multichannel establishes multiple connections over the optimal network paths, increasing performance through parallel processing. The gains come from bandwidth aggregation across multiple NICs, or from NIC support for Receive Side Scaling (RSS), which distributes IOs across multiple CPUs and enables dynamic load balancing.
Benefits of Azure Files SMB Multichannel include:
- Higher throughput: suitable for applications with large files and large IOs, such as media and entertainment (content creation and transcoding), genomics, and financial services risk analysis.
- Increased IOPS: particularly useful for small IO
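The core idea of spreading IOs across parallel channels and reassembling the results can be illustrated with a small sketch. This is not the SMB protocol itself; the data, chunk size, and channel count are made up for illustration.

```python
# Illustrative sketch of the multichannel idea: split a large read into
# chunks, issue the chunks in parallel across several "channels"
# (worker threads here), then reassemble them in offset order.
from concurrent.futures import ThreadPoolExecutor

def read_chunk(data, offset, size):
    # Stand-in for a single read issued on one channel.
    return data[offset:offset + size]

def multichannel_read(data, channels, chunk_size):
    offsets = range(0, len(data), chunk_size)
    with ThreadPoolExecutor(max_workers=channels) as pool:
        # map() preserves input order, so chunks reassemble correctly.
        chunks = pool.map(lambda o: read_chunk(data, o, chunk_size), offsets)
    return b"".join(chunks)

payload = bytes(range(256)) * 64   # 16 KiB of sample data
assert multichannel_read(payload, channels=4, chunk_size=4096) == payload
```

In the real feature, the parallelism happens at the network layer across TCP connections rather than in application threads, but the aggregation principle is the same.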
Coinciding with this week’s Kubecon and Open Azure Day virtual events, today we’re announcing the general availability of Azure Hybrid Benefit functionality for Linux customers, allowing you to bring both your on-premises Windows Server and SQL Server licenses, as well as Red Hat Enterprise Linux (RHEL) or SUSE Linux Enterprise Server (SLES) subscriptions to Azure.
During the preview period, over 1,500 Linux virtual machines have already been migrated to Azure using the new Azure Hybrid Benefit capabilities, helping to significantly reduce the costs of running enterprise Linux workloads in Azure.
While previous bring-your-own-subscription cloud migration options available to Red Hat and SUSE customers allowed them to use their pre-existing RHEL and SLES subscriptions in the cloud, Azure Hybrid Benefit improves upon this with several capabilities that are unique to Azure and make enterprise Linux cloud migration even easier than before:
Applies to all Red Hat Enterprise Linux and SUSE Linux Enterprise Server pay-as-you-go images available in the Azure Marketplace or Azure Portal, so you don’t need to provide your own image. Save time with seamless post-deployment conversions; there’s no need for production redeployment. You can simply convert the pay-as-you-go images you used during your proof-of-concept testing to bring-your-own-subscription
Welcome to KubeCon North America! It seems only yesterday that we were together in San Diego. Though we’re farther apart physically this year, the Kubernetes community continues to go strong. Here in Azure, we’re thrilled to have seen how both our open-source efforts as well as the Azure Kubernetes Service have enabled people and companies like Finxact, Mars Petcare, and Mercedes Benz, to scale and transform in response to the COVID-19 pandemic.
In today’s environment, customers are looking to Azure and Kubernetes to enable application platforms and patterns that make it faster to build new applications and easier to iterate on the applications they’ve already built. Kubernetes on Azure is a reliable and secure foundation for this cloud-native application development. At the same time, the pressures of the current environment mean that it is also critical to be as efficient as possible, and we are excited to see the ways that the Azure Kubernetes Service has empowered people to improve their operational and resource efficiency. Over the last few months, our Microsoft teams have built amazing technology that enables our customers to be more efficient, and I am excited to share some of that with you today.
Empowering people with
Customers around the world rely on Microsoft Azure to drive innovations related to our environment, public health, energy sustainability, weather modeling, economic growth, and more. Finding solutions to these important challenges requires huge amounts of focused computing power. Customers are increasingly finding that the best way to access such high-performance computing (HPC) is through the agility, scale, security, and leading-edge performance of Azure’s purpose-built HPC and AI cloud services.
Azure’s market-leading vision for HPC and AI is based on a core of genuine and recognized HPC expertise, using proven HPC technology and design principles, enhanced with the best features of the cloud. The result is a capability that delivers performance, scale, and value unlike any other cloud. This means applications scaling 12 times higher than other public clouds. It means higher application performance per node. It means powering AI workloads for one customer with a supercomputer fit to be among the top five in the world. And it means delivering massive compute power into the hands of medical researchers over a weekend to prove out life-saving innovations in the fight against COVID-19.
Big moments for Azure HPC and AI
Supercomputing in 2020
OpenAI
“Microsoft’s global network connects over 60 Azure regions, over 220 Azure data centers, over 170 edge sites, and spans the globe with more than 165,000 miles of terrestrial and subsea fiber. The global network connects to the rest of the internet via peering at our strategically placed edge points of presence (PoPs) around the world. Every day, millions of people around the globe access Microsoft Azure, Office 365, Dynamics 365, Xbox, Bing and many other Microsoft cloud services. This translates to trillions of requests per day and terabytes of data transferred each second on our global network. It goes without saying that the reliability of this global network is critical, so I’ve asked Principal Program Manager Mahesh Nayak and Principal Software Engineer Umesh Krishnaswamy to write this two-part post in our Advancing Reliability series. They explain how we’ve approached our network design, and how we’re constantly working to improve both reliability and performance.”—Mark Russinovich, CTO, Azure
In part one of this networking post, we presented the key design principles of our global network and explored how we emulate changes, how we achieve zero-touch operations and change automation, and how we plan capacity.