We introduced Azure Availability Zones during Microsoft Ignite as part of our continuing expansion of Azure’s support for the most demanding, mission-critical workloads. Today I’m excited to announce the general availability of Availability Zones beginning with select regions in the United States and Europe.
With Availability Zones, in addition to the broadest global coverage, Azure now offers the most comprehensive resiliency strategy in the industry, from mitigating rack-level failures with availability sets to protecting against large-scale events with failover to separate regions. Within a region, Availability Zones increase fault tolerance through physically separate locations, each with independent power, network, and cooling.
For many companies, especially those in regulated industries that are increasingly moving their mission-critical applications to the cloud, resiliency and business continuity have become a crucial focus. From online commerce systems in retail to customer-facing applications in financial services, the stakes are high for organizations and enterprises to deliver for their customers. Even a minor issue can have a major impact on a company’s brand reputation, customer satisfaction, and bottom line. In this environment, it’s imperative to develop applications with the highest operational standards, anchored by a multi-layered resiliency approach.
“Availability Zones give us the combination of
Imagine you’re driving down the road. As long as the road is straight, you can see everything in front of you. But what happens when the road curves? Your brain makes assumptions based on past experiences to fill in what can’t be seen. For autonomous vehicles, the challenge is to give them the ability to make the same assumptions.
Our customer Elektronische Fahrwerksysteme GmbH (EFS) is working on that problem for a major auto manufacturer. We recently published a case study that describes how EFS uses NVIDIA GPUs in Microsoft Azure to analyze 2D images.
EFS had never applied deep learning to this kind of image processing before. Using Azure, they quickly created a proof-of-concept environment, which let them verify their algorithms and demonstrate value without large upfront investments of time and capital.
“The innovative ideas we’ve implemented so far give us trust in a new deep learning architecture and in solutions that will rely on it,” EFS software developer Max Jesch said. “We proved that it’s possible to use deep learning to analyze roads. That is a really big deal. As far as we know, EFS is the first company to
Ever since I started working on the Virtual Machine (VM) platform in Azure, there has been one feature request that I consistently hear from customers. I don’t think words can describe how excited I am to announce that today we are launching the public preview of Serial Console access for both Linux and Windows VMs.
Managing and running virtual machines can be hard. We offer extensive tools to help you manage and secure your VMs, including patch management, configuration management, agent-based scripting, automation, SSH/RDP connectivity, and support for DevOps tooling like Ansible, Chef, and Puppet. However, we have learned from many of you that sometimes this isn’t enough to diagnose and fix issues. Maybe a change you made resulted in an fstab error on Linux and you cannot connect to fix it. Maybe a bcdedit change you made pushed Windows into a weird boot state. Now you can debug both with direct serial-based access and fix these issues with minimal effort. It’s like having a keyboard plugged into the server in our datacenter, but from the comfort of your office or home.
Serial Console for Virtual Machines is available in all global regions starting
GPUs have a wide variety of uses across high-performance computing, artificial intelligence, and visualization. That’s why Microsoft has partnered with NVIDIA to bring a broad range of NVIDIA GPUs to Azure. Join us in San Jose next week at NVIDIA’s GPU Technology Conference to learn how Azure customers combine the flexibility and elasticity of the cloud with the capability of NVIDIA’s GPUs.
At Booth 603, Microsoft and partners will have demos of customer use cases and experts on hand to talk about how Azure is the cloud for any GPU workload. We will have demos from our partners at Altair, PipelineFX, and Workspot. In addition, you can learn about work we’ve done in oil & gas, automotive, and artificial intelligence.
Partner and customer sessions in the conference program include:
- Transforming the AEC business with cloud workstations in Azure – Jimmy Chang (Workspot)
- Deploying machine learning on the oilfield: from the labs to the edge – Matthieu Boujonnier, Bartosz Boguslawski, Loryne Bissuel-Beauvais (Schneider Electric)
- Autodesk BIM Cloud Workspace on Azure (panel) – Frank Wolbertus (TBI), Adam Jull (IMSCAD Global), Marc Sleegers (Autodesk), Allen Furmanski (Citrix Systems)
- Identifying new therapeutics for Parkinson’s Disease using virtual neurons on an Azure-hosted
Today, we are excited to announce support for backup of large-disk VMs, along with a set of improvements aimed at reducing the time taken for backup and restore. These improvements and the large disk support are based on a new VM backup stack and are available for both managed and unmanaged disks. You can seamlessly upgrade to this new stack without any impact to your ongoing backup jobs, and there is no change to how you set up backup or restore.
This announcement combines multiple feature improvements:
- Large disk support – You can now back up VMs with disk sizes up to 4 TB (4,095 GB), both managed and unmanaged.
- Instant recovery point – A recovery point is available as soon as the snapshot phase of the backup job completes, so you no longer need to wait for the data transfer phase to finish before triggering a restore. This is particularly useful when you want to apply a patch: you can go ahead once the snapshot phase is done, and use the local snapshot to revert if the patch goes bad. This is analogous to the checkpoint solution offered by Hyper-V or VMware with
There is a new urgency for reaching oil more efficiently in a capital- and risk-intensive environment, especially with the narrow margins around non-traditional exploration. Offshore drilling for oil can cost several hundred million dollars, with no guarantee of finding oil at all. On top of that, the high cost of data acquisition, drilling, and production reduces average profit margins to less than ten percent. Also, the expense and strict time limits of petroleum licenses impose a fixed window for exploration, leaving a limited solution envelope in which to complete data acquisition, data processing, and interpretation of 3-D images.
High performance computing (HPC) helps oil and gas companies accelerate ROI and minimize risk by giving the engineers and geoscientists who identify and analyze resources the computing power to inform crucial project decisions. Azure provides true HPC in the cloud for customers in the oil and gas industry, with a broad range of compute resources to meet the needs of oil and gas workloads. This ranges from single-node jobs that use our compute-optimized F-series virtual machines to tightly coupled many-node jobs that run on the H-series virtual machines, and all
Today we are pleased to announce two new Virtual Machine (VM) sizes, E64i_v3 and E64is_v3, which are isolated to hardware and dedicated to a single customer. These VMs are best suited for workloads that require a high degree of isolation from other customers for compliance and regulatory requirements. You can also choose to further subdivide the resources by using Azure support for nested VMs.
The E64i_v3 and E64is_v3 have the same performance and pricing structure as their cousins, the E64_v3 and E64s_v3. These size additions will be available in each of the regions where the E64_v3 and E64s_v3 are available today. The lowercase ‘i’ in the VM name denotes that they are isolated sizes.
Unlike the E64_v3 and E64s_v3, the two new sizes E64i_v3 and E64is_v3 are hardware bound sizes. They will live and operate on our Intel® Xeon® Processor E5-2673 v4 2.3GHz hardware only and will be available until at least December 2021. We will provide reminders 12 months in advance of the official decommissioning of the sizes and offer an updated isolated size like these sizes on our next hardware version.
These two new E64i_v3 and E64is_v3 sizes will be available in the on-demand portal. Starting on
If you have Virtual Machines (VMs) running in Azure, you can take advantage of discounted pricing on Reserved Instances (RIs) and pre-pay for your Virtual Machines. The Microsoft Consumption recommendation APIs look at your usage over seven, 30, or 60 days and recommend optimum configurations of Reserved Instances. They calculate the cost you would pay without RIs and the cost you will pay with RIs, optimizing your savings. The following example shows the calculations for a seven-day recommendation, but the same method applies to 30- or 60-day recommendations.
Let us assume your hourly Windows VM usage for a specific SKU and region looks like the following graph (minimum 65 units, maximum 127 units) over seven days.
If you purchase 75 Reserved Instances, for hour 79, you will pay the following:
- 75 Reserved Instances, which are pre-paid when you purchase the RIs. A Reserved Instance covers the hardware cost of running the VMs, so you will pay 75 hours of the software-only price, as described in the document for Windows software costs not included with Reserved Instances.
- Since usage for this hour is 80, you will pay for five
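The billing split described above can be sketched in a few lines of Python. This is an illustrative model only, not the actual Consumption API logic, and the rates used are made-up placeholders rather than real Azure prices:

```python
# A minimal sketch of how the hour-79 charge in the example breaks down.
# The rates below are hypothetical placeholders, not real Azure prices.
SOFTWARE_RATE = 0.10   # hypothetical hourly Windows software-only rate
ON_DEMAND_RATE = 0.50  # hypothetical full on-demand hourly rate

def hourly_charge(usage, reserved):
    """Split one hour of usage into RI-covered and on-demand portions."""
    covered = min(usage, reserved)       # billed at the software-only rate
    overflow = max(usage - reserved, 0)  # billed at the full on-demand rate
    cost = covered * SOFTWARE_RATE + overflow * ON_DEMAND_RATE
    return cost, covered, overflow

# Hour 79 from the example: 80 instances used, 75 Reserved Instances purchased.
cost, covered, overflow = hourly_charge(80, 75)
print(covered, overflow)  # 75 RI-covered hours, 5 on-demand hours
```

Summing this split across every hour of the lookback window, for each candidate RI quantity, is the essence of how a recommendation compares RI cost against pure on-demand cost.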
Azure Automation provides the ability to automate, configure, and deploy updates across your hybrid environment using serverless automation. These capabilities are now generally available for all customers.
With the release of these new capabilities, you can now:
- Get an inventory of operating system resources, including installed applications and other configuration items.
- Get update compliance and deploy required fixes for Windows and Linux systems across hybrid environments.
- Track changes across services, daemons, software, registry, and files to promptly investigate issues.
These additional capabilities are now available from the Azure Resource Manager virtual machine (VM) experience as well as from the Automation account when managing at scale within the Azure portal.
Azure virtual machine integration
Integration with virtual machines enables update management, inventory, and change tracking for Windows and Linux computers directly from the VM blade.
With update management, you will always know the compliance status for Windows and Linux, and you can create scheduled deployments to orchestrate the installation of updates within a defined maintenance window. The ability to exclude specific updates is also available, with detailed troubleshooting logs to identify any issues during the deployment.
The inventory of your VM in-guest resources gives you visibility into installed applications as
We have heard from many customers that cloud security is one of their top concerns. Another thing we’ve heard from customers is that they want clarity around what they are responsible for securing in Azure and what Azure will do. Azure helps provide a highly secure foundation, built from the ground up, to host your infrastructure, applications, and data.
We understand the importance of protecting customer data, which is why we are committed to helping secure the datacenters that contain your data. Microsoft has invested over a billion dollars into security, including the physical security of the Azure platform, so you can devote your time and resources towards other business initiatives. Over the next few months, as part of the secure foundation blog series, we’ll discuss the components of physical, infrastructure (logical) and operational security that help make up Azure’s platform. Today, we are focusing on physical security.
Physical security refers to how Microsoft designs, builds, and operates datacenters in a way that strictly controls physical access to the areas where customer data is stored. Our datacenters are certified to comply with the most comprehensive portfolio of internationally recognized standards and certifications of any cloud service provider. We have an entire