Today, I am thrilled to announce the general availability of Global VNet Peering in all Azure public regions, empowering you to take the ease, simplicity, and isolation of VNet peering to the next level.
Azure Virtual Network (VNet) is a logically isolated section of the Azure cloud that enables you to securely connect Azure resources to each other. VNet lets you create your own private space in Azure – your own network bubble, as I like to call it.
With Global VNet Peering, you can enable connectivity across all Azure public regions without additional bandwidth restrictions while, as always, keeping all your traffic on the Microsoft backbone. Global VNet Peering provides you with the flexibility to scale and control how workloads connect across geographical boundaries, unlocking global scale for a plethora of scenarios such as data replication, database failover, and disaster recovery through private IP addresses. You can also share resources across different business unit VNets through a global peering connection – the hub-and-spoke model, as we refer to it. As your organization grows across geographic boundaries, you can continue to share resources like firewalls or other virtual appliances via peering.
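As a hedged sketch, a global peering between two VNets in different regions can be created with the Azure CLI. The resource group, VNet names, and peering name below are placeholders, and on older CLI versions the remote VNet flag may be --remote-vnet-id instead of --remote-vnet:

```shell
# Peer vnetA (e.g., East US) to vnetB (e.g., West Europe).
# Peering is directional: run the mirror-image command from vnetB as well.
az network vnet peering create \
  --resource-group myResourceGroup \
  --name vnetA-to-vnetB \
  --vnet-name vnetA \
  --remote-vnet /subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/vnetB \
  --allow-vnet-access
```

Once both directions are established, resources in the two VNets can reach each other over private IP addresses.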
We are excited to announce the general availability of a new feature for Azure virtual machines (VMs) called Write Accelerator! Write Accelerator is a new disk capability that offers customers sub-millisecond writes for their disks. Write Accelerator is initially supported on M-Series VMs with Azure Managed Disks and Premium Storage. Write Accelerator is recommended for workloads that require highly performant updates, such as database transaction log writes. Write Accelerator is an exclusive functionality for Azure M-series virtual machines in recognition of the performance sensitive workload that is run with these types of high-end VMs. Technical details on enablement and restrictions can be found in our documentation.
Low latency, high transaction workloads – Write Accelerator, in conjunction with M-series VMs on Managed Disks, is targeted at database platforms that benefit from highly performant transactional updates, such as SQL Server, Oracle, and SAP HANA. Write Accelerator is ideally suited for scenarios where log file updates must persist to disk in a highly performant manner for modern databases. Write Accelerator disks offer the same reliability as Azure Premium Disks. In tests, customers reported several-fold faster disk writes, improving the performance and scalability of critical transaction and redo logs.
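As a minimal sketch, Write Accelerator can be enabled on an existing disk with the Azure CLI. The VM and resource group names are placeholders; the --write-accelerator argument takes LUN=true/false pairs (or os=true for the OS disk):

```shell
# Enable Write Accelerator on the data disk at LUN 1 of an M-series VM.
# Supported only on M-series VMs with Premium Storage managed disks.
az vm update \
  --resource-group myResourceGroup \
  --name myM64sVM \
  --write-accelerator 1=true
```

Placing only the transaction/redo log disk behind Write Accelerator, while leaving data files on regular Premium disks, matches the log-write scenario the feature targets.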
Today, we are excited to announce the availability of the OS Disk Swap capability for VMs using Managed Disks. Until now, this capability was only available for Unmanaged Disks.
With this capability, it becomes very easy to restore a previous backup of the OS disk or swap out the OS disk for VM troubleshooting without having to delete the VM. To leverage this capability, the VM needs to be in a stop-deallocated state. After the VM is stop-deallocated, the resource ID of the existing managed OS disk can be replaced with the resource ID of the new managed OS disk. You will need to specify the name of the new disk to swap. Please note that you cannot switch the OS type of the VM, i.e., swap a Linux OS disk for a Windows OS disk.
Here are the instructions on how to leverage this capability:
To read more about using Azure CLI, see Change the OS disk used by an Azure VM using the CLI.
For CLI, pass the full resource ID of the new disk to the --os-disk parameter. The disk and VM names below are placeholders.
NOTE: requires Azure CLI version 2.0.25 or later
id=$(az disk show -g swaprg -n osdisk2 --query id -o tsv)
az vm update -g swaprg -n myVM --os-disk $id
Today we’re sharing the public preview of per disk metrics for all Managed and Unmanaged Disks. This enables you to closely monitor your disks and make the right disk selection to suit your application usage pattern. You can also use these metrics to create alerts, diagnose issues, and build automation.
Prior to this, you could get the aggregate metrics for all the disks attached to the virtual machine (VM), which provided limited insights into the performance characteristics of your application, especially if your workload is not evenly distributed across all attached disks. With this release, it is now very easy to drill down to a specific disk and figure out the performance characteristics of your workload.
Here are the new metrics that we’re enabling with today’s preview:
- OS Disk Read Operations/Sec
- OS Disk Write Operations/Sec
- OS Disk Read Bytes/sec
- OS Disk Write Bytes/sec
- OS Disk QD
- Data Disk Read Operations/Sec
- Data Disk Write Operations/Sec
- Data Disk Read Bytes/sec
- Data Disk Write Bytes/sec
- Data Disk QD
The following GIF shows how easy it is to build a metric dashboard for a specific disk in the Azure portal.
Additionally, because of Azure Monitor integration with Grafana, it’s very easy to build a Grafana dashboard with these metrics.
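As an illustrative sketch (VM and resource group names are placeholders), one of the new per-disk metrics can also be pulled directly with the Azure CLI:

```shell
# Look up the VM's resource ID, then list a per-disk metric
# at one-minute granularity via Azure Monitor.
vmid=$(az vm show --resource-group myResourceGroup --name myVM --query id -o tsv)
az monitor metrics list \
  --resource "$vmid" \
  --metric "Data Disk Read Operations/Sec" \
  --interval PT1M
```

The same query can back an Azure Monitor alert rule, so you can be notified when a specific disk approaches its provisioned IOPS.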
I am proud to announce the general availability of Azure Container Instances (ACI) – a serverless way to run both Linux and Windows containers. ACI offers you an on-demand compute service delivering rapid deployment of containers with no VM management and automatic, elastic scale. When we released the ACI preview last summer, it was the first of its kind and fundamentally changed the landscape of container technology. It was the first service to deliver innovative serverless containers in the public cloud. As part of today’s announcement, I am also excited to announce new lower pricing, making it even less expensive to deploy a single container in the cloud. ACI also continues to be the fastest cloud-native option for customers, providing compute in mere seconds along with rich features like interactive terminals within running containers and an integrated Azure portal experience.
In addition to the ease of use and granular billing available with ACI, customers are choosing ACI as their serverless container solution because of its deep security model, which protects each individual container at the hypervisor level and provides a strong security boundary for multi-tenant scenarios. It can sometimes be a challenge to secure multi-tenant workloads running on shared infrastructure.
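A minimal sketch of deploying a container with ACI via the Azure CLI (the resource group and container names are placeholders; the image is Microsoft's public hello-world sample):

```shell
# Run a single Linux container with a public IP address.
# ACI bills per second for the CPU and memory requested.
az container create \
  --resource-group myResourceGroup \
  --name hello-aci \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --cpu 1 --memory 1.5 \
  --ip-address Public --ports 80
```

The container typically starts in seconds, with no VM to provision, and `az container logs` and `az container exec` provide the log and interactive-terminal experience mentioned above.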
My high school physics teacher taught us about metal fatigue by having everyone bend paper clips back and forth until they broke. In the real world, engineers use computer simulations to test their designs. From the trivial paperclip to the life-saving crash analysis, computer-aided engineering (CAE) improves products around us every day. But accessing the massive power needed for these simulations can be tough for small organizations.
That’s where our partners at Altair have stepped in. Altair is democratizing access to CAE by building their Software-as-a-Service (SaaS) offerings on Microsoft Azure. In a case study we recently published, Altair describes how their HyperWorks Unlimited Virtual Appliance gives customers the combination of software and scale they need to quickly run their CAE workloads.
But that’s not the end of the story. Altair recently brought their Inspire software to a SaaS model as well. Inspire Unlimited provides a visual cloud collaboration platform for engineering. Inspire Unlimited attains the required scalability by onboarding multiple users on a virtual machine. Using Azure’s NV-series virtual machines, which feature NVIDIA Tesla M60 GPUs, Altair’s customers can get powerful virtual workstations without having to purchase expensive hardware. This allows users to collaborate with only a web browser.
Today, we are delighted to announce increased scale limits for Azure Backup. Users can now create as many as 500 Recovery Services vaults per subscription per region, up from the earlier limit of 25. Customers who had been hitting the vault limit can now create additional vaults to manage their resources better. In addition, the number of Azure virtual machines that can be registered against each vault has been increased to 1,000 from the earlier limit of 200 machines per vault.
Key benefits

- Better management of resources between departments in an organization: Flexibility to create a large number of vaults under a subscription, and a large number of containers under a vault, based on departmental requirements without worrying about hitting vault limits.
- Better granularity in reporting and monitoring of data within vaults: Users can create separate vaults segregated based on organizational needs and get more granular reporting of backup usage on a per-vault basis.
- Systematic and comprehensive billing: Users can get vault-level detailed billing for a subscription for better financial management within an organization.
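For illustration (the group, vault name, and region are placeholders), a per-department Recovery Services vault can be created with the Azure CLI:

```shell
# Create one of up to 500 Recovery Services vaults
# allowed per subscription per region.
az backup vault create \
  --resource-group myResourceGroup \
  --name finance-dept-vault \
  --location eastus
```

Creating one vault per department in this way is what enables the per-vault reporting and billing granularity described above.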
We are pleased to announce the general availability of Application Security Groups (ASG) in all Azure regions. This feature provides security micro-segmentation for your virtual networks in Azure.
Network security micro-segmentation
ASGs enable you to define fine-grained network security policies based on workloads and centered on applications, instead of explicit IP addresses. ASGs provide the capability to group VMs with monikers and to secure applications by filtering traffic from trusted segments of your network.
Implementing granular traffic controls improves isolation of workloads and protects them individually. If a breach occurs, this technique limits the potential impact of lateral movement through your networks by attackers.
Security definition simplified
With ASGs, filtering traffic based on application patterns is simplified, using the following steps:

- Define your application groups, providing a descriptive moniker name that fits your architecture. You can use them for applications, workload types, systems, tiers, environments, or any role.
- Define a single collection of rules using ASGs and network security groups (NSGs). You can apply a single NSG to your entire virtual network, on all subnets; a single NSG gives you full visibility into your traffic policies and a single place for management.
- Scale at your own pace.
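A hedged sketch of these steps with the Azure CLI (the group, NIC, NSG, and rule names are all placeholders):

```shell
# 1. Create an application security group for the web tier.
az network asg create --resource-group myResourceGroup --name webServers

# 2. Attach a VM's NIC to the group.
az network nic ip-config update \
  --resource-group myResourceGroup \
  --nic-name myWebVmNic --name ipconfig1 \
  --application-security-groups webServers

# 3. Allow inbound web traffic to the group, by moniker rather than IP.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNsg --name AllowWebInbound \
  --priority 100 --direction Inbound --access Allow --protocol Tcp \
  --source-address-prefixes Internet \
  --destination-asgs webServers \
  --destination-port-ranges 80 443
```

Because the rule targets the webServers moniker, VMs added to the group later inherit the policy without any rule changes.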
We introduced Azure Availability Zones during Microsoft Ignite as part of our continuing expansion of Azure’s support for the most demanding, mission-critical workloads. Today I’m excited to announce the general availability of Availability Zones beginning with select regions in the United States and Europe.
With Availability Zones, in addition to the broadest global coverage, Azure now offers the most comprehensive resiliency strategy in the industry, from mitigating rack-level failures with availability sets to protecting against large-scale events with failover to separate regions. Within a region, Availability Zones increase fault tolerance with physically separated locations, each with independent power, network, and cooling.
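As an illustrative sketch (the names, image alias, and region are placeholders; only select regions support zones), a VM can be pinned to a specific zone with the Azure CLI:

```shell
# Create a VM in zone 1 of a zone-enabled region.
# Deploy replicas to zones 2 and 3 for zone-redundant resiliency.
az vm create \
  --resource-group myResourceGroup \
  --name myZonalVM \
  --image Ubuntu2204 \
  --zone 1 \
  --location eastus2
```

Spreading instances of a tier across zones, behind a zone-redundant load balancer, is the pattern that protects against a full datacenter outage within the region.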
For many companies, especially those in regulated industries, who are increasingly moving their mission-critical applications to the cloud, resiliency and business continuity have become a crucial focus. From online commerce systems in retail to customer-facing applications in financial services, the stakes are high for organizations and enterprises to deliver for their customers. Even a minor issue can have a major impact on a company’s brand reputation, customer satisfaction, and bottom line. In this environment, it’s imperative to develop applications with the highest operations standards anchored by a multi-layered resiliency approach.
“Availability Zones give us the combination of
Imagine you’re driving down the road. As long as the road is straight, you can see everything in front of you. But what happens when the road curves? Your brain makes assumptions based on past experiences to fill in what can’t be seen. For autonomous vehicles, the challenge is to give them the ability to make the same assumptions.
Our customer Elektronische Fahrwerksysteme GmbH (EFS) is working on that problem for a major auto manufacturer. We recently published a case study that describes how EFS uses NVIDIA GPUs in Microsoft Azure to analyze 2D images.
EFS had never applied deep learning to this kind of image processing before. Using Azure allowed them to quickly create a proof of concept environment. This allowed them to verify their algorithms and show value without having to make large upfront investments in time and capital.
“The innovative ideas we’ve implemented so far give us trust in a new deep learning architecture and in solutions that will rely on it,” EFS software developer Max Jesch said. “We proved that it’s possible to use deep learning to analyze roads. That is a really big deal. As far as we know, EFS is the first company to