Category Archives : Storage, Backup & Recovery



NFS 3.0 support for Azure Blob storage is now in preview

Many enterprises and organizations are moving their data to Microsoft Azure Blob storage for its massive scale, security capabilities, and low total cost of ownership. At the same time, they continue running many apps on different storage systems that use the Network File System (NFS) protocol. Companies that maintain separate storage systems to satisfy protocol requirements end up with data silos: data resides in different places and requires additional migration or app-rewrite steps.

To help break down these silos and enable customers to run NFS-based applications at scale, we are announcing the preview of NFS 3.0 protocol support for Azure Blob storage. Azure Blob storage is the only storage platform that supports NFS 3.0 protocol over object storage natively (no gateway or data copying required), with object storage economics, which is essential for our customers.
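For applications already written against a filesystem, the appeal is that no code changes are needed once a container is mounted over NFS. As a minimal sketch (the mount point is a hypothetical example, not a fixed path), ordinary POSIX file I/O is all it takes:

```python
import os

def write_and_read(mount_point: str, name: str, data: bytes) -> bytes:
    """Write a file under an NFS-mounted blob container, then read it back.

    Because NFS 3.0 exposes the container as a regular filesystem, this is
    the same code you would run against any local or on-premises NFS path.
    """
    path = os.path.join(mount_point, name)
    with open(path, "wb") as f:
        f.write(data)
    with open(path, "rb") as f:
        return f.read()
```

Here `mount_point` would be wherever the container was mounted (for example, a hypothetical `/mnt/blobnfs`); the function itself has no Azure-specific dependencies, which is the point of protocol-level support.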

One of our Media and Entertainment (M&E) customers said, “NFS access to blob storage will enable our customers to preserve their legacy data access methods when migrating the underlying storage to Azure Blob storage.” Other customers have requested NFS for blob storage so they can reuse the same code from an on-premises solution to access files while controlling the overall cost of the solution. Financial




Advancing resilience through chaos engineering and fault injection

“When I first kicked off this Advancing Reliability blog series in my post last July, I highlighted several initiatives underway to keep improving platform availability, as part of our commitment to provide a trusted set of cloud services. One area I mentioned was fault injection, through which we’re increasingly validating that systems will perform as designed in the face of failures. Today I’ve asked our Principal Program Manager in this space, Chris Ashton, to shed some light on these broader ‘chaos engineering’ concepts, and to outline Azure examples of how we’re already applying these, together with stress testing and synthetic workloads, to improve application and service resilience.” – Mark Russinovich, CTO, Azure
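At its core, fault injection means deliberately introducing failures to verify that resilience mechanisms behave as designed. A minimal sketch (illustrative only, not Azure's actual tooling) wraps a dependency call with a configurable failure rate, then checks that a retry policy survives it:

```python
import random

def inject_fault(failure_rate: float, rng: random.Random):
    """Wrap a call so it raises intermittently, simulating dependency failures."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            if rng.random() < failure_rate:
                raise ConnectionError("injected fault")  # simulated outage
            return func(*args, **kwargs)
        return wrapper
    return decorator

def call_with_retry(func, attempts: int = 3):
    """The resilience pattern under test: retry on transient failure."""
    for i in range(attempts):
        try:
            return func()
        except ConnectionError:
            if i == attempts - 1:
                raise
```

Raising the failure rate to 1.0 exercises the failure path as well: the retry policy should eventually surface the error rather than hang, which is exactly the kind of behavior chaos experiments are meant to confirm.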


Developing large-scale, distributed applications has never been easier, but there is a catch. Yes, infrastructure is provided in minutes thanks to your public cloud, there are many language options to choose from, swaths of open source code available to leverage, and abundant components and services in the marketplace to build upon. Yes, there are good reference guides that help give a leg up on your solution architecture and design, such as the Azure Well-Architected Framework and other resources in the Azure Architecture Center. But while application development




Announcing the general availability of Azure shared disks and new Azure Disk Storage enhancements

Organizations are changing how they run their businesses, and many are looking to accelerate their move to the cloud to take advantage of the benefits it offers, including lower total cost of ownership (TCO) and improved flexibility and security, without sacrificing performance, application compatibility, or availability. We are committed to delivering new innovations to help our customers easily migrate their business-critical applications to Azure.

Today, we are announcing the general availability of shared disks on Azure Disk Storage—enabling you to more easily migrate your existing on-premises Windows and Linux-based clustered environments to Azure. We are also announcing important new enhancements for Azure Disk Storage to provide you with more availability, security, and flexibility on Azure.

Azure shared disks general availability

With shared disks, Azure Disk Storage is the only shared block storage in the cloud that supports both Windows and Linux-based clustered or high-availability applications. This unique offering allows a single disk to be simultaneously attached and used from multiple virtual machines (VMs), enabling you to run your most demanding enterprise applications in the cloud, such as clustered databases, parallel file systems, persistent containers, and machine learning applications, without compromising on well-known deployment patterns for fast failover
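In deployment templates, what distinguishes a shared disk is its `maxShares` property, which caps how many VMs can attach the disk simultaneously. A rough sketch of building such an ARM resource follows (the `apiVersion` shown is an assumption; consult current templates for the exact value):

```python
def shared_disk_template(name: str, size_gb: int, max_shares: int) -> dict:
    """Build an ARM resource dict for a Premium SSD shared across VMs.

    maxShares > 1 is what marks the disk as shared; a value of 1 would be
    an ordinary single-attach managed disk.
    """
    if max_shares < 2:
        raise ValueError("a shared disk needs maxShares of at least 2")
    return {
        "type": "Microsoft.Compute/disks",
        "apiVersion": "2020-06-30",  # assumed; check current API versions
        "name": name,
        "sku": {"name": "Premium_LRS"},
        "properties": {
            "creationData": {"createOption": "Empty"},
            "diskSizeGB": size_gb,
            "maxShares": max_shares,
        },
    }
```

Each clustered VM then attaches this one disk, and the cluster software (for example, Windows Server Failover Clustering or Pacemaker) coordinates access using SCSI persistent reservations.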




Run high scale workloads on Blob storage with new 200 TB object sizes

Azure Blob storage is a massively scalable object storage solution that serves from small amounts to hundreds of petabytes of data per customer across a diverse set of data types, including logging, documents, media, genomics, seismic processing, and more. Read the Introduction to Azure Blob storage to learn more about how it can be used in a wide variety of scenarios.

Increasing file size support for Blob storage

Customers with on-premises workloads today use files whose sizes are limited only by the filesystem in use, with maximums reaching into the exabytes. Most usage never approaches those filesystem limits, but specific workloads that depend on large files do scale into the tens of terabytes. We recently announced the preview of a new maximum blob size of 200 TB (specifically 209.7 TB), up from our current 5 TB limit, a 40x increase! At over 200 TB per object, this is far larger than the 5 TB maximum object size offered by other vendors. The increase allows workloads that currently require multi-terabyte files to move to Azure without additional work to break up these large objects.
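The 209.7 TB figure falls out of the block blob limits: a blob can be composed of up to 50,000 blocks, and the preview raises the maximum block size to 4,000 MiB. A quick check of the arithmetic:

```python
MIB = 1024 ** 2  # bytes per mebibyte

max_blocks = 50_000            # blocks per block blob
max_block_bytes = 4_000 * MIB  # new maximum block size in the preview

max_blob_bytes = max_blocks * max_block_bytes
max_blob_tb = max_blob_bytes / 10 ** 12  # decimal terabytes

# 50,000 blocks x 4,000 MiB comes to roughly 209.7 TB per blob.
```

The headline "200 TB" rounds this down; the precise ceiling depends on whether you count in decimal terabytes or binary tebibytes.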

This increase in




Minimize disruption with cost-effective backup and disaster recovery solutions on Azure

A top-of-mind concern among our customers is keeping their applications and data workloads running and recoverable in the case of unforeseen events or disasters. For example, COVID-19 has presented daunting challenges for IT, compounded by growing threats from ransomware and setbacks related to technical or operational failure. These considerations further highlight the importance of a plan to ensure business continuity. IT admins are looking to cloud-based backup and disaster recovery solutions as part of their business continuity strategy because they can onboard quickly, scale with storage needs, manage remotely, and save costs by avoiding additional on-premises investments.

Azure provides native cloud solutions for customers to implement simple, secure, and cost-effective business continuity and disaster recovery (BCDR) strategies for their applications and data, whether they are on-premises or on Azure. Once enabled, customers benefit from minimal maintenance and monitoring overhead, remote management capabilities, enhanced security, and the ability to recover services in a timely and orchestrated manner. Customers can also use their preferred backup and disaster recovery providers from a range of our partner solutions to extend their on-premises BCDR solutions to Azure.

All of this is possible without the need to




Achieve higher performance and cost savings on Azure with virtual machine bursting

Selecting the right combination of virtual machines (VMs) and disks is extremely important, as the wrong mix can impact your application’s performance. One way to choose which VMs and disks to use is based on your disk performance pattern, but that isn’t always easy. For example, a common scenario is unexpected or cyclical disk traffic, where peak disk performance is temporary and significantly higher than the baseline pattern. We frequently get asked by our customers, “should I provision my VM for baseline or peak performance?” Over-provisioning can lead to higher costs, while under-provisioning can result in poor application performance and customer dissatisfaction. Azure Disk Storage now makes this decision easier: we’re pleased to announce VM bursting support for Azure virtual machines.
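Bursting is commonly described as a credit-bucket scheme: running below baseline banks credits that can later be spent to run above baseline, up to a burst limit. This toy model (illustrative numbers and accounting, not Azure's exact implementation) shows the behavior customers observe:

```python
class BurstCredits:
    """Credit-bucket sketch of VM bursting (illustrative only)."""

    def __init__(self, baseline_iops: int, burst_iops: int, max_credits: float):
        self.baseline = baseline_iops
        self.burst = burst_iops
        self.max_credits = max_credits
        self.credits = max_credits  # assume a full bank at VM start

    def tick(self, demanded_iops: int) -> int:
        """Serve one second of demand; return the IOPS actually delivered."""
        if demanded_iops <= self.baseline:
            # Unused baseline capacity accrues as burst credits.
            self.credits = min(self.max_credits,
                               self.credits + (self.baseline - demanded_iops))
            return demanded_iops
        # Above baseline: spend banked credits, capped at the burst limit.
        extra = min(demanded_iops, self.burst) - self.baseline
        spend = min(extra, self.credits)
        self.credits -= spend
        return self.baseline + spend
```

The key takeaway is that bursts are short-term by construction: once the bank is drained, throughput falls back to baseline until quieter periods refill the credits.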

Get short-term, higher performance with no additional steps or costs

VM bursting, which is enabled by default, offers you the ability to achieve higher throughput for a short duration on your virtual machine instance with no additional steps or cost. Currently available on all Lsv2-series VMs in all supported regions, VM bursting is great for a wide range of scenarios like handling unforeseen spiky disk traffic smoothly, or processing batched jobs with




Azure Files enhances data protection capabilities

Protecting your production data is critical for any business. That’s why Azure Files has a multi-layered approach to ensuring your data is highly available, backed up, and recoverable. Whether it’s a ransomware attack, a datacenter outage, or a file share that was accidentally deleted, we want to make sure you can get everything back up and running again quickly. To give you peace of mind about your data in Azure Files, we are enhancing features including our new soft delete feature, share snapshots, redundancy options, and access control for data and administrative functions.

Soft delete: a recycle bin for your Azure file shares

Soft delete protects your Azure file shares from accidental deletion. To that end, we are announcing the preview of soft delete for Azure file shares. Think of soft delete as a recycle bin for your file shares. When a file share is deleted, it transitions to a soft-deleted state in the form of a soft-deleted snapshot. You can configure how long soft-deleted data remains recoverable before it is permanently erased.
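The lifecycle is easy to picture with a toy model (illustrative only, not the SDK API): deletion moves a share into a recoverable state stamped with its deletion time, and undelete succeeds only inside the retention window:

```python
import datetime as dt

class SoftDeleteShares:
    """Sketch of soft delete semantics for file shares (illustrative only)."""

    def __init__(self, retention_days: int):
        self.retention = dt.timedelta(days=retention_days)
        self.live = {}      # share name -> contents
        self.deleted = {}   # share name -> (contents, deleted_at)

    def delete(self, name: str, now: dt.datetime) -> None:
        # The share disappears from the live view but is retained as a
        # soft-deleted snapshot stamped with the deletion time.
        self.deleted[name] = (self.live.pop(name), now)

    def undelete(self, name: str, now: dt.datetime) -> None:
        contents, deleted_at = self.deleted[name]
        if now - deleted_at > self.retention:
            raise KeyError(f"{name} is past its retention window")
        del self.deleted[name]
        self.live[name] = contents
```

Once the retention window lapses, the soft-deleted snapshot is permanently erased and the data is no longer recoverable.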

Soft-deleted shares can be listed, but to mount them or view their contents, you must undelete them. Upon undelete, the share




Azure Blob Storage enhancing data protection and recovery capabilities

Enterprises, partners, and IT professionals store business-critical data in Azure Blob Storage. We are committed to providing best-in-class data protection and recovery capabilities to keep your applications running. Today, we are announcing the general availability of Geo-Zone-Redundant Storage (GZRS), which provides protection against regional disasters, and account failover, which lets you decide when to initiate a failover yourself.

Additionally, we are releasing two new preview features: versioning and point-in-time restore. These build on Azure Blob Storage’s existing capabilities, such as data redundancy, soft delete, account delete locking, and immutable blobs, making our data protection and restore capabilities even better.

Geo-Zone-Redundant Storage (GZRS)

Geo-Zone-Redundant Storage (GZRS) and Read-Access Geo-Zone-Redundant Storage (RA-GZRS) are now generally available, offering intra-regional and inter-regional high availability and disaster protection for your applications.

GZRS synchronously writes three copies of your data across multiple Azure availability zones, similar to zone-redundant storage (ZRS), giving you continued read and write access even if a datacenter or availability zone is unavailable. In addition, GZRS asynchronously replicates your data to the secondary region of the geo pair to protect against regional unavailability. RA-GZRS exposes a read endpoint on this secondary replica, allowing you to read data in the event of primary




Manage and find data with Blob Index for Azure Storage—now in preview


Blob Index, a managed secondary index for Azure Blob storage that lets you store multi-dimensional attributes describing your data objects, is now available in preview. Built on top of blob storage, Blob Index offers consistent reliability, availability, and performance for all your workloads. It provides native object management and filtering capabilities, allowing you to categorize and find data based on attribute tags set on the data.

Manage and find data with Blob Index

As datasets get larger, finding specific related objects in a sea of data can be difficult and frustrating. Previously, clients used the ListBlobs API to retrieve up to 5,000 lexicographically ordered results at a time, parse through the list, and repeat until they found the blobs they wanted. Some users also resorted to maintaining a separate lookup table to find specific objects, and these separate tables can drift out of sync, increasing cost, complexity, and frustration. Customers should not have to worry about data organization or index-table management; they should be able to focus on building powerful applications to grow their business.
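Conceptually, Blob Index lets the service do what those separate lookup tables used to: filter objects by attribute rather than by name prefix. A small stand-in in plain Python (the real filtering happens server-side through the blob API) shows the query model:

```python
def find_blobs_by_tags(index: dict, **required_tags) -> list:
    """Return blob names whose tags match every key=value filter.

    `index` maps blob name -> tag dict, standing in for the server-side
    Blob Index; with the real feature, no client-side table is needed.
    """
    return sorted(
        name for name, tags in index.items()
        if all(tags.get(k) == v for k, v in required_tags.items())
    )
```

With the managed index, the tags live on the blobs themselves, so there is no second table to keep in sync with the data.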

Blob Index alleviates the data management and querying problem with support for all blob types (Block Blob, Append Blob, and Page Blob). Blob Index is exposed through a familiar blob storage




Cross Region Restore (CRR) for Azure Virtual Machines using Azure Backup

Today we’re introducing the preview of Cross Region Restore (CRR) support for Microsoft Azure Virtual Machines (VMs) using Microsoft Azure Backup.

Azure Backup uses a Recovery Services vault to hold customers’ backup data, offering both local and geographic redundancy. To ensure high availability of backed-up data, Azure Backup defaults its storage settings to geo-redundancy, so backed-up data in the primary region is geo-replicated to the Azure-paired secondary region. Previously, if Azure declared a disaster in the primary region, the replicated data was available to restore in the secondary region only. With this new feature, customers can initiate restores in the secondary region at will to mitigate downtime from a real disaster in the primary region, making secondary region restores completely customer-controlled. Azure Backup uses the backed-up data replicated to the secondary region for such restores.

Customers can use this feature to leverage the secondary region data described above in the following scenarios:

Full outage: Previously, if a disaster struck a customer’s primary Azure region, the customer had to wait for Azure to declare the disaster before they could access their secondary region data. With the Cross Region Restore feature, there