Category Archives : Storage, Backup & Recovery



An update on the integration of Avere Systems into the Azure family

It has been three months since we closed on the acquisition of Avere Systems. Since that time, we’ve been hard at work integrating the Avere and Microsoft families, growing our presence in Pittsburgh and meeting with customers and partners at The National Association of Broadcasters Show.

It’s been exciting to hear how Avere has helped businesses address a broad range of compute and data challenges, helping produce blockbuster movies and life-saving drug therapies faster than ever before with hybrid and public cloud options. I’ve also appreciated the opportunity to address our customers’ questions and concerns, and thought it might be helpful to share the most common ones with the broader Azure/Avere community:

When will Avere be available on Microsoft Azure? We are on track to release Microsoft Avere vFXT to the Azure Marketplace later this year. With this technology, Azure customers will be able to run compute-intensive applications entirely on Azure, or to take advantage of our scale on an as-needed basis.

Will Microsoft continue to support the Avere FXT physical appliance? Yes, we will continue to invest in, upgrade, and support the Microsoft Avere FXT physical appliance, which customers tell us is particularly important for their on-premises and hybrid environments.




Blue-Green deployments using Azure Traffic Manager

Azure Traffic Manager, Azure’s DNS-based load balancing solution, is used by customers for a wide variety of use cases: routing a global user base to the Azure endpoints that give them the fastest, lowest-latency experience, providing seamless auto-failover for mission-critical workloads, and migrating from on-premises to the cloud. One key use case is making software deployments smoother, with minimal impact to users, by implementing a Blue-Green deployment process using Traffic Manager’s weighted round-robin routing method. This blog will show how to implement Blue-Green deployment using Traffic Manager, but before we dive deep, let us discuss what we mean by Blue-Green deployment.

Blue-Green deployment is a software rollout method that can reduce the impact of interruptions caused by issues in the new version being deployed. This is achieved by exposing the new version of the software to a limited set of users and expanding that user base gradually until everyone is using the new version. If at any time the new version is causing issues, for example a broken authentication workflow in the new version of a web application, all the users can be instantly* redirected
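The weighted split at the heart of this process can be sketched in plain Python. This is illustrative only, not Azure code: the endpoint names and weights are made up, and real Traffic Manager performs this selection at the DNS level, subject to DNS TTL caching.

```python
import random

def pick_endpoint(weights, rng=random):
    """Pick an endpoint name according to its relative weight,
    mimicking Traffic Manager's weighted round-robin routing."""
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names], k=1)[0]

# Start of a Blue-Green rollout: ~90% of resolutions go to the
# current (blue) deployment, ~10% to the new (green) one.
weights = {"blue-endpoint": 90, "green-endpoint": 10}
sample = [pick_endpoint(weights) for _ in range(10_000)]
print(sample.count("green-endpoint") / len(sample))  # roughly 0.10

# Rollback: setting the green weight to 0 sends all new DNS
# resolutions back to blue (cached answers expire with the TTL,
# which is why the redirect is "instant" only up to DNS caching).
weights["green-endpoint"] = 0
```

Gradually raising the green weight toward 100 completes the rollout; dropping it to 0 is the rollback path.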




New updates for Microsoft Azure Storage Explorer

After the recent general availability release of Storage Explorer, we have added new features in the latest 1.1 release to align with the Azure Storage platform:

- Azurite cross-platform emulator
- Access tiers, which efficiently consume resources based on how frequently a blob is accessed
- The removal of the SAS URL start time, to avoid datacenter synchronization issues
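Why removing the SAS start time helps is easiest to see with a small, purely illustrative sketch (no Azure SDK involved, and the validity check is simplified): a SAS token carrying an explicit start time can be rejected by a storage node whose clock lags the issuing machine, while a token with only an expiry is immediately usable.

```python
from datetime import datetime, timedelta

def sas_is_valid(server_now, start, expiry):
    """A token is honored only inside its [start, expiry] window
    as judged by the *server's* clock."""
    return (start is None or server_now >= start) and server_now <= expiry

issuer_now = datetime(2018, 6, 1, 12, 0, 0)
server_now = issuer_now - timedelta(minutes=5)   # server clock lags 5 min
expiry = issuer_now + timedelta(hours=1)

# With start = issuer's "now", the lagging server rejects the token.
print(sas_is_valid(server_now, issuer_now, expiry))   # False
# With no start time, the token works right away.
print(sas_is_valid(server_now, None, expiry))         # True
```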

Storage Explorer is a great tool for managing the contents of your Azure storage account. You can upload, download, and manage blobs, files, queues, and Cosmos DB entities. Additionally, you gain easy access to manage your Virtual Machine disks, work with either Azure Resource Manager or classic storage accounts, and manage and configure cross-origin resource sharing (CORS) rules. Storage Explorer also works on public Azure, the sovereign Azure clouds, and Azure Stack.

Let’s go through some example scenarios where Storage Explorer helps with your daily job.

Connect to your Azure Cloud from Storage Explorer

To get started using Storage Explorer, sign in to your Azure account and stay connected to your subscriptions. If you have an account for Azure, an Azure sovereign cloud, or Azure Stack, you can easily sign in to it from Storage Explorer’s Add an Account dialog.

In addition, now Storage Explorer shares the




Customer success stories with Azure Backup: Somerset County Council

This is a continuation of our customer success story blog series for Azure Backup. In the previous case study we covered Russell Reynolds; here we will discuss how the United Kingdom’s Somerset County Council was able to improve its backup and restore efficiency and reduce its backup costs using Azure Backup.

Customer background

The United Kingdom’s Somerset County Council provides government services to its 550,000 residents. It is one of the oldest local governments in the world, established around 700 A.D. Somerset had been using an in-house storage manager platform for its on-premises data backup and restore.

“The biggest problems we had were with flexibility and scalability. We had racks and racks of disks, and we had to wait a long time to get new hardware. The complexities with the product itself also introduced many challenges,” says Dean Cridland, Senior IT Officer at Somerset County Council. In addition, as the data footprint grew, IT staff struggled to hit their daily backup SLAs. So they were looking for a modern backup solution that could accommodate their ever-growing data footprint, meet their backup SLAs, and align with their strategy of moving to the cloud.

How Azure Backup helped

Somerset deployed Azure Backup Server to




In case you missed it: 10 of your questions from our GDPR webinars

During the last few months, I’ve spoken with a lot of Azure customers, both in person and online, about how to prepare for the May 25, 2018 deadline for compliance with the EU’s General Data Protection Regulation (GDPR). The GDPR imposes new rules on companies, government agencies, non-profits, and other organizations that offer goods and services to people in the European Union (EU), or that collect and analyze data tied to EU residents. The GDPR applies no matter where you are located. The GDPR will dramatically shift the landscape for data collection and analysis, since under the GDPR, many practices that were commonplace will be forbidden, and companies must take care in assessing their exposure and how to comply.

I recently participated in a Microsoft series of webinars about the GDPR and its implications for IT teams and cloud computing. We got a lot of questions from the audience in these webinars, so I thought I would respond to some of the most frequently asked ones here, along with links to the on-demand webinars.

Q: Does the GDPR allow me to send data outside the EU?

A: GDPR applies globally, so no matter




General availability: Azure Storage metrics in Azure Monitor

Azure Storage metrics in Azure Monitor, which was previously in public preview, is now generally available.

Azure Monitor is the platform service that provides a single source of monitoring data for Azure resources. With Azure Monitor, you can visualize, query, route, archive, and take action on the metrics and logs coming from resources in Azure. You can work with the data using the Monitor portal blade, the Azure Monitor Software Development Kits (SDKs), and several other methods. Azure Storage is one of the fundamental services in Azure, and now you can chart and query storage metrics alongside other metrics in one consolidated view. For more information on how Azure Storage metrics are defined, see the documentation.

The features built on top of metrics are available differently per cloud:

- Azure Monitor SDK (REST, .NET, Java & CLI): available in all clouds
- Metric chart: available in the public cloud, and coming soon in sovereign clouds
- Alert: available in the public cloud, and coming soon in sovereign clouds
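As a rough sketch of what the REST surface looks like, the snippet below builds the Azure Monitor metrics request for a storage account. The subscription ID, resource group, account name, and metric names are placeholders, and a real call would additionally need an Azure AD bearer token in the Authorization header.

```python
from urllib.parse import urlencode

ARM = "https://management.azure.com"

def storage_metrics_url(subscription_id, resource_group, account, metric_names):
    """Build the Azure Monitor REST request URL for account-level
    storage metrics (all names here are illustrative placeholders)."""
    resource_id = (
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Storage/storageAccounts/{account}"
    )
    query = urlencode({
        "api-version": "2018-01-01",
        "metricnames": ",".join(metric_names),
    })
    return f"{ARM}{resource_id}/providers/microsoft.insights/metrics?{query}"

url = storage_metrics_url("00000000-0000-0000-0000-000000000000",
                          "myresourcegroup", "mystorageaccount",
                          ["UsedCapacity", "Transactions"])
print(url)
```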

Meanwhile, the previous metrics become classic metrics and are still supported. The following screenshot shows the transition experience: Alerts and Metrics work on the new metrics, while Alerts (classic), Metrics (classic), Diagnostic settings




AzCopy on Linux now generally available

Today we are announcing the general availability release of AzCopy on Linux. AzCopy is a command line data transfer utility designed to move large amounts of data to and from Azure Storage with optimal performance. It is designed to handle transient failures with automatic retries, as well as to provide a resume option for failed transfers. This general availability release includes new and enhanced features, as well as performance improvements thanks to the feedback we received during the Preview.
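AzCopy’s own retry and resume machinery is internal to the tool, but the general idea of retrying transient failures with exponential backoff while resuming from the last committed offset can be sketched as follows. This is a conceptual illustration, not AzCopy code; the names, chunk sizes, and the simulated flaky source are all made up.

```python
import time

class TransientError(Exception):
    pass

def transfer_with_retries(read_chunk, total_size, chunk_size=4,
                          max_retries=3, base_delay=0.0):
    """Copy `total_size` bytes chunk by chunk, retrying each chunk on
    transient failures and resuming from the last committed offset."""
    offset = 0
    while offset < total_size:
        for attempt in range(max_retries + 1):
            try:
                read_chunk(offset, min(chunk_size, total_size - offset))
                break
            except TransientError:
                if attempt == max_retries:
                    raise  # give up; a resume would restart from `offset`
                time.sleep(base_delay * 2 ** attempt)  # exponential backoff
        offset += chunk_size
    return offset

# Simulated source that fails once at offset 8, then succeeds.
failures = {8: 1}
def flaky_read(offset, size):
    if failures.get(offset, 0) > 0:
        failures[offset] -= 1
        raise TransientError(f"timeout at offset {offset}")

print(transfer_with_retries(flaky_read, total_size=12))  # 12
```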

You can get started with the latest AzCopy release following the documentation.

What’s new?

Throughput improvements up to 3X

Investments in performance and the move to .NET Core 2.1 have boosted AzCopy throughput significantly. In our tests, we have seen up to a three-times improvement in throughput when transferring multiple large files, and up to a two-times improvement in scenarios where millions of small files are transferred.

Easy installation

AzCopy now bundles .NET Core 2.1, eliminating the need to install .NET Core manually as a prerequisite. You can now extract the AzCopy package and start using it. You might, however, need to install the .NET Core dependencies on some Linux distributions. Please consult the documentation for the




OS Disk Swap for Managed Virtual Machines now available

Today, we are excited to announce the availability of the OS Disk Swap capability for VMs using Managed Disks. Until now, this capability was only available for Unmanaged Disks.

With this capability, it becomes very easy to restore a previous backup of the OS disk, or to swap out the OS disk for VM troubleshooting, without having to delete the VM. To use this capability, the VM needs to be in the stop-deallocated state. After the VM is stop-deallocated, the resource ID of the existing managed OS disk can be replaced with the resource ID of the new managed OS disk. You will need to specify the name of the new disk to swap in. Please note that you cannot switch the OS type of the VM, i.e., you cannot swap a Linux OS disk for a Windows OS disk.

Here are the instructions on how to leverage this capability:

Azure CLI

To read more about using Azure CLI, see Change the OS disk used by an Azure VM using the CLI.

For the CLI, pass the full resource ID of the new disk to the --os-disk parameter.

NOTE: requires Azure CLI version 2.0.25 or later.

az vm update -g swaprg -n <vm-name> --os-disk <new-os-disk-resource-id>




Storage scenarios for Cray in Azure

When you get a dedicated Cray supercomputer on your Azure virtual network, you also get attached Cray® ClusterStor™ storage. This is a great solution for the high-performance storage you need while running jobs on the supercomputer. But what happens when the jobs are done? That depends on what you’re planning to do. Azure has a broad portfolio of storage products and solutions.


Many times, you’re using your Cray supercomputer as part of a multi-stage workflow. In the weather forecasting scenario we wrote about, once the modeling is done, it’s time to generate forecast products. The most familiar setup for most HPC administrators would be to attach Azure Disks to a virtual machine and run a central file server, or a fleet of Lustre servers.

But if your post-processing workload can be updated to use object storage, you get another option. Azure Blob Storage is our object storage solution. It provides secure, scalable storage for cloud-native workloads, and allows your jobs to run at large scale without having to manage file servers.

Our recent acquisition of Avere Systems will bring another option for high-performance file systems. Avere’s technology will also enable hybrid setups, allowing you to move your data between on-premises and




Azure Service Fabric – announcing Reliable Services on Linux and RHEL support

Many customers are using Azure Service Fabric to build and operate always-on, highly scalable, microservice applications. Recently, we open sourced Service Fabric under the MIT license to increase opportunities for customers to participate in the development and direction of the product. Today, we are excited to announce the release of Service Fabric runtime v6.2 and corresponding SDK and tooling updates.

This release includes:

- The general availability of Java and .NET Core Reliable Services and Actors on Linux
- Public preview of Red Hat Enterprise Linux clusters
- Enhanced container support
- Improved monitoring and backup/restore capabilities

The updates will be available in all regions over the next few days, and details can be found in the release notes.

Reliable Services and Reliable Actors on Linux are generally available

Reliable Services and Reliable Actors are programming models to help developers build stateless and stateful microservices for new applications and for adding new microservices to existing applications. Now you can use your preferred language to build Reliable Services and Actors with the Service Fabric API using .NET Core 2.0 and Java 8 SDKs on Linux. 

You can learn more about this capability through Java Quickstarts and .NET Core Samples.

Red Hat Enterprise Linux clusters in public preview

Azure Service Fabric clusters