This is a continuation of our customer success story blog series for Azure Backup. In the previous case study we covered Russell Reynolds; here we will discuss how Somerset County Council in the United Kingdom improved its backup and restore efficiency and reduced its backup costs using Azure Backup.
Somerset County Council in the United Kingdom provides government services to the county's 550,000 residents. It is one of the oldest local governments in the world, established around 700 A.D. Somerset had been using an in-house storage manager platform for on-premises data backup and restore.
“The biggest problems we had were with flexibility and scalability. We had racks and racks of disks, and we had to wait a long time to get new hardware. The complexities with the product itself also introduced many challenges,” says Dean Cridland, Senior IT Officer at Somerset County Council. In addition, as the data footprint grew, IT staff struggled to meet their daily backup SLAs. So, they were looking for a modern backup solution that could scale with their ever-growing data footprint, meet their backup SLAs, and align with their strategy of moving to the cloud.
How Azure Backup helped
Somerset deployed Azure Backup Server to
During the last few months, I’ve spoken with a lot of Azure customers, both in person and online, about how to prepare for the May 25, 2018 deadline for compliance with the EU’s General Data Protection Regulation (GDPR). The GDPR imposes new rules on companies, government agencies, non-profits, and other organizations that offer goods and services to people in the European Union (EU), or that collect and analyze data tied to EU residents. The GDPR applies no matter where you are located. The GDPR will dramatically shift the landscape for data collection and analysis, since under the GDPR, many practices that were commonplace will be forbidden, and companies must take care in assessing their exposure and how to comply.
I recently participated in a Microsoft series of webinars about the GDPR and its implications for IT teams and cloud computing. We received a lot of questions from the audience in these webinars, so I thought I would answer some of the most frequently asked ones here, along with links to the on-demand webinars.
Q: Does the GDPR allow me to send data outside the EU?
A: GDPR applies globally, so no matter
Azure Storage metrics in Azure Monitor, which was previously in public preview, is now generally available.
Azure Monitor is the platform service that provides a single source of monitoring data for Azure resources. With Azure Monitor, you can visualize, query, route, archive, and take action on the metrics and logs coming from resources in Azure. You can work with the data using the Monitor portal blade, the Azure Monitor Software Development Kits (SDKs), and through several other methods. Azure Storage is one of the fundamental services in Azure, and now you can chart and query storage metrics alongside other metrics in one consolidated view. For more information on how Azure Storage metrics are defined, see the documentation.
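As an illustrative sketch (the subscription, resource group, and account names below are hypothetical placeholders, not from this post), the new metrics can be queried with the Azure CLI's `az monitor metrics list` command. This snippet builds the command for a storage account's account-level Transactions metric:

```shell
# All names here are hypothetical placeholders; substitute your own.
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP="myresourcegroup"
ACCOUNT="mystorageaccount"
RESOURCE_ID="/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Storage/storageAccounts/$ACCOUNT"

# In an authenticated Azure CLI session, remove the echo to query the
# Transactions metric from Azure Monitor:
CMD="az monitor metrics list --resource $RESOURCE_ID --metric Transactions"
echo "$CMD"
```

The same resource ID can be reused with other storage metrics (for example, Ingress or Egress) by changing the `--metric` argument.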
The features built on top of metrics are available differently per cloud:
Azure Monitor SDK (REST, .NET, Java, and CLI): Available in all clouds
Metric chart: Available in Public Cloud, and coming soon in Sovereign Clouds
Alert: Available in Public Cloud, and coming soon in Sovereign Clouds
Meanwhile, the previous metrics become classic and are still supported. The following screenshot shows the transition experience: Alerts and Metrics work on the new metrics, while Alerts (classic), Metrics (classic), Diagnostic settings
Today we are announcing the general availability release of AzCopy on Linux. AzCopy is a command line data transfer utility designed to move large amounts of data to and from Azure Storage with optimal performance. It is designed to handle transient failures with automatic retries, as well as to provide a resume option for failed transfers. This general availability release includes new and enhanced features, as well as performance improvements thanks to the feedback we received during the Preview.
You can get started with the latest AzCopy release following the documentation.
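As a hedged sketch of basic usage (the directory, account, container, and key below are placeholders, not from this post), the Linux build of AzCopy takes `--source` and `--destination` arguments; this snippet builds a recursive upload command:

```shell
# Hypothetical values; substitute your own directory, account, and key.
SOURCE_DIR="/mnt/myfiles"
DEST_URL="https://myaccount.blob.core.windows.net/mycontainer"
DEST_KEY="<storage-account-key>"

# Remove the echo to perform the upload; AzCopy retries transient
# failures automatically, and failed transfers can be resumed.
CMD="azcopy --source $SOURCE_DIR --destination $DEST_URL --dest-key $DEST_KEY --recursive"
echo "$CMD"
```

Consult the AzCopy documentation for the full list of options, as flags may differ between releases.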
What’s new?
Throughput improvements of up to 3x
Investments in performance improvements and leveraging .NET Core 2.1 have boosted AzCopy throughput significantly. In our tests, we have seen up to a three-fold throughput improvement when transferring multiple large files, and up to a two-fold improvement in scenarios where millions of small files are transferred.
AzCopy now packages .NET Core 2.1, eliminating the need to install .NET Core manually as a prerequisite. You can now extract the AzCopy package and start using it. You might, however, need to install the .NET Core dependencies on some Linux distributions. Please consult the documentation for the
Today, we are excited to announce the availability of the OS Disk Swap capability for VMs using Managed Disks. Until now, this capability was only available for Unmanaged Disks.
With this capability, it becomes very easy to restore a previous backup of the OS Disk, or to swap out the OS Disk for VM troubleshooting, without having to delete the VM. To leverage this capability, the VM needs to be in a stopped (deallocated) state. After the VM is deallocated, the resource ID of the existing Managed OS Disk can be replaced with the resource ID of the new Managed OS Disk. You will need to specify the name of the new disk to swap in. Please note that you cannot switch the OS type of the VM, i.e., swap a Linux OS Disk for a Windows OS Disk.
Here are the instructions on how to leverage this capability:
To read more about using Azure CLI, see Change the OS disk used by an Azure VM using the CLI.
For the CLI, pass the full resource ID of the new disk to the --os-disk parameter.

NOTE: requires Azure CLI version 2.0.25 or later

az vm update -g swaprg -n myVM --os-disk $newDiskId

(Here myVM and $newDiskId are placeholders: substitute your VM name and the full resource ID of the new Managed OS Disk.)
When you get a dedicated Cray supercomputer on your Azure virtual network, you also get attached Cray® ClusterStor™ storage. This is a great solution for the high-performance storage you need while running jobs on the supercomputer. But what happens when the jobs are done? That depends on what you’re planning to do. Azure has a broad portfolio of storage products and solutions.
Many times, you’re using your Cray supercomputer as part of a multi-stage workflow. Using the weather forecasting scenario we wrote about, after the modeling is done, it’s time to generate products. The most familiar setup for most HPC administrators would be to attach Azure Disks to a virtual machine and run a central file server or a fleet of Lustre servers.
But if your post-processing workload can be updated to use object storage, you get another option. Azure Blob Storage is our object storage solution. It provides secure, scalable storage for cloud-native workloads. This allows your jobs to run at large scale without having to manage file servers.
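To illustrate (a sketch only; the account, container, and directory names are hypothetical), post-processing output could be pushed to Blob Storage in bulk with the Azure CLI's `az storage blob upload-batch` command:

```shell
# Hypothetical names throughout; substitute your own.
ACCOUNT="mystorageaccount"
CONTAINER="forecast-products"
OUTPUT_DIR="/data/products"

# In an authenticated Azure CLI session, remove the echo to upload
# every file under OUTPUT_DIR into the container:
CMD="az storage blob upload-batch --account-name $ACCOUNT --destination $CONTAINER --source $OUTPUT_DIR"
echo "$CMD"
```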
Our recent acquisition of Avere Systems will bring another option for high-performance file systems. Avere’s technology will also enable hybrid setups, allowing you to move your data between on-premises and
Many customers are using Azure Service Fabric to build and operate always-on, highly scalable, microservice applications. Recently, we open sourced Service Fabric with the MIT license to increase opportunities for customers to participate in the development and direction of the product. Today, we are excited to announce the release of Service Fabric runtime v6.2 and corresponding SDK and tooling updates.
This release includes:
The general availability of Java and .NET Core Reliable Services and Actors on Linux
Public preview of Red Hat Enterprise clusters
Enhanced container support
Improved monitoring and backup/restore capabilities
The updates will be available in all regions over the next few days and details can be found in the release notes.
Reliable Services and Reliable Actors on Linux are generally available
Reliable Services and Reliable Actors are programming models to help developers build stateless and stateful microservices for new applications and for adding new microservices to existing applications. Now you can use your preferred language to build Reliable Services and Actors with the Service Fabric API using .NET Core 2.0 and Java 8 SDKs on Linux.
Red Hat Enterprise clusters in public preview
Azure Service Fabric clusters
We are happy to announce Azure IaaS VM backup support for network-restricted storage accounts. With storage firewalls and virtual networks, you can allow traffic only from selected virtual networks and subnets, creating a secure network boundary for the unmanaged disks in your storage accounts. You can also grant access to on-premises networks and other trusted internet traffic by using network rules based on IP address ranges. With this announcement, users can perform scheduled and ad hoc IaaS VM backups and restores for these VNet-configured storage accounts.
After you configure firewall and virtual network settings for your storage account, select Allow trusted Microsoft services to access this storage account as an exception to enable Azure Backup service to access the network restricted storage account.
This network-focused feature gives customers a seamless experience by defining network-access-based security. It ensures that only requests coming from approved Azure VNets or specified public IP ranges are allowed to reach a specific storage account, making the account more secure and helping fulfill an organization's compliance requirements.
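The portal steps above can also be scripted. As a sketch (the resource names are placeholders), the Azure CLI can lock an account down to deny traffic by default and then exempt trusted Microsoft services, such as Azure Backup, from the firewall:

```shell
# Hypothetical names; replace with your own.
RESOURCE_GROUP="myresourcegroup"
ACCOUNT="mystorageaccount"

# In an authenticated Azure CLI session, remove the echoes to apply:
# 1) deny traffic by default, 2) allow trusted Microsoft services through.
CMD1="az storage account update -g $RESOURCE_GROUP -n $ACCOUNT --default-action Deny"
CMD2="az storage account update -g $RESOURCE_GROUP -n $ACCOUNT --bypass AzureServices"
echo "$CMD1"
echo "$CMD2"
```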
Related links and additional content Learn more about
Today, we are delighted to announce increased scale limits for Azure Backup. Users can now create as many as 500 Recovery Services vaults per subscription per region, up from the earlier limit of 25. Customers who had been hitting the 25-vault limit can now create additional vaults to manage their resources better. In addition, the number of Azure virtual machines that can be registered against each vault has been increased from 200 to 1,000.
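To illustrate how the higher limits make per-department vaults practical (a sketch only; the department, resource group, and region names are hypothetical), vaults could be created with the Azure CLI in a loop:

```shell
# Hypothetical departments; each gets its own vault for granular
# reporting and vault-level billing.
RESOURCE_GROUP="myresourcegroup"
LOCATION="westus2"
DEPARTMENTS="finance hr engineering"

CREATED=""
for dept in $DEPARTMENTS; do
  # In an authenticated Azure CLI session, remove the echo to create the vault:
  CMD="az backup vault create -g $RESOURCE_GROUP -n ${dept}-vault -l $LOCATION"
  echo "$CMD"
  CREATED="$CREATED ${dept}-vault"
done
```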
Key benefits

Better management of resources between departments in an organization: Flexibility to create a large number of vaults under a subscription, and a large number of containers under a vault, based on departmental requirements without worrying about hitting vault limits.

Better granularity in reporting and monitoring of data within vaults: Users can create separate vaults segregated by organizational needs and get more granular reporting of backup usage on a per-vault basis.

Systematic and comprehensive billing: Users can get vault-level detailed billing for a subscription for better financial management within an organization.

Related links and
Today we’re letting our customers know about our upcoming Data Subject Request (DSR) processing capability in the Azure portal, which will provide tenant admins a simple, powerful tool to quickly fulfill the Data Subject Requests that are central to compliance with the European Union General Data Protection Regulation (GDPR). We will fully support these DSR capabilities before May 25, 2018, the date when enforcement of the GDPR begins and when Microsoft has committed to be GDPR compliant across our cloud services.
The GDPR is the most significant change to EU privacy law in two decades and sets a new global standard for privacy rights, governing the handling and use of personal data. A fundamental tenet of the GDPR is the set of rights it grants individuals, or data subjects, in connection with their personal data collected by an organization (known as the data controller).
If your organization collects, hosts, or analyzes the personal data of EU residents, GDPR provisions require you to use data processors that guarantee their ability to implement the technical and organizational requirements of the GDPR. The GDPR also requires you to respond to requests from individuals, or data subjects, to receive a copy of their personal