Category Archives: Storage, Backup & Recovery

29 Mar

Azure Search – New Storage Optimized service tiers available in preview

Azure Search is an AI-powered cloud search service for modern mobile and web app development. It is the only cloud search service with built-in artificial intelligence (AI) capabilities that enrich all types of information to easily identify and explore relevant content at scale. It uses the same integrated Microsoft natural language stack as Bing and Office, plus prebuilt AI APIs across vision, language, and speech. With Azure Search, you spend more time innovating on your websites and applications, and less time maintaining a complex search solution.

Today we are announcing the preview of two new service tiers for Storage Optimized workloads in Azure Search. These L-Series tiers offer significantly more storage at a reduced cost per terabyte compared to the Standard tiers. They are ideal for solutions with a large amount of index data and lower query volume throughout the day, such as internal applications searching over large file repositories, archival scenarios where business data goes back many years, or e-discovery applications.
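
To make this concrete, here is a minimal sketch of provisioning a service on the new L1 tier with the Python management libraries. It assumes the azure-mgmt-search and azure-identity packages; the subscription ID, resource group, service name, and region are placeholders, and the exact method and SKU names ("storage_optimized_l1") should be verified against the SDK version you install.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.search import SearchManagementClient
    from azure.mgmt.search.models import SearchService, Sku

    # Placeholder subscription ID; DefaultAzureCredential picks up az login, env vars, or a managed identity.
    client = SearchManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Request a Storage Optimized L1 service; resource group, name, and region are placeholders.
    service = client.services.begin_create_or_update(
        "my-resource-group",
        "my-search-service",
        SearchService(
            location="eastus",
            sku=Sku(name="storage_optimized_l1"),
            replica_count=1,
            partition_count=1,
        ),
    ).result()
    print(service.provisioning_state)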

Searching over all your content

From finding a product on a retail site to looking up an account within a business application, search services power a wide range of solutions with differing needs. While some scenarios …


28 Mar

Resource governance in Azure SQL Database

This blog post continues the Azure SQL Database architecture series where we share background on how we run the service, as described by the architects who originally created the service. The first two posts covered data integrity in Azure SQL Database and how cloud speed helps SQL Server database administrators. In this blog post, we will talk about how we use governance to help achieve a balanced system.

Allocated and governed resources

When you choose a specific Azure SQL Database service tier, you are selecting a pre-defined set of allocated resources across several dimensions such as CPU, storage type, storage limit, memory, and more. Ideally, you will select a service tier that meets the workload demands of your application; however, if you over- or under-size your selection, you can easily scale up or down accordingly.
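
As a concrete example, moving between compute sizes is a single T-SQL statement that can be issued while the database stays online. The sketch below is only illustrative: it assumes the pyodbc package and ODBC Driver 17 for SQL Server, and the server, credentials, and database name are placeholders.

    import pyodbc

    # Placeholder connection string pointing at the logical server's master database.
    conn_str = (
        "Driver={ODBC Driver 17 for SQL Server};"
        "Server=tcp:<your-server>.database.windows.net,1433;"
        "Database=master;"
        "Uid=<admin-user>;Pwd=<password>;Encrypt=yes;"
    )

    # Scale the database to the Business Critical Gen4 8-core compute size.
    # The operation runs asynchronously; the database remains online while it completes.
    with pyodbc.connect(conn_str, autocommit=True) as conn:
        conn.execute(
            "ALTER DATABASE [my-database] MODIFY (SERVICE_OBJECTIVE = 'BC_Gen4_8');"
        )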

With each service tier selection, you are also inherently selecting a set of resource usage boundaries or limits. For example, a Business Critical Gen4 database with eight cores has the following resource allocations and associated limits:

Compute size: BC_Gen4_8
Memory (GB): 56
In-memory OLTP storage (GB): 8
Storage type: Local SSD
Max data size (GB): 650
Max log size (GB): 195
TempDB size (GB): …


27 Mar

Azure Blob Storage lifecycle management generally available

Data sets have unique lifecycles. Some data is accessed often early in the lifecycle, but the need for access drops drastically as the data ages. Some data remains idle in the cloud and is rarely accessed once stored. Some data expires days or months after creation while other data sets are actively read and modified throughout their lifetimes.

Today we are excited to share the general availability of Blob Storage lifecycle management so that you can automate blob tiering and retention with custom-defined rules. This feature is available in all Azure public regions.

Lifecycle management

Azure Blob Storage lifecycle management offers a rich, rule-based policy that you can use to transition your data to the best access tier and to expire data at the end of its lifecycle.

Lifecycle management policy helps you:

Transition blobs to a cooler storage tier (hot to cool, hot to archive, or cool to archive) to optimize for performance and cost
Delete blobs at the end of their lifecycles
Define up to 100 rules
Run rules automatically once a day
Apply rules to containers or to a specific subset of blobs, with up to 10 prefixes per rule
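
As an illustration, the sketch below applies a policy with one rule using the Python management SDK. It assumes the azure-mgmt-storage and azure-identity packages; the resource group, account name, and prefix are placeholders, and the exact shape of the policy dictionary should be checked against the SDK version you install.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storage import StorageManagementClient

    client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # One rule that ages block blobs under the "logs/" prefix from hot to cool to archive,
    # and finally deletes them a year after their last modification.
    policy = {
        "policy": {
            "rules": [
                {
                    "name": "age-out-logs",
                    "enabled": True,
                    "type": "Lifecycle",
                    "definition": {
                        "filters": {
                            "blob_types": ["blockBlob"],
                            "prefix_match": ["logs/"],
                        },
                        "actions": {
                            "base_blob": {
                                "tier_to_cool": {"days_after_modification_greater_than": 30},
                                "tier_to_archive": {"days_after_modification_greater_than": 90},
                                "delete": {"days_after_modification_greater_than": 365},
                            }
                        },
                    },
                }
            ]
        }
    }

    # A storage account has a single management policy, always named "default".
    client.management_policies.create_or_update(
        "my-resource-group", "mystorageaccount", "default", policy
    )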

To learn more, visit our …


26 Mar

Blob storage interface on Data Box is now generally available

The blob storage interface on the Data Box has been in preview since September 2018, and we are happy to announce that it is now generally available. This is in addition to the server message block (SMB) and network file system (NFS) interfaces already generally available on the Data Box.

The blob storage interface allows you to copy data into the Data Box via REST. In essence, this interface makes the Data Box appear like an Azure storage account. Applications that write to Azure blob storage can be configured to work with the Azure Data Box in exactly the same way. 
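
For example, an application that already uses the Azure Storage Python SDK could write to the device with nothing more than a different endpoint and key. The sketch below is illustrative only: the account name, endpoint, key, container, and file names are placeholders (the real values come from the device's local web UI), and the device's certificate typically needs to be trusted on the client first.

    from azure.core.exceptions import ResourceExistsError
    from azure.storage.blob import BlobServiceClient

    # Placeholder Data Box blob endpoint and shared key; DNS for the device endpoint
    # and the device certificate must already be set up on this machine.
    service = BlobServiceClient(
        account_url="https://mystorageaccount.blob.<device-serial>.microsoftdatabox.com",
        credential={
            "account_name": "mystorageaccount",
            "account_key": "<data-box-access-key>",
        },
    )

    container = service.get_container_client("migrated-data")
    try:
        # Create the container on the device if it does not exist yet.
        container.create_container()
    except ResourceExistsError:
        pass

    # Copy a local file onto the device exactly as you would into Azure Blob storage.
    with open("local-file.parquet", "rb") as data:
        container.upload_blob(name="archive/local-file.parquet", data=data)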

This enables very interesting scenarios, especially for big data workloads. Migrating large HDFS stores to Azure as part of an Apache Hadoop® migration is a popular ask. Using the blob storage interface of the Data Box, you can now point common copy tools like DistCp directly at the Data Box and access it as though it were another HDFS file system! Since most Hadoop installations come pre-loaded with the Azure Storage driver, you will most likely not have to make changes to your existing infrastructure to use this capability. Another key benefit of migrating via the blob storage …


25 Mar

Azure Premium Block Blob Storage is now generally available

As enterprises accelerate cloud adoption and increasingly deploy performance-sensitive cloud-native applications, we are excited to announce the general availability of Azure Premium Blob Storage. Premium Blob Storage is a new performance tier in Azure Blob Storage for block blobs and append blobs, complementing the existing Hot, Cool, and Archive access tiers. Premium Blob Storage is ideal for workloads that require very fast response times and/or high transaction rates, such as IoT, telemetry, AI, and scenarios with humans in the loop such as interactive video editing, web content, online transactions, and more.

Premium Blob Storage provides lower and more consistent storage latency, delivering fast response times for both read and write operations across a range of object sizes, and it is especially good at handling smaller blob sizes. Your application should be deployed to compute instances in the same Azure region as the storage account to realize low latency end-to-end. For more details on performance, see "Premium Block Blob Storage – a new level of performance."

Figure 1 – Latency comparison of Premium and Standard Blob Storage
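
For reference, a premium block blob account is simply a storage account of kind BlockBlobStorage with a premium SKU. The sketch below shows one way to create one; it assumes the azure-mgmt-storage and azure-identity packages, and the subscription, resource group, account name, and region are placeholders.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storage import StorageManagementClient

    client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # "BlockBlobStorage" accounts with the Premium_LRS SKU are the premium block blob tier.
    account = client.storage_accounts.begin_create(
        "my-resource-group",
        "mypremiumblobacct",
        {
            "location": "eastus",
            "kind": "BlockBlobStorage",
            "sku": {"name": "Premium_LRS"},
        },
    ).result()
    print(account.primary_endpoints.blob)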

Premium Blob Storage is available with Locally-Redundant Storage (LRS) and comes with High-Throughput Block Blobs (HTBB), which provides very high and …


25 Mar

Azure Storage support for Azure Active Directory based access control generally available

We are pleased to share the general availability of Azure Active Directory (AD) based access control for Azure Storage Blobs and Queues. Enterprises can now grant specific data access permissions to users and service identities from their Azure AD tenant using Azure role-based access control (RBAC). Administrators can then track individual user and service access to data using Storage Analytics logs. Storage accounts can be made more secure by removing the need for most users to have access to powerful storage account access keys.

By leveraging Azure AD to authenticate users and services, enterprises gain access to the full array of capabilities that Azure AD provides, including features like two-factor authentication, conditional access, identity protection, and more. Azure AD Privileged Identity Management (PIM) can also be used to assign roles “just-in-time” and reduce the security risk of standing administrative access.

In addition, developers can use Managed identities for Azure resources to deploy secure Azure Storage applications without having to manage application secrets.
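
As a brief illustration, the sketch below accesses blob data with an Azure AD identity instead of an account key. It assumes the azure-identity and azure-storage-blob packages; the account URL and container name are placeholders, and the identity needs an appropriate data-plane role assignment (for example, Storage Blob Data Reader) on the account or container.

    from azure.identity import DefaultAzureCredential
    from azure.storage.blob import BlobServiceClient

    # DefaultAzureCredential uses a managed identity when running in Azure and
    # falls back to developer credentials locally; no storage account keys are involved.
    credential = DefaultAzureCredential()
    service = BlobServiceClient(
        account_url="https://mystorageaccount.blob.core.windows.net",
        credential=credential,
    )

    # List blobs in a placeholder container using the Azure AD token for authorization.
    container = service.get_container_client("reports")
    for blob in container.list_blobs():
        print(blob.name)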

When Azure AD authentication is combined with the new Azure Data Lake Storage Gen2 capabilities, users can also take advantage of granular file and folder access control using POSIX-style access permissions and access control lists.


18 Mar

Azure Backup for SQL Server in Azure Virtual Machines now generally available!

How do you back up your SQL Servers today? You could be using backup software that requires you to manage backup servers, agents, and storage, or you could be writing elaborate custom scripts that require you to manage the backups on each server individually. With the modernization of IT infrastructure and the world rapidly moving to the cloud, do you want to continue using legacy backup methods that are tedious, infrastructure-heavy, and difficult to scale? Azure Backup for SQL Server Virtual Machines (VMs) is the modern way of doing backup in the cloud, and we are excited to announce that it is now generally available! It is an enterprise-scale, zero-infrastructure solution that eliminates the need to deploy and manage backup infrastructure while providing a simple and consistent experience to centrally manage and monitor backups on standalone SQL Server instances and Always On Availability Groups.

 

Built into Azure, the solution combines the core cloud promises of simplicity, scalability, security, and cost effectiveness with native SQL backup capabilities, leveraged through native APIs, to yield high-fidelity backups and restores. The key value propositions of this solution are:

15-minute Recovery Point Objective (RPO): Working with uber critical …


13 Mar

Simplify disaster recovery with Managed Disks for VMware and physical servers

Azure Site Recovery (ASR) now supports disaster recovery of VMware virtual machines and physical servers by replicating directly to Managed Disks. Beginning in March 2019, all new protections have this capability available in the Azure portal. To enable replication for a machine, you no longer need to create storage accounts; you can now write replication data directly to a Managed Disk. The choice of Managed Disk type should be based on the data change rate of your source disks. The available options are Standard HDD, Standard SSD, and Premium SSD.

Please note that this change will not impact machines that are already in a protected state; they will continue to replicate to storage accounts. However, you can still choose to use Managed Disks at the time of failover by updating the settings in the Compute and Network blade.

There are several benefits to writing to Managed Disks:

Hassle-free management of capacity on Microsoft Azure: You no longer need to track and manage multiple target storage accounts. ASR creates the replica disks when replication is enabled, and an Azure Managed Disk is created for every on-premises virtual machine (VM) disk. This is …


12 Mar

Stay informed about service issues with Azure Service Health

When your Azure resources go down, one of your first questions is probably, “Is it me or is it Azure?” Azure Service Health helps you stay informed and take action when Azure service issues like incidents and planned maintenance affect you by providing a personalized health dashboard, customizable alerts, and expert guidance.

In this blog, we’ll cover how you can use Azure Service Health’s personalized dashboard to stay informed about issues that could affect you now or in the future.

Monitor Azure service issues and take action to mitigate downtime

You may already be familiar with the Azure status page, a global view of the health of all Azure services across all Azure regions. It’s a good reference for major incidents with widespread impact, but we recommend using Azure Service Health to stay informed about Azure incidents and maintenance. Azure Service Health only shows issues that affect you, provides information about all incidents and maintenance, and has richer capabilities like alerting, shareable updates and RCAs, and other guidance and support.
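
As one example of the alerting capability, a Service Health alert is an activity log alert scoped to the ServiceHealth event category. The sketch below is only a rough outline using the azure-mgmt-monitor and azure-identity packages; the subscription, resource group, alert name, and action group ID are placeholders, and the operation name and payload shape should be confirmed against the SDK version you install.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.monitor import MonitorManagementClient

    client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Create an activity log alert that fires on ServiceHealth events for the whole
    # subscription and notifies an existing action group (placeholder resource ID).
    alert = client.activity_log_alerts.create_or_update(
        "my-resource-group",
        "service-health-alert",
        {
            "location": "Global",
            "scopes": ["/subscriptions/<subscription-id>"],
            "condition": {
                "all_of": [
                    {"field": "category", "equals": "ServiceHealth"},
                ]
            },
            "actions": {
                "action_groups": [
                    {"action_group_id": "<action-group-resource-id>"}
                ]
            },
            "enabled": True,
            "description": "Notify on Azure service issues, planned maintenance, and health advisories.",
        },
    )
    print(alert.name)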

Azure Service Health tracks three types of health events that may impact you:

Service issues: Problems in Azure services that affect you right now.
Planned maintenance: Upcoming maintenance that …


12 Mar

AzCopy support in Azure Storage Explorer now available in public preview

We are excited to share the public preview of AzCopy in Azure Storage Explorer. AzCopy is a popular command-line utility that provides performant data transfer into and out of a storage account. The new version of AzCopy further enhances performance and reliability through a scalable design, where concurrency is scaled up according to the machine's number of logical cores. The tool's resiliency is also improved through automatic retries.
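
To illustrate the idea of scaling concurrency with the logical core count (AzCopy itself is a separate tool, so this is only a rough sketch of the pattern rather than how AzCopy is implemented), the example below parallelizes uploads with the azure-storage-blob package; the container SAS URL and local folder are placeholders.

    import os
    from concurrent.futures import ThreadPoolExecutor
    from pathlib import Path

    from azure.storage.blob import ContainerClient

    # Placeholder container URL with a SAS token granting write access.
    container = ContainerClient.from_container_url(
        "https://mystorageaccount.blob.core.windows.net/backups?<sas-token>"
    )

    def upload(path: Path) -> None:
        # The SDK's default retry policy handles transient failures.
        with path.open("rb") as data:
            container.upload_blob(name=path.name, data=data, overwrite=True)

    # Size the worker pool to the machine's logical core count, mirroring the
    # concurrency idea described above.
    files = [p for p in Path("./to-upload").glob("*") if p.is_file()]
    with ThreadPoolExecutor(max_workers=os.cpu_count() or 4) as pool:
        list(pool.map(upload, files))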

Azure Storage Explorer provides a graphical interface for various storage tasks, and it now supports using AzCopy as a transfer engine to provide the highest throughput for transferring your files to and from Azure Storage. This capability is available today as a preview in Azure Storage Explorer.

Enable AzCopy for blob upload and download

We have heard from many of you that the performance of your data transfer matters. Let’s be honest, we all have better things to do than wait around for files to be transferred to Azure. Now with AzCopy in Azure Storage Explorer, we give you all that time back!

With the AzCopy preview, blob operations will be faster than before. To enable this option, go to the Preview menu and select Use AzCopy for improved blob Upload and …
