Category Archives: Storage, Backup & Recovery

03 Dec

Extended filesystem programming capabilities in Azure Data Lake Storage

Since the general availability of Azure Data Lake Storage Gen2 in February 2019, customers have been getting insights at cloud scale faster than ever before. Integration with analytics engines is critical for their analytics workloads, and equally important is the ability to programmatically ingest, manage, and analyze data. This ability underpins key areas of enterprise data lakes such as data ingestion, event-driven big data platforms, machine learning, and advanced analytics. Programmatic access is possible today using Azure Data Lake Storage Gen2 REST APIs or Blob REST APIs. In addition, customers can enable continuous integration and continuous delivery (CI/CD) pipelines using Blob PowerShell and CLI capabilities via multi-protocol access. As part of the journey to enable our developer ecosystem, our goal is to make customer application development easier than ever before.

We are excited to announce the public preview of .NET SDK, Python SDK, Java SDK, PowerShell, and CLI for filesystem operations for Azure Data Lake Storage Gen2. Customers who are used to the familiar filesystem programming model can now implement this model using the .NET, Python, and Java SDKs. Customers can also now incorporate these filesystem operations into their CI/CD pipelines using PowerShell and CLI, thereby enriching their CI/CD pipelines.
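As a quick illustration of the filesystem programming model, here is a minimal sketch using the preview Python SDK (azure-storage-file-datalake); the account name, key, and path names are placeholders:

```python
# A minimal sketch of the filesystem model with the preview Python SDK
# (azure-storage-file-datalake). Account name, key, and paths are placeholders.
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<account>.dfs.core.windows.net",
    credential="<account-key>",
)

# Create a filesystem (the Gen2 analogue of a container), a directory, a file.
fs = service.create_file_system("myfilesystem")
directory = fs.create_directory("raw/sensor-data")
file_client = directory.create_file("readings.csv")

# Append data, then flush to commit the file contents.
data = b"device,reading\nsensor-1,42\n"
file_client.append_data(data, offset=0, length=len(data))
file_client.flush_data(len(data))

# List paths under a directory, a filesystem-style operation.
for path in fs.get_paths(path="raw"):
    print(path.name)
```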


02 Dec

SAP HANA backup using Azure Backup is now generally available

Today, we are sharing the general availability of Microsoft Azure Backup’s solution for SAP HANA databases in the UK South region.

Azure Backup is Azure’s native backup solution, which is BackInt certified by SAP. This offering aligns with Azure Backup’s mantra of zero-infrastructure backups, eliminating the need to deploy and manage backup infrastructure. You can now seamlessly back up and restore SAP HANA databases running on Microsoft Azure Virtual Machines (VMs), including M-series VMs, and leverage the enterprise management capabilities that Azure Backup provides.

Benefits

15-minute Recovery Point Objective (RPO): Critical data can be recovered with at most 15 minutes of data loss.

One-click, point-in-time restores: Easily restore production data on SAP HANA databases to alternate servers. Chaining of backups and catalogs to perform restores is all managed by Azure behind the scenes.

Long-term retention: For rigorous compliance and audit needs, you can retain your backups for years; beyond the retention duration, recovery points are pruned automatically by the built-in lifecycle management capability.

Backup management from Azure: Use Azure Backup’s management and monitoring capabilities for an improved management experience.

Watch this space for more updates on GA rollout to other regions. We are currently


26 Nov

Multi-protocol access on Data Lake Storage now generally available

We are excited to announce the general availability of multi-protocol access for Azure Data Lake Storage. Azure Data Lake Storage is a unique cloud storage solution for analytics that offers multi-protocol access to the same data. This is a no-compromise solution that allows both the Azure Blob Storage API and Azure Data Lake Storage API to access data on a single storage account. You can store all your different types of data in one place, which gives you the flexibility to make the best use of your data as your use case evolves. The general availability of multi-protocol access creates the foundation to enable object storage capabilities on Data Lake Storage. This brings together the best of both object storage and Hadoop Distributed File System (HDFS) to enable scenarios that were not possible until today without copying data.
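For instance, with multi-protocol access the same object can be written through the Blob endpoint and read back through the Data Lake Storage (DFS) endpoint. A minimal sketch with the Python SDKs, where the account name, key, container, and path are placeholders:

```python
# Write through the Blob API, read through the Data Lake Storage API.
# Account name, key, container, and path are placeholders.
from azure.storage.blob import BlobServiceClient
from azure.storage.filedatalake import DataLakeServiceClient

account, key = "<account>", "<account-key>"

# Object storage protocol: upload via the Blob endpoint.
blobs = BlobServiceClient(f"https://{account}.blob.core.windows.net", credential=key)
blobs.get_blob_client("data", "logs/day1.txt").upload_blob(b"hello", overwrite=True)

# Filesystem protocol: read the same data back via the DFS endpoint.
dfs = DataLakeServiceClient(f"https://{account}.dfs.core.windows.net", credential=key)
print(dfs.get_file_client("data", "logs/day1.txt").download_file().readall())
```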

Broader ecosystem of applications and features

Multi-protocol access provides a powerful foundation to enable integrations and features for Data Lake Storage. Existing object storage applications and connectors can now be used to access data stored in Data Lake Storage with no changes. This vastly accelerates the integration of Azure services and the partner ecosystem with Data Lake Storage. We are also


21 Nov

Azure Backup support for SQL Server 2019 and Restore as files

As SQL Server 2019 continues to push the boundaries of availability, performance, and data intelligence, a centrally managed, enterprise-scale backup solution is imperative to ensure the protection of all that data. This is especially true if you are running the… Read the full post: https://azure.microsoft.com/blog/azure-backup-support-for-sql-server-2019-and-restore-as-files/


21 Nov

Change feed support now available in preview for Azure Blob Storage

Change feed support for Microsoft Azure Blob storage is now available in preview. Change feed provides a guaranteed, ordered, durable, read-only log of all the creation, modification, and deletion change events that occur to the blobs in your storage account. This log is stored as append blobs within your storage account, so you can manage data retention and access control based on your requirements.

Change feed is the ideal solution for bulk handling of large volumes of blob changes in your storage account, as opposed to periodically listing and manually comparing for changes. It enables cost-efficient recording and processing by providing programmatic access such that event-driven applications can simply consume the change feed log and process change events from the last checkpoint.

Some scenarios that would benefit from consuming a blob change feed include:

Bulk processing a group of newly uploaded files for virus scanning, resizing, or backups.

Storing, auditing, and analyzing changes to your objects over any period of time for data management or compliance.

Combining data uploaded by various IoT sensors into a single collection for data transformation and insights.

Additional data movement by synchronizing with a cache, search engine, or data warehouse.

How to get started
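One way to get started is sketched below with the preview azure-storage-blob-changefeed Python package. The account URL and key are placeholders, and the exact event fields may differ in the preview:

```python
# A minimal sketch using the preview azure-storage-blob-changefeed package.
from azure.storage.blob.changefeed import ChangeFeedClient

cf = ChangeFeedClient(
    "https://<account>.blob.core.windows.net", credential="<account-key>"
)

# Iterate change events page by page, persisting the continuation token as a
# checkpoint so an event-driven consumer can resume from its last position.
checkpoint = None  # on restart, load the previously saved token here
pages = cf.list_changes(results_per_page=100).by_page(continuation_token=checkpoint)
for page in pages:
    for event in page:
        # Each event describes one creation/modification/deletion of a blob.
        print(event["eventType"], event["subject"])
    checkpoint = pages.continuation_token  # save this between runs
```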


19 Nov

Azure high-performance computing at SC’19

HBv2 Virtual Machines for HPC, Azure’s most powerful yet, now in preview

Azure HBv2-series Virtual Machines (VMs) for high-performance computing (HPC) are now in preview in the South Central US region.

HBv2-series Virtual Machines are Azure’s most advanced HPC offering yet, featuring performance and Message Passing Interface scalability rivaling the most advanced supercomputers on the planet, and price and performance on par with on-premises HPC deployments.

HBv2 Virtual Machines are designed for a variety of real-world HPC applications, from fluid dynamics to finite element analysis, molecular dynamics, seismic processing & imaging, weather modeling, rendering, computational chemistry, and more.

Each HBv2 Virtual Machine features 120 AMD EPYC™ 7742 processor cores at 2.45 GHz (3.3 GHz Boost), 480 GB of RAM, 480 MB of L3 cache, and no simultaneous multithreading. An HBv2 Virtual Machine also provides up to 340 GB per second of memory bandwidth, up to four teraflops of double-precision compute, and up to eight teraflops of single-precision compute.

Finally, an HBv2 Virtual Machine features 900 GB of low-latency, high-bandwidth block storage via NVMeDirect, and supports up to eight Azure Managed Disks.
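As a back-of-the-envelope illustration, the quoted figures imply the following per-core bandwidth and machine balance; the numbers below are simply those stated above:

```python
# Derived ratios from the HBv2 figures quoted above.
cores = 120
mem_bw_gb_s = 340   # aggregate memory bandwidth, GB/s
fp64_gflops = 4000  # "up to four teraflops" of double-precision compute

print(f"Memory bandwidth per core: {mem_bw_gb_s / cores:.2f} GB/s")
print(f"Machine balance: {mem_bw_gb_s / fp64_gflops:.3f} bytes per FP64 FLOP")
```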

200 gigabit high data rate (HDR) InfiniBand comes to Azure

HBv2-series Virtual Machines feature one of the


19 Nov

Finastra “did not expect the low RPO” of Azure Site Recovery DR

Today’s question-and-answer style post comes after I had the chance to sit down with Bryan Heymann, Head of Cloud Architecture at Finastra, to discuss his experience with Azure Site Recovery. Finastra builds and deploys technology on its open software architecture, and our conversation focused on the organization’s journey to replace several disparate disaster recovery (DR) technologies with Azure Site Recovery. To learn more about achieving resilience in Azure, refer to this whitepaper.

 

You have been on Azure for a few years now – before we get too deep in DR, can you start with some context on the cloud transformation that you are going through at Finastra?

We think of our cloud journey across three horizons. Currently, we’re at “Horizon 0” – consolidating and migrating our core data centers to the cloud with a focus on embracing the latest technologies and reducing total cost of ownership (TCO). The workloads are a combination of production sites and internal employee sites.

Initially, we went through a 6-month review with a third party to identify our datacenter strategy, and decided to select a public cloud. Ultimately, we realized that Microsoft would be a solid partner to help us on our journey. We


13 Nov

Save more on Azure usage—Announcing reservations for six more services

With reserved capacity, you get significant discounts over your on-demand costs by committing to long-term usage of a service. We are pleased to share reserved capacity offerings for the following additional services. With the addition of these services, we now support reservations for 16 services, giving you more options to save and get better cost predictability across more workloads.

Blob Storage (GPv2) and Azure Data Lake Storage (Gen2)
Azure Database for MySQL
Azure Database for PostgreSQL
Azure Database for MariaDB
Azure Data Explorer
Premium SSD Managed Disks

Blob Storage (GPv2) and Azure Data Lake Storage (Gen2)

Save up to 38 percent on your Azure data storage costs by pre-purchasing reserved capacity for one or three years. Reserved capacity can be pre-purchased in increments of 100 TB and 1 PB sizes, and is available for hot, cool, and archive storage tiers for all applicable storage redundancies. You can also use the upfront or monthly payment option, depending on your cash flow requirements.

The reservation discount will automatically apply to data stored on Azure Blob (GPv2) and Azure Data Lake Storage (Gen2). Discounts are applied hourly on the total data stored in that hour. Unused reserved capacity doesn’t carry over.
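To make the hourly mechanics concrete, here is a toy model of the rule just described. The rates are invented placeholders, not actual Azure prices (real reservations are pre-purchased upfront or monthly rather than billed hourly); only the covered-versus-overage logic reflects the text above:

```python
# Toy model: the reservation discount covers up to the reserved amount in each
# hour; anything above it is billed at the pay-as-you-go rate, and unused
# reserved capacity does not carry over. Rates are hypothetical placeholders.
RESERVED_TB = 100
PAYG_RATE = 0.025       # $/TB-hour, made up
RESERVED_RATE = 0.0155  # $/TB-hour, made up (~38 percent lower)

for usage_tb in (80, 100, 120):  # data stored in three sample hours
    covered = min(usage_tb, RESERVED_TB)       # billed at the reserved rate
    overage = max(usage_tb - RESERVED_TB, 0)   # billed at pay-as-you-go
    cost = covered * RESERVED_RATE + overage * PAYG_RATE
    print(f"{usage_tb} TB stored this hour -> ${cost:.2f}")
```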

Storage reservations


29 Oct

Disaster recovery for SAP HANA systems on Azure

This blog will cover the design, technology, and recommendations for setting up disaster recovery (DR) for an enterprise customer, to achieve best-in-class recovery point objective (RPO) and recovery time objective (RTO) with an SAP S/4HANA landscape. This post… Read the full post: https://azure.microsoft.com/blog/disaster-recovery-for-sap-hana-systems-on-azure/


28 Oct

Customize networking for DR drills: Azure Site Recovery

One of the most important features of a disaster recovery tool is failover readiness. Administrators ensure this by watching for health signals from the product. Some also choose to set up their own monitoring solutions to track readiness. End-to-end testing is conducted using disaster recovery (DR) drills every three to six months. Azure Site Recovery offers this capability for replicated items, and customers rely heavily on test failovers or planned failovers to ensure that their applications work as expected. With Azure Site Recovery, customers are encouraged to use a non-production network for test failover so that IP addresses and networking components remain available in the target production network in case of an actual disaster. Even with a non-production network, the drill should be an exact replica of the actual failover.

Until now, it has been close to a replica. The networking configurations for test failover did not entirely match the failover settings. The choice of subnet, network security group, internal load balancer, and public IP address per network interface card (NIC) could not be made. This meant that customers had to follow a particular alphabetical naming convention for subnets in the test failover network to ensure the replicated items are
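Conceptually, the new capability amounts to a per-NIC record of test-failover networking choices. The sketch below is purely illustrative, a hypothetical data structure rather than the Azure Site Recovery API:

```python
# Hypothetical illustration only: the per-NIC test-failover settings that can
# now be chosen explicitly instead of being inferred from subnet naming.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestFailoverNicConfig:
    nic_name: str
    subnet: str
    network_security_group: Optional[str] = None
    internal_load_balancer: Optional[str] = None
    public_ip: Optional[str] = None

# Example drill configuration for a two-NIC application.
drill_network = [
    TestFailoverNicConfig("web-nic", subnet="test-frontend", public_ip="test-pip"),
    TestFailoverNicConfig("db-nic", subnet="test-backend",
                          network_security_group="test-nsg-db",
                          internal_load_balancer="test-ilb-db"),
]
```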
