Today, Tad Brockway, Corporate Vice President, Microsoft Azure, announced the general availability of Azure Ultra Disk Storage, an Azure Managed Disks offering that provides massive throughput with sub-millisecond latency for your most I/O-demanding workloads. With the introduction of Ultra Disk Storage, Azure includes four types of persistent disks—Ultra Disk Storage, Premium SSD, Standard SSD, and Standard HDD. This portfolio gives you price and performance options tailored to meet the requirements of every workload. Ultra Disk Storage delivers consistent performance and low latency for I/O-intensive workloads such as SAP HANA, OLTP databases, NoSQL, and other transaction-heavy workloads. Further, you can reach maximum virtual machine (VM) I/O limits with a single Ultra disk, without having to stripe multiple disks.
Durability of data is essential to business-critical enterprise workloads. To ensure we keep our durability promise, we built Ultra Disk Storage on our existing locally redundant storage (LRS) technology, which stores three copies of data within the same availability zone. Any application that writes to storage will receive an acknowledgement only after the data has been durably replicated to our LRS system.
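The write-acknowledgement behavior described above can be pictured as a small sketch: the write completes for the application only once all three replicas report a durable commit. The `Replica` class and commit logic below are purely illustrative, not Azure's actual replication protocol.

```python
# Toy model of LRS-style acknowledgement: a write is acknowledged only
# after all three replicas confirm a durable commit.
class Replica:
    def __init__(self):
        self.log = []

    def commit(self, data: bytes) -> bool:
        self.log.append(data)   # stand-in for a durable write to media
        return True

def write_with_lrs_ack(data: bytes, replicas) -> bool:
    results = [r.commit(data) for r in replicas]  # replicate to every copy
    return all(results)                           # ack only if 3/3 succeeded

replicas = [Replica(), Replica(), Replica()]
print(write_with_lrs_ack(b"record-1", replicas))  # True: all copies committed
```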
Below is a clip from a presentation I delivered at Microsoft Ignite demonstrating the leading performance of Ultra Disk Storage:
Today, we are announcing the general availability (GA) of Microsoft Azure Ultra Disk Storage—a new Managed Disks offering that delivers unprecedented, highly scalable performance with sub-millisecond latency for the most demanding Azure Virtual Machines and container workloads. With Ultra Disk Storage, customers are now able to lift and shift mission-critical enterprise applications to the cloud, including applications like SAP HANA, top-tier SQL databases such as SQL Server, Oracle DB, MySQL, and PostgreSQL, as well as NoSQL databases such as MongoDB and Cassandra. With the introduction of Ultra Disk Storage, Azure now offers four types of persistent disks—Ultra Disk Storage, Premium SSD, Standard SSD, and Standard HDD. This portfolio gives our customers a comprehensive set of disk offerings for every workload.
Ultra Disk Storage is designed to provide customers with extreme flexibility when choosing the right performance characteristics for their workloads. Customers now have granular control over the size, IOPS, and bandwidth of Ultra Disk Storage to meet their specific performance requirements. Organizations can achieve the maximum I/O limit of a virtual machine (VM) with Ultra Disk Storage without having to stripe multiple disks. Check out the blog post “Azure Ultra Disk Storage: Microsoft’s service for your most I/O demanding
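Because size, IOPS, and bandwidth are provisioned independently, a configuration still has to respect per-GiB and per-disk ratios. The sketch below checks a requested configuration against such limits; the specific numbers (300 IOPS per GiB, a 160,000 IOPS disk cap, 256 KiB/s of throughput per provisioned IOPS, and a 2,000 MB/s ceiling) are stated here as assumptions drawn from the documented GA limits, not from this post—verify them against current documentation.

```python
# Sketch: validate a requested Ultra Disk configuration against
# assumed provisioning ratios (illustrative, check current docs).
def validate_ultra_disk(size_gib: int, iops: int, mbps: int) -> bool:
    max_iops = min(300 * size_gib, 160_000)    # assumed 300 IOPS per GiB cap
    max_mbps = min(iops * 256 // 1024, 2_000)  # assumed 256 KiB/s per IOPS
    return iops <= max_iops and mbps <= max_mbps

# A 1 TiB disk provisioned at 80,000 IOPS and 1,200 MB/s fits these limits.
print(validate_ultra_disk(size_gib=1024, iops=80_000, mbps=1_200))  # True
```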
As part of our commitment to provide the most cost-effective storage offering, we’re excited to share that we have dropped Azure Archive Storage prices by up to 50 percent in some regions. The new pricing is effective immediately.
In 2017 we launched Azure Archive Storage to provide cloud storage for rarely accessed data with flexible latency requirements at an industry-leading price point. Since then we’ve seen both small and large customers from all industries utilize Archive Storage to significantly reduce their storage bill, improve data durability, and meet compliance requirements. Forrester Consulting interviewed four of these customers and conducted a commissioned Total Economic Impact™ (TEI) study to evaluate the value customers achieved by moving both on-premises data and existing data in the cloud to Archive Storage. Below are some of the highlights from that study.
112 percent return on investment (ROI). Forrester’s interviews with four existing customers and subsequent financial analysis found that a composite organization based on these interviewed organizations projects expected benefits of $296,941 over three years versus costs of $140,376, adding up to a net present value (NPV) of $156,565 and an ROI of 112 percent. The composite organization also reduced or eliminated more than $173,000 in operational and hardware expenses.
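The quoted ROI and NPV figures check out arithmetically. The snippet below verifies them, treating the quoted benefit and cost totals as already present-valued (as Forrester presents them in the study summary):

```python
# Verify the headline figures from the Forrester TEI study.
benefits = 296_941   # projected three-year benefits
costs = 140_376      # projected three-year costs

npv = benefits - costs            # quoted NPV: $156,565
roi = npv / costs * 100           # quoted ROI: 112 percent

print(npv)        # 156565
print(round(roi)) # 112
```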
Azure Virtual Network enables a flexible foundation for building advanced networking architectures. Managing heterogeneous environments with various types of filtering components, such as Azure Firewall or your favorite network virtual appliance (NVA), requires a little bit of planning.
Azure Bastion, which is currently in preview, is a fully managed platform as a service (PaaS) that provides secure and seamless remote desktop protocol (RDP) and secure shell (SSH) access to your virtual machines (VMs) directly through the Azure portal. Azure Bastion is provisioned directly in your virtual network, supporting all VMs attached without any exposure through public IP addresses.
When you deploy Azure Firewall, or any NVA, you typically force-tunnel all traffic from your subnets through it. Applying a 0.0.0.0/0 user-defined route can lead to asymmetric routing for ingress and egress traffic to your workloads in your virtual network.
To resolve this, you often find yourself creating and managing a growing set of network rules, including DNAT (destination NAT) and forwarding rules, for all your applications—a non-trivial task. Although this can affect any application, RDP and SSH are the most common examples. In this scenario, the ingress traffic from the Internet may come directly to your virtual machine within your
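The asymmetry comes from longest-prefix-match route selection: with only a default route pointing at the appliance, the return path for traffic that arrived directly from the Internet is forced through the NVA. The route table below is illustrative (hypothetical address ranges and next hops), but the selection logic mirrors how the effective route is chosen:

```python
# Sketch of longest-prefix-match route selection, showing why a
# 0.0.0.0/0 user-defined route captures all Internet-bound return
# traffic. Prefixes and next hops are illustrative.
import ipaddress

routes = {
    "10.0.0.0/16": "VnetLocal",       # intra-VNet traffic stays local
    "0.0.0.0/0":   "NVA (10.0.1.4)",  # UDR force-tunnels everything else
}

def next_hop(dest: str) -> str:
    matches = [p for p in routes
               if ipaddress.ip_address(dest) in ipaddress.ip_network(p)]
    best = max(matches, key=lambda p: ipaddress.ip_network(p).prefixlen)
    return routes[best]  # the most specific matching prefix wins

print(next_hop("10.0.2.5"))     # VnetLocal: east-west stays inside the VNet
print(next_hop("203.0.113.9"))  # NVA (10.0.1.4): reply to an Internet client
```

So a reply to an Internet client that reached the VM directly egresses via the NVA instead of the path the request took in—the asymmetric route described above.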
Cloud data lakes solve a foundational problem for big data analytics—providing secure, scalable storage for data that traditionally lives in separate data silos. Data lakes were designed from the start to break down data barriers and jump start big data analytics efforts. However, a final “silo busting” frontier remained, enabling multiple data access methods for all data—structured, semi-structured, and unstructured—that lives in the data lake.
Providing multiple data access points to shared data sets allows tools and data applications to interact with the data in their most natural way. Additionally, this allows your data lake to benefit from the tools and frameworks built for a wide variety of ecosystems. For example, you may ingest your data via an object storage API, process the data using the Hadoop Distributed File System (HDFS) API, and then load the transformed data into a data warehouse using an object storage API.
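Concretely, the same stored object can be addressed through the Blob (object) endpoint or through the HDFS-style ABFS driver that Hadoop and Spark use. The account and container names below are hypothetical; only the endpoint patterns matter:

```python
# The same Data Lake Storage object, addressed two ways.
ACCOUNT = "contosodatalake"   # hypothetical storage account
CONTAINER = "raw"             # hypothetical container / filesystem

def blob_url(path: str) -> str:
    # Object-storage view (Blob API endpoint)
    return f"https://{ACCOUNT}.blob.core.windows.net/{CONTAINER}/{path}"

def abfss_uri(path: str) -> str:
    # HDFS-style view (ABFS driver endpoint, used by Spark/Hadoop)
    return f"abfss://{CONTAINER}@{ACCOUNT}.dfs.core.windows.net/{path}"

print(blob_url("2019/07/events.json"))
print(abfss_uri("2019/07/events.json"))
```

An ingestion tool writing through the first URL and a Spark job reading through the second are operating on the same data—no copy between silos.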
Single storage solution for every scenario
We are very excited to announce the preview of multi-protocol access for Azure Data Lake Storage! Azure Data Lake Storage is a unique cloud storage solution for analytics that offers multi-protocol access to the same data. Multi-protocol access to the same data, via Azure Blob storage API
Azure Stream Analytics is a fully managed PaaS offering that enables real-time analytics and complex event processing on fast-moving data streams. Thanks to zero-code integration with over 15 Azure services, developers and data engineers can easily build complex pipelines for hot-path analytics within a few minutes. Today, at Inspire, we are announcing various new innovations in Stream Analytics that help further reduce time to value for solutions that are powered by real-time insights. These are as follows:
Bringing the power of real-time insights to Azure Event Hubs customers
Today, we are announcing one-click integration with Event Hubs. Available as a public preview feature, this allows Event Hubs customers to visualize incoming data and start writing a Stream Analytics query with one click from the Event Hubs portal. Once the query is ready, they can operationalize it in a few clicks and start deriving real-time insights. This will significantly reduce the time and cost of developing real-time analytics solutions.
One-click integration between Event Hubs and Azure Stream Analytics
Augmenting streaming data with SQL reference data support
Reference data is a static or slow changing dataset used to augment real-time data streams to deliver more
Reliance on cloud services continues to grow for industries, organizations, and people around the world. So now more than ever it is important that you can trust that the cloud solutions you rely on are secure, compliant with global standards and local regulations, keep data private and protected, and are fundamentally reliable. At Microsoft, we are committed to providing a trusted set of cloud services, giving you the confidence to unlock the potential of the cloud.
Over the past 12 months, Azure has operated core compute services at 99.995 percent average uptime across our global cloud infrastructure. However, at the scale Azure operates, we recognize that uptime alone does not tell the full story. We experienced three unique and significant incidents that impacted customers during this time period: a datacenter outage in the South Central US region in September 2018, Azure Active Directory (Azure AD) Multi-Factor Authentication (MFA) challenges in November 2018, and DNS maintenance issues in May 2019.
Building and operating a global cloud infrastructure of 54 regions made up of hundreds of evolving services is a large and complex task, so we treat each incident as an important learning moment. Outages and other service incidents are a challenge
In this blog post, we will take a closer look at pricing for Azure Premium Blob Storage, and its potential to reduce overall storage costs for some applications.
Premium Blob Storage is Azure Blob Storage powered by solid-state drives (SSDs) for block blobs and append blobs. For more information, see “Azure Premium Blob Storage is now generally available.” It is ideal for workloads that require very fast storage response times and/or have a high rate of operations. For more details on performance, see “Premium Block Blob Storage – a new level of performance.”
Azure Premium Blob Storage utilizes the same pay-as-you-go pricing model used by the standard general-purpose v2 (GPv2) hot, cool, and archive tiers. This means customers only pay for the volume of data stored per month and the quantity of operations performed.
The current blob pricing can be found on the Azure Storage pricing page. You will see that per-gigabyte (GB) data storage pricing decreases for colder tiers, while the inverse is true for operations: per-10,000-operations pricing decreases for hotter tiers. Premium data storage pricing is higher than hot data storage pricing. However, read and write operation pricing for premium is lower than hot read and write
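The trade-off described above—premium charges more per GB stored but less per operation—means total cost depends on the workload's operation rate. A minimal sketch of that cost model follows; the prices plugged in are placeholders, not current list prices, so substitute the figures from the Azure Storage pricing page for your region:

```python
# Monthly cost under the pay-as-you-go model described above:
# GB stored * per-GB price + operations * per-10,000-operations price.
def monthly_cost(gb, writes, reads,
                 gb_price, write_price_per_10k, read_price_per_10k):
    return (gb * gb_price
            + writes / 10_000 * write_price_per_10k
            + reads / 10_000 * read_price_per_10k)

# Placeholder prices: hot is cheaper per GB, premium is cheaper per op.
hot = monthly_cost(1_000, 50_000_000, 200_000_000, 0.018, 0.05, 0.004)
premium = monthly_cost(1_000, 50_000_000, 200_000_000, 0.15, 0.02, 0.002)
print(f"hot: ${hot:,.2f}  premium: ${premium:,.2f}")
```

With these illustrative numbers, the operation-heavy workload comes out cheaper on premium despite the higher storage rate—exactly the kind of application where Premium Blob Storage can reduce overall cost.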
Our customers continue to use the Azure Data Box family to move massive amounts of data into Azure. One of the regular requests we receive is for a larger-capacity option that retains the simplicity, security, and speed of the original Data Box. Last year at Ignite, we announced a new addition to the Data Box family that did just that: a preview of the petabyte-scale Data Box Heavy.
With thanks to those customers who provided feedback during the preview phase, I’m excited to announce that Azure Data Box Heavy has reached general availability in the US and EU!
How Data Box Heavy works
In many ways, Data Box Heavy is just like the original Data Box. You can order Data Box Heavy directly from the Azure portal, and copy data to it using standard file or object protocols. Data is automatically secured on the appliance using AES 256-bit encryption. After your data is transferred to Azure, the appliance is wiped clean according to National Institute of Standards and Technology (NIST) standards.
But Data Box Heavy is also designed for a much larger scale than the original Data Box. Data Box Heavy’s one petabyte of raw