Whether you’re a new student, thriving startup, or the largest enterprise, you have financial constraints, and you need to know what you’re spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Azure Cost Management + Billing comes in.
We’re always looking for ways to learn more about your challenges and how Azure Cost Management + Billing can help you better understand where you’re accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:
Simplify financial reporting with cost allocation, now in preview
Connector for AWS is now generally available
Get pay-as-you-go rates for all Azure products and services
What’s new in Cost Management Labs
Expanded availability of resource tags in cost reporting
15 ways to optimize your Azure costs
New ways to save money with Azure
Upcoming changes to Azure usage data
Documentation updates
Let’s dig into the details.
As businesses continue to adapt to the realities of the current environment, operational resilience has never been more important. As a result, a growing number of customers have accelerated a move to the cloud, using Microsoft Azure NetApp Files to power critical pieces of their IT infrastructure, like Virtual Desktop Infrastructure, SAP applications, and mission-critical databases.
Today, we are releasing the preview of Azure NetApp Files cross region replication. With this new disaster recovery capability, you can replicate your Azure NetApp Files volumes from one Azure region to another in a fast and cost-effective way, protecting your data from unforeseeable regional failures. We’re also introducing important new enhancements to Azure NetApp Files to provide you with more data security, operational agility, and cost-saving flexibility.
Azure NetApp Files cross region replication
Azure NetApp Files cross region replication leverages NetApp SnapMirror® technology, so only changed blocks are sent over the network in a compressed, efficient format. This proprietary technology minimizes the amount of data required to replicate across regions, saving data transfer costs. It also shortens the replication time, so you can achieve a smaller Recovery Point Objective (RPO).
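The core idea of incremental replication can be sketched in a few lines. This is an illustration of the general technique (send only changed blocks, compressed), not the SnapMirror implementation; all names and the tiny block size are hypothetical.

```python
# Illustrative sketch of incremental block replication: only blocks that
# changed since the last sync cross the wire, and the delta is compressed.
import zlib

BLOCK_SIZE = 4  # bytes; tiny on purpose, just for illustration

def split_blocks(data: bytes, size: int = BLOCK_SIZE) -> list:
    return [data[i:i + size] for i in range(0, len(data), size)]

def changed_blocks(prev: bytes, curr: bytes) -> dict:
    """Return only the block indexes whose content differs from last sync."""
    old, new = split_blocks(prev), split_blocks(curr)
    delta = {}
    for i, block in enumerate(new):
        if i >= len(old) or old[i] != block:
            delta[i] = block
    return delta

def transfer_payload(delta: dict) -> bytes:
    """Compress the delta before sending, mimicking wire-efficient transfer."""
    return zlib.compress(b"".join(delta.values()))

prev = b"AAAABBBBCCCCDDDD"
curr = b"AAAABBXBCCCCDDDD"  # one block changed since the last sync
delta = changed_blocks(prev, curr)
# Only block 1 differs, so far less than the full volume goes on the wire.
```

Because the cost of each sync scales with the changed data rather than the volume size, replication finishes sooner and the achievable RPO shrinks.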
Over the next few months, we will continue to expand the capabilities of the Azure NetApp Files cross region replication preview.
Server Message Block (SMB) 3.0 introduced SMB Multichannel technology with Windows Server 2012 and the Windows 8 client. This feature allows SMB 3.x clients to establish multiple network connections to an SMB 3.0 server for greater performance, by using multiple network adapters and/or by taking advantage of NIC Receive Side Scaling (RSS). Today, we are announcing the preview of Azure Files SMB Multichannel on the premium tier. With this release, Azure Files clients can take advantage of this technology with premium file shares in the cloud.
Benefits of Azure Files SMB Multichannel
SMB Multichannel allows a client to establish multiple connections over the best available network paths, increasing performance through parallel processing. The gain comes from aggregating bandwidth across multiple NICs and, on RSS-capable NICs, from distributing input/output operations (IOs) across multiple CPUs with dynamic load balancing.
Benefits of Azure Files SMB Multichannel include:
Higher throughput: Makes this feature suitable for applications with large files and large IOs, such as media and entertainment (content creation and transcoding), genomics, and financial services risk analysis.
Increased input/output operations per second (IOPS): Especially useful for small-IO scenarios such as database applications.
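A rough way to reason about the throughput benefit is that achievable bandwidth aggregates across channels, up to whatever limit the share itself imposes. The sketch below is a hypothetical back-of-envelope model, not an SMB client; all the numbers in it are made up for illustration.

```python
# Toy model of SMB Multichannel throughput: per-channel bandwidth adds up,
# capped by the file share's own provisioned limit.

def aggregate_throughput_mbps(nic_throughputs_mbps, share_limit_mbps):
    """Sum per-channel throughput, capped at the share's limit (Mbps)."""
    return min(sum(nic_throughputs_mbps), share_limit_mbps)

# One 1,000 Mbps NIC vs. four of them against a 3,000 Mbps share limit:
single = aggregate_throughput_mbps([1000], 3000)      # limited by the NIC
multi = aggregate_throughput_mbps([1000] * 4, 3000)   # capped by the share
```

In the single-NIC case the client, not the share, is the bottleneck; with four channels the same share limit becomes reachable.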
Achieving cost efficiency in your cloud usage is more critical today than ever before.
At Azure Backup, we’re committed to helping you optimize your backup costs. Over the last few months, we’ve introduced a comprehensive collection of features that not only give you more visibility into your backup usage, but also help you take action to achieve significant cost savings.
To help you get started with this journey, below are five steps you can take to optimize your backup costs, without needing to compromise on the safety of your data.
Clean up backups for your deleted resources
If you are backing up resources that do not exist anymore, verify if you still need to retain the backups for these resources. Deleting unnecessary backups can help you save on your backup costs.
You can use the Optimize tab in our Backup Reports solution to gain visibility into all inactive resources, across all types of workloads being backed up. Once you have identified an inactive resource, you can investigate the issue further by navigating to the Azure resource blade for that resource. If you discover that the resource doesn’t exist anymore, you can choose to stop protection and delete backup data for that resource.
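The cleanup decision itself boils down to a set difference: anything you are still backing up that no longer exists is a candidate for review. The sketch below illustrates that logic with made-up inventory data; the real workflow uses the Optimize tab in Backup Reports and the Azure portal, not a script like this.

```python
# Hypothetical sketch: flag backed-up resources that no longer exist.

def find_inactive_backups(backed_up_ids, existing_ids):
    """Backed-up resources that no longer exist are cleanup candidates."""
    return set(backed_up_ids) - set(existing_ids)

backed_up = {"vm-web-01", "vm-db-01", "vm-batch-legacy"}   # made-up names
existing = {"vm-web-01", "vm-db-01"}
candidates = find_inactive_backups(backed_up, existing)
# For each candidate, confirm there is no retention requirement before you
# stop protection and delete its backup data.
```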
Many enterprises and organizations are moving their data to Microsoft Azure Blob storage for its massive scale, security capabilities, and low total cost of ownership. At the same time, they continue running many apps on different storage systems using the Network File System (NFS) protocol. Companies that use different storage systems due to protocol requirements are challenged by data silos, where data resides in different places and requires additional migration or app rewrite steps.
To help break down these silos and enable customers to run NFS-based applications at scale, we are announcing the preview of NFS 3.0 protocol support for Azure Blob storage. Azure Blob storage is the only storage platform that supports NFS 3.0 protocol over object storage natively (no gateway or data copying required), with object storage economics, which is essential for our customers.
One of our Media and Entertainment (M&E) customers said, “NFS access to blob storage will enable our customers to preserve their legacy data access methods when migrating the underlying storage to Azure Blob storage.” Other customers have requested NFS for blob storage so they can reuse the same code from an on-premises solution to access files while controlling the overall cost of the solution.
“When I first kicked off this Advancing Reliability blog series in my post last July, I highlighted several initiatives underway to keep improving platform availability, as part of our commitment to provide a trusted set of cloud services. One area I mentioned was fault injection, through which we’re increasingly validating that systems will perform as designed in the face of failures. Today I’ve asked our Principal Program Manager in this space, Chris Ashton, to shed some light on these broader ‘chaos engineering’ concepts, and to outline Azure examples of how we’re already applying these, together with stress testing and synthetic workloads, to improve application and service resilience.” – Mark Russinovich, CTO, Azure
Developing large-scale, distributed applications has never been easier, but there is a catch. Yes, infrastructure is provided in minutes thanks to your public cloud, there are many language options to choose from, swaths of open source code available to leverage, and abundant components and services in the marketplace to build upon. Yes, there are good reference guides that help give a leg up on your solution architecture and design, such as the Azure Well-Architected Framework and other resources in the Azure Architecture Center. But while application development has never been easier, building applications that stay resilient in the face of failure remains a challenge.
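The essence of fault injection is small: deliberately make a dependency fail and verify that the caller's resilience logic (retries, fallbacks) still produces a correct result. The sketch below illustrates that idea in miniature; it is not Azure's fault-injection tooling, and all names in it are hypothetical.

```python
# Minimal fault-injection sketch in the spirit of chaos engineering: wrap a
# dependency so it fails on demand, then confirm retry logic survives it.

class FlakyDependency:
    """Fails the first `failures` calls, then succeeds."""
    def __init__(self, failures):
        self.failures = failures
        self.calls = 0

    def fetch(self):
        self.calls += 1
        if self.calls <= self.failures:
            raise ConnectionError("injected fault")
        return "ok"

def call_with_retries(dep, attempts=3):
    """Retry on connection errors; real code would back off between tries."""
    last_error = None
    for _ in range(attempts):
        try:
            return dep.fetch()
        except ConnectionError as err:
            last_error = err
    raise last_error

result = call_with_retries(FlakyDependency(failures=2))  # survives 2 faults
```

The same experiment run with three injected faults against three attempts exposes the failure mode instead of hiding it, which is exactly the kind of designed-in validation fault injection is for.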
Organizations are changing how they run their businesses and many are looking to accelerate their move to the cloud to take advantage of the benefits that the cloud offers, including lower total cost of ownership (TCO) and improved flexibility and security, without sacrificing on performance, application compatibility, and availability. We are committed to delivering new innovations to help our customers easily migrate their business-critical applications to Azure.
Today, we are announcing the general availability of shared disks on Azure Disk Storage—enabling you to more easily migrate your existing on-premises Windows and Linux-based clustered environments to Azure. We are also announcing important new enhancements for Azure Disk Storage to provide you with more availability, security, and flexibility on Azure.
Azure shared disks general availability
With shared disks, Azure Disk Storage is the only shared block storage in the cloud that supports both Windows and Linux-based clustered or high-availability applications. This unique offering allows a single disk to be simultaneously attached and used from multiple virtual machines (VMs), enabling you to run your most demanding enterprise applications in the cloud, such as clustered databases, parallel file systems, persistent containers, and machine learning applications, without compromising on well-known deployment patterns for fast failover and high availability.
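With a disk attached to several VMs at once, write coordination is left to the cluster, commonly through SCSI persistent-reservation-style arbitration in which one node at a time holds the write reservation. The toy sketch below illustrates that arbitration pattern only; real clusters issue SCSI PR commands, and every name here is hypothetical.

```python
# Toy model of reservation-based arbitration on a shared disk: one node
# holds the write reservation; failover releases it so another can take over.

class SharedDisk:
    def __init__(self):
        self.reservation_holder = None

    def reserve(self, node):
        """Grant the reservation if free (or already held by this node)."""
        if self.reservation_holder in (None, node):
            self.reservation_holder = node
            return True
        return False

    def release(self, node):
        if self.reservation_holder == node:
            self.reservation_holder = None

disk = SharedDisk()
disk.reserve("node-a")   # node-a becomes the writer
disk.reserve("node-b")   # denied: node-a holds the reservation
disk.release("node-a")   # failover: node-a releases (or is fenced off)
disk.reserve("node-b")   # node-b takes over the disk
```

The fast-failover pattern the post mentions is exactly this handoff: the disk never has to detach and reattach, only the reservation moves.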
Azure Blob storage is a massively scalable object storage solution that serves from small amounts to hundreds of petabytes of data per customer across a diverse set of data types, including logging, documents, media, genomics, seismic processing, and more. Read the Introduction to Azure Blob storage to learn more about how it can be used in a wide variety of scenarios.
Increasing file size support for Blob storage
Many workloads that customers run on-premises today use files whose size is limited only by the filesystem, with maximums reaching into the exabyte range. Most usage doesn’t approach those limits, but specific workloads that make use of large files do scale into the tens of terabytes. We recently announced the preview of our new maximum blob size of 200 TB (specifically 209.7 TB), up from the current limit of 5 TB, a 40x increase! The new limit of over 200 TB per object is also far larger than the 5 TB maximum object size other vendors provide. This increase allows workloads that currently require multi-TB files to move to Azure without additional work to break up these large objects.
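The precise 209.7 TB figure falls out of the block blob limits: if a block blob is composed of up to 50,000 blocks of up to 4,000 MiB each (the preview limits, stated here as an assumption), the arithmetic works out as follows.

```python
# Checking the announcement's numbers from the assumed block blob limits:
# 50,000 blocks x 4,000 MiB per block, expressed in decimal terabytes.
MIB = 1024 ** 2
max_bytes = 50_000 * 4_000 * MIB        # bytes in a maximally sized blob
max_tb_decimal = max_bytes / 10 ** 12   # ~209.7 TB (decimal units)
growth = max_tb_decimal / 5             # vs. the previous 5 TB limit
```

Against the exact 209.7 TB figure the increase is closer to 42x; the post's "40x" rounds from the 200 TB headline number.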
A top of mind concern among our customers is keeping their applications and data workloads running and recoverable in the case of unforeseen events or disasters. For example, COVID-19 has presented daunting challenges for IT, which are only compounded by growing threats from ransomware or setbacks related to technical or operational failure. These considerations further highlight the importance of a plan to ensure business continuity. IT admins are looking to cloud-based backup and disaster recovery solutions as part of their business continuity strategy because of the ability to quickly onboard, scale based on storage needs, remotely manage, and save costs by avoiding additional on-premises investments.
Azure provides native cloud solutions for customers to implement simple, secure and cost-effective business continuity and disaster recovery (BCDR) strategies for their applications and data whether they are on-premises or on Azure. Once enabled, customers benefit from minimal maintenance and monitoring overhead, remote management capabilities, enhanced security, and the ability to immutably recover services in a timely and orchestrated manner. Customers can also use their preferred backup and disaster recovery providers from a range of our partner solutions to extend their on-premises BCDR solutions to Azure.
All of this is possible without the need to invest in additional on-premises infrastructure.
Selecting the right combination of virtual machines (VMs) and disks is extremely important, as the wrong mix can impact your application’s performance. One way to choose which VMs and disks to use is based on your disk performance pattern, but it’s not always easy. For example, a common scenario is unexpected or cyclical disk traffic, where the peak disk performance is temporary and significantly higher than the baseline performance pattern. We frequently get asked by our customers, “should I provision my VM for baseline or peak performance?” Over-provisioning can lead to higher costs, while under-provisioning can result in poor application performance and customer dissatisfaction. Azure Disk Storage now makes it easier for you to decide, and we’re pleased to announce VM bursting support on Azure virtual machines.
Get short-term, higher performance with no additional steps or costs
VM bursting, which is enabled by default, offers you the ability to achieve higher throughput for a short duration on your virtual machine instance with no additional steps or cost. Currently available on all Lsv2-series VMs in all supported regions, VM bursting is great for a wide range of scenarios, like handling unforeseen spikes in disk traffic smoothly or processing batch jobs whose disk traffic briefly exceeds the baseline.
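VM bursting is credit-based: running below the baseline banks credits, which are then spent to exceed the baseline for short periods, up to the burst limit. The simulation below is a toy model of that bucket; the numbers are illustrative, not actual SKU limits.

```python
# Toy credit-bucket model of VM bursting: unused headroom below the baseline
# accrues as credits; demand above the baseline spends them, up to the cap.

def simulate_bursting(demand, baseline, burst_limit, max_credits):
    """Return achieved throughput per time step under a simple credit bucket."""
    credits = 0.0
    achieved = []
    for want in demand:
        if want <= baseline:
            credits = min(max_credits, credits + (baseline - want))
            achieved.append(want)
        else:
            extra = min(want, burst_limit) - baseline
            extra = min(extra, credits)  # bursting spends banked credits
            credits -= extra
            achieved.append(baseline + extra)
    return achieved

# Two quiet steps bank credits; the spike is then served above the baseline
# until the credits run out.
out = simulate_bursting(demand=[10, 10, 100, 100], baseline=40,
                        burst_limit=80, max_credits=1000)
```

This is why bursting suits temporary spikes rather than sustained load: a demand pattern that never dips below the baseline never accrues credits to spend.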