In the next decade, nearly every consumer gadget, every household appliance, and every industrial device will be connected to the Internet. These connected devices will also become more intelligent, with the ability to predict, talk, listen, and more. The companies that manufacture these devices will have an opportunity to reimagine everything and fundamentally transform their businesses with new product offerings, new customer experiences, and new business models that differentiate them from the competition.
What all these everyday devices have in common is a tiny chip, often smaller than your thumbnail, called a microcontroller (MCU). The MCU functions as the brain of the device, hosting the compute, storage, memory, and operating system right on the device. Over 9 billion of these MCU-powered devices are built and deployed every year. For perspective, that’s more devices shipping every single year than the world’s entire human population. While few of these devices are connected to the Internet today, within just a few years this entire industry, all 9 billion or more devices per year, is on a path to include connected MCUs.
Internet connectivity is a two-way street. With these devices becoming a gateway to our homes, workplaces, and sensitive data, they also become targets for attack.
The preview for long-term backup retention in Azure SQL Database was announced in October 2016, providing you with a way to easily manage long-term retention for your databases – up to 10 years – with backups stored in your own Azure Backup Service Vault.
Based on feedback gathered during the preview, we are happy to announce a set of major enhancements to the long-term backup retention solution. With this update we have eliminated the need for you to deploy and manage a separate Backup Service Vault. Instead, SQL Database will use Azure Blob storage behind the scenes to store and manage your long-term backups. This new design enables more flexibility in your backup strategy and more control over costs.
This update brings you the following additional benefits:
More regional support – Long-term retention will be supported in all Azure regions and national clouds.
More flexible backup policies – You can customize the frequency of long-term backups for each database, with policies covering weekly, monthly, yearly, and specific week-of-year backups (see the sketch below).
Management of individual backups – You can delete backups that are not critical for compliance.
Streamlined configuration – No need to provision a separate Backup Service Vault.
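To make the shape of such a policy concrete, here is a minimal, hypothetical Python sketch. It does not use the SQL Database API; the class, field names, and retention windows are illustrative, and it only shows how the weekly, monthly, yearly, and week-of-year settings might interact when deciding whether a backup is still retained.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class LongTermRetentionPolicy:
    """Hypothetical model of a long-term retention policy."""
    weekly_retention_weeks: int = 12     # keep every weekly backup for 12 weeks
    monthly_retention_months: int = 12   # keep the first backup of each month for 12 months
    yearly_retention_years: int = 5      # keep one backup per year for 5 years
    week_of_year: int = 1                # ISO week whose backup serves as the yearly backup

def is_retained(policy: LongTermRetentionPolicy, backup_date: date, today: date) -> bool:
    """Return True if a weekly full backup taken on backup_date is still retained today."""
    age_days = (today - backup_date).days
    _, iso_week, _ = backup_date.isocalendar()

    # Yearly rule: the backup taken in the configured week of the year.
    if iso_week == policy.week_of_year and age_days <= policy.yearly_retention_years * 365:
        return True
    # Monthly rule (approximate): the first weekly backup taken in each calendar month.
    if backup_date.day <= 7 and age_days <= policy.monthly_retention_months * 31:
        return True
    # Weekly rule: every weekly backup within the weekly retention window.
    return age_days <= policy.weekly_retention_weeks * 7

# Example: a backup taken in week 1, 20 weeks ago, is kept because of the yearly rule.
policy = LongTermRetentionPolicy()
print(is_retained(policy, date(2018, 1, 3), date(2018, 5, 23)))
```

In practice the service applies the policy for you; you only configure the retention values through the portal, PowerShell, or the REST API.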
We’re excited to announce the preview of an additional purchasing model for the Azure SQL Database Elastic Pool and Single Database deployment options. Recently introduced with SQL Database Managed Instance, the vCore-based model reflects our commitment to customer choice by providing flexibility, control, and transparency. As with Managed Instance, the vCore-based model makes the Elastic Pool and Single Database options eligible for up to 30 percent savings* with the Azure Hybrid Benefit for SQL Server.
Optimize flexibility and performance with two new service tiers
The new vCore-based model introduces two service tiers, General Purpose and Business Critical. These tiers let you independently define and control compute and storage configurations, and optimize them to exactly what your application requires. If you’re considering a move to the cloud, the new model also provides a straightforward way to translate on-premises workload requirements to the cloud. General Purpose is designed for most business workloads and offers budget-oriented, balanced, and scalable compute and storage options. Business Critical is designed for business applications with high IO requirements and offers the highest resilience to failures.
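As a rough illustration of sizing compute and storage independently, here is a hypothetical Python sketch; the tier names come from the announcement, but the per-vCore and per-GB rates below are placeholders, not published prices.

```python
from dataclasses import dataclass

# Illustrative rates only -- real prices vary by region and over time.
VCORE_RATE_PER_HOUR = {"GeneralPurpose": 0.50, "BusinessCritical": 1.35}
STORAGE_RATE_PER_GB_MONTH = {"GeneralPurpose": 0.115, "BusinessCritical": 0.25}
HOURS_PER_MONTH = 730

@dataclass
class VCoreConfig:
    tier: str        # "GeneralPurpose" or "BusinessCritical"
    vcores: int      # compute is sized independently ...
    storage_gb: int  # ... from storage

def estimated_monthly_cost(cfg: VCoreConfig) -> float:
    compute = cfg.vcores * VCORE_RATE_PER_HOUR[cfg.tier] * HOURS_PER_MONTH
    storage = cfg.storage_gb * STORAGE_RATE_PER_GB_MONTH[cfg.tier]
    return compute + storage

# An on-premises workload using 8 cores and 500 GB maps directly onto a vCore configuration.
print(f"${estimated_monthly_cost(VCoreConfig('GeneralPurpose', 8, 500)):,.2f}/month")
```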
Choosing between DTU and vCore-based performance levels
You want the freedom to choose what’s right for your workloads, and we’re committed to providing that choice.
Today customers rely on Azure’s application, infrastructure, and network monitoring capabilities to ensure their critical workloads are always up and running. It’s exciting to see the growth of these services and that customers are using multiple monitoring services to get visibility into issues and resolve them faster. To make it even easier to adopt Azure monitoring services, today we are announcing a new consistent purchasing experience across the monitoring services. Three key attributes of this new pricing model are:
1. Consistent pay-as-you-go pricing
We are adopting a simple “pay-as-you-go” model across the complete portfolio of monitoring services. You have full control and transparency, so you pay for only what you use.
2. Consistent per gigabyte (GB) metering for data ingestion
We are changing the pricing model for data ingestion from “per node” to “per GB”. Customers told us that the value in monitoring comes from the amount of data received and the insight built on top of it, rather than from the number of nodes. In addition, this new model works best for the future of containers and microservices, where the definition of a node is less clear. “Per GB” data ingestion is the new basis for pricing across application, infrastructure, and network monitoring.
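As a simple illustration of “per GB” metering, the following Python sketch computes a monthly ingestion bill; the per-GB rate and included allowance are placeholder values, not published prices.

```python
def monthly_ingestion_cost(gb_ingested: float,
                           price_per_gb: float = 2.30,
                           included_gb: float = 5.0) -> float:
    """Illustrative per-GB metering: pay only for data ingested beyond any included allowance.
    The rate and allowance here are placeholders, not published prices."""
    billable_gb = max(0.0, gb_ingested - included_gb)
    return billable_gb * price_per_gb

# Under per-GB pricing the bill tracks the volume of telemetry actually ingested,
# regardless of how many (or how loosely defined) the emitting nodes are.
for gb in (2, 50, 1_000):
    print(f"{gb:>6} GB ingested -> ${monthly_ingestion_cost(gb):,.2f}")
```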
Customers rely on the Azure IoT Hub service and love the scale, performance, security, and reliability it provides for connecting billions of IoT devices sending trillions of messages. Azure IoT Hub is already powering production IoT solutions across all major market segments including retail, healthcare, automotive, manufacturing, energy, agriculture, oil and gas, life sciences, smart buildings, and many others. Today, we have a few exciting announcements to make about Azure IoT Hub.
Over the years we’ve noticed that many customers start their IoT journey by simply sending data from devices to the cloud. We refer to this as “device to cloud telemetry,” and it provides a significant benefit. We’ve also noticed that later in their IoT journey most customers realize they need the ability to send commands out to devices, i.e., “cloud to device messaging,” as well as full device management capabilities, so they can manage the software, firmware, and configuration of their devices.
At Microsoft, we believe in meeting customers where they are and providing a great experience for them to capture the benefits of IoT. Because of this, we’re excited to announce a new capability of Azure IoT Hub: a “device to cloud telemetry” tier, called the Azure IoT Hub basic tier.
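For example, a device whose only job is device-to-cloud telemetry needs little more than the following. This minimal sketch uses the azure-iot-device Python SDK (a later SDK than the one available at the time of this announcement); the connection string and payload are placeholders.

```python
import json
from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder: a real device connection string comes from your IoT hub's device registry.
CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>"

def send_telemetry(temperature_c: float) -> None:
    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
    try:
        client.connect()
        msg = Message(json.dumps({"temperature": temperature_c}))
        msg.content_type = "application/json"
        msg.content_encoding = "utf-8"
        client.send_message(msg)  # device-to-cloud telemetry
    finally:
        client.disconnect()

send_telemetry(21.5)
```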
Two new Microsoft Azure regions in Australia are available to customers, making Microsoft the only global provider to deliver cloud services specifically designed to address the requirements of the Australian and New Zealand governments and critical national infrastructure, including banks, utilities, transport and telecommunications.
We build our cloud infrastructure to serve the needs of our customers by delivering innovation globally and listening locally. To support the mission-critical work of crucial organizations in Australia and New Zealand, we’re delivering our global cloud platform through a unique partnership with Canberra Data Centres.
A key to our new Australia Central regions is the ability for customers to deploy their own applications and infrastructure within Canberra Data Centres, directly connected via Azure ExpressRoute to Microsoft’s global network. This offers a great deal of flexibility and certainty about network performance and security, making these regions ideally suited to the complex challenge of modernising mission-critical applications over time. Australian federal government customers can leverage their Intra Government Communications Network (ICON) for direct connectivity.
With heightened scrutiny of supply chain assurance in government and critical national infrastructure, we are proud to deliver these services in partnership with Australian-owned Canberra Data Centres. They are the premier data centre
We introduced Azure Availability Zones during Microsoft Ignite as part of our continuing expansion of Azure’s support for the most demanding, mission-critical workloads. Today I’m excited to announce the general availability of Availability Zones beginning with select regions in the United States and Europe.
With Availability Zones, in addition to the broadest global coverage, Azure now offers the most comprehensive resiliency strategy in the industry, from mitigating rack-level failures with availability sets to protecting against large-scale events with failover to separate regions. Within a region, Availability Zones increase fault tolerance with physically separated locations, each with independent power, network, and cooling.
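Conceptually, an application benefits from zones by spreading its instances across them so that the loss of any single zone leaves it running in the remaining ones. The following Python sketch is purely illustrative; the zone labels and instance names are hypothetical.

```python
from itertools import cycle

# Hypothetical sketch: place instances round-robin across the zones of a region so that
# the loss of any single zone (power, network, or cooling) leaves the service running.
ZONES = ("1", "2", "3")

def spread_across_zones(instance_names, zones=ZONES):
    """Round-robin placement of instances across availability zones."""
    return {name: zone for name, zone in zip(instance_names, cycle(zones))}

instances = [f"web-vm-{i}" for i in range(6)]
for name, zone in spread_across_zones(instances).items():
    print(f"{name} -> zone {zone}")
```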
For many companies, especially those in regulated industries that are increasingly moving mission-critical applications to the cloud, resiliency and business continuity have become a crucial focus. From online commerce systems in retail to customer-facing applications in financial services, the stakes are high for organizations and enterprises to deliver for their customers. Even a minor issue can have a major impact on a company’s brand reputation, customer satisfaction, and bottom line. In this environment, it’s imperative to develop applications with the highest operational standards, anchored by a multi-layered resiliency approach.
“Availability Zones give us the combination of
We are excited to announce the general availability of Azure SQL Data Warehouse in three additional regions: Japan West, Australia East, and India West. These additional locations bring the product’s worldwide availability to 33 regions, more than any other major cloud data warehouse provider. With general availability, you can now provision SQL Data Warehouse across all 33 regions with a financially backed SLA of 99.9 percent availability.
SQL Data Warehouse is a high-performance, secure, and compliant SQL analytics platform offering you a SQL-based view across data and a fast, fully managed, petabyte-scale cloud solution. It is elastic, enabling you to provision in minutes and scale up to 60 times larger in seconds. It comes standard with geo-backups, which enable geo-resiliency of your data and allow your data warehouse to be restored to any region in Azure in the case of a region-wide failure.
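For instance, one way to scale compute is a single T-SQL ALTER DATABASE statement. The following Python sketch issues it through pyodbc; the server, credentials, database name, and target service objective are all placeholders.

```python
import pyodbc

# Placeholders: substitute your own server, database, and credentials.
conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<your-server>.database.windows.net;"
    "DATABASE=master;UID=<user>;PWD=<password>"
)

# ALTER DATABASE cannot run inside a transaction, so open the connection with autocommit.
with pyodbc.connect(conn_str, autocommit=True) as conn:
    # Change the service objective (compute size) of the data warehouse.
    conn.execute("ALTER DATABASE MyDataWarehouse MODIFY (SERVICE_OBJECTIVE = 'DW1000')")
```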
Azure regions provide multiple, physically separated and isolated availability zones connected through low latency, high throughput, and highly redundant networking. Starting today, customers can leverage these advanced features across 33 regions.
Begin today and experience the speed, scale, elasticity, security, and ease of use of a cloud-based data warehouse for yourself. You can see
Azure Load Balancer is a network load balancer offering high scalability, high throughput, and low latency for TCP and UDP load balancing.
Today, we are excited to announce the new Standard SKU of Azure Load Balancer. The Standard SKU adds 10x scale, more features, and deeper diagnostic capabilities compared with the existing Basic SKU. The new offering is designed to handle millions of flows per second and built to scale to even higher loads. The Standard and Basic Load Balancer options share the same APIs, giving our customers the flexibility to pick whichever best matches their needs.
Below are some of the important features of the new Standard SKU:
Vastly increased scalability
Standard Load Balancer can distribute network traffic across up to 1,000 VM instances in a backend pool, a 10x scale improvement over the existing Basic SKU. One or more large virtual machine scale sets can be configured behind a single highly available IP address, and the health and availability of each instance are managed and monitored by health probes.
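To illustrate the idea, here is a purely hypothetical Python sketch of a backend pool in which health probes take instances in and out of rotation and new flows are spread across the healthy instances by hashing the flow’s 5-tuple. The class and its methods are illustrative, not an Azure API.

```python
import hashlib

class BackendPool:
    """Hypothetical sketch of how a load balancer picks a healthy backend instance."""

    def __init__(self, instances):
        self.instances = list(instances)   # up to 1,000 instances with the Standard SKU
        self.healthy = set(self.instances) # kept up to date by health probes

    def probe_result(self, instance, is_healthy):
        """A health probe marks an instance in or out of rotation."""
        (self.healthy.add if is_healthy else self.healthy.discard)(instance)

    def pick(self, src_ip, src_port, dst_ip, dst_port, protocol):
        """Hash the flow's 5-tuple and map it onto the healthy instances."""
        candidates = sorted(self.healthy)
        if not candidates:
            raise RuntimeError("no healthy backends")
        key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{protocol}".encode()
        digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
        return candidates[digest % len(candidates)]

pool = BackendPool([f"vm-{i}" for i in range(4)])
pool.probe_result("vm-2", is_healthy=False)   # a probe failure takes vm-2 out of rotation
print(pool.pick("10.0.0.5", 50123, "10.0.1.4", 443, "TCP"))
```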
Versatility within the VNet
The new Standard Load Balancer spans an entire virtual network (VNet). Any virtual machine in the VNet can be added to a backend pool.
It has been an exciting few months since we announced the public preview of Azure Files share snapshots, with customers taking advantage of out-of-the-box snapshot capabilities for their Azure file shares. Today, we are excited to announce the general availability of Azure Files share snapshots globally in all Azure clouds. Share snapshots provide a way to make incremental backups of Server Message Block (SMB) shares in Azure Files. Storage administrators can use snapshots directly, and backup providers can now leverage this capability to integrate Azure Files backup and restore into their products.
Key value proposition
Incremental and fast – Only changes made to the base data are stored in a snapshot. If data is available on the base share, it is not duplicated in any snapshot. If nothing changes after you create a snapshot, the size of the snapshot remains zero; this is true even for the very first snapshot, which means a snapshot never duplicates data that still lives on the share. This makes snapshots time-, space-, and cost-efficient, and it minimizes the time required to create one: a snapshot of a share can be created almost instantly. While snapshots are taken at the share level, you can restore individual files from a snapshot.
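As an example of taking a snapshot and reading from it, here is a minimal sketch using the azure-storage-file-share Python package (a newer SDK than the one available at the time of this announcement); the connection string, share name, and file path are placeholders.

```python
from azure.storage.fileshare import ShareClient

# Placeholders: use your own storage account connection string and share name.
CONNECTION_STRING = (
    "DefaultEndpointsProtocol=https;AccountName=<account>;"
    "AccountKey=<key>;EndpointSuffix=core.windows.net"
)

share = ShareClient.from_connection_string(CONNECTION_STRING, share_name="myshare")

# Create a point-in-time, read-only snapshot of the whole share.
snapshot = share.create_snapshot()
print("Snapshot taken at:", snapshot["snapshot"])

# Read a file as it existed at snapshot time by opening the share at that snapshot.
snapshot_share = ShareClient.from_connection_string(
    CONNECTION_STRING, share_name="myshare", snapshot=snapshot["snapshot"]
)
data = snapshot_share.get_file_client("docs/report.txt").download_file().readall()
```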