Today, we are pleased to announce the release of new TSI developer tools, including an Azure Resource Manager (ARM) template, API code samples, and easy-to-follow documentation for developers. TSI’s developer tools will shorten the time it takes to get started developing. Using these developer tools, customers can more easily embed TSI’s platform into custom applications to power charts/graphs, compare data from different points in time, and dynamically explore trends and correlations in their data.
As organizations transition their go-to-market and business models from selling devices to selling services, they are developing companion applications that provide operational insights and analytics to their customers. Much of the data required to power these applications is time series, but large volumes of time series data can be very challenging to store and query. Time Series Insights (TSI) takes the burden of time series data management away from these organizations, and TSI’s platform capabilities enable developers to build applications that provide valuable insights to their customers.
Why time series data is difficult to embed in applications today
Time series data at IoT-scale can lead to high latency and long rendering times when querying traditional databases. Many customers have told us that it’s easy to hang
Azure Backup stands firm on the promise of simplicity, security, and reliability by giving customers a smooth and dependable experience across scenarios. Continuing on the enterprise data-protection promise, today we are excited to announce support for backup and restore of Azure virtual machines encrypted using a BitLocker Encryption Key (BEK) only, for both managed and unmanaged disks. This announcement augments the existing capability to back up VMs encrypted using both a BitLocker Encryption Key (BEK) and a Key Encryption Key (KEK). This support is available through the Azure portal and PowerShell.
With this release, Azure Backup provides:
Backup of VMs encrypted using BEK only, as well as both BEK and KEK: Azure Backup now supports backup of VMs encrypted using BEK only, along with the already supported scenario of BEK and KEK together. The BEK (secrets) and KEK (keys) backed up are encrypted, so they can be read and used only when restored back to the key vault by authorized users.
Backup of both managed and unmanaged disks in encrypted VMs: Application-consistent backup for both managed and unmanaged disks is now supported, which gives users the freedom to create any kind of encrypted VM and then back it up using Azure Backup.
Value proposition
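To make the BEK-only versus BEK + KEK distinction concrete, here is a minimal sketch that classifies a VM's disk encryption configuration. The field names (`enabled`, `diskEncryptionKey`, `keyEncryptionKey`) are assumptions modeled on the `encryptionSettings` block of a VM's OS disk in the Azure Compute resource model; verify them against the REST API reference before relying on them.

```python
def encryption_mode(os_disk_settings):
    """Classify a VM's disk encryption configuration.

    `os_disk_settings` is assumed to mirror the `encryptionSettings`
    block of a VM's OS disk (field names are an assumption here).
    """
    if not os_disk_settings or not os_disk_settings.get("enabled", False):
        return "not encrypted"
    has_bek = "diskEncryptionKey" in os_disk_settings
    has_kek = "keyEncryptionKey" in os_disk_settings
    if has_bek and has_kek:
        return "BEK + KEK"   # secret wrapped by a key-encryption key
    if has_bek:
        return "BEK only"    # the newly supported backup scenario
    return "unknown"

# Example: a hypothetical BEK-only VM (secret in Key Vault, no wrapping key)
sample = {
    "enabled": True,
    "diskEncryptionKey": {
        "secretUrl": "https://myvault.vault.azure.net/secrets/bek",
        "sourceVault": {"id": "/subscriptions/.../vaults/myvault"},
    },
}
print(encryption_mode(sample))  # BEK only
```

Both configurations are now backed up; the classification only matters for understanding which keys and secrets Azure Backup must protect alongside the disks.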
This feature provides:
Simplified experience: With this release,
We keep enriching the breadth of connectivity in Azure Data Factory to enable customers to ingest data from various data sources into Azure when building modern data warehouse solutions or data-driven SaaS applications. Today, we are excited to announce that Azure Data Factory now supports copying data from the following data stores using Copy Activity in V2. You can always find the full list in supported data stores, and click into each connector topic there to learn more.
- Amazon Marketplace Web Service (Beta)
- Azure Database for PostgreSQL
- Concur (Beta)
- Couchbase (Beta)
- Drill (Beta)
- Google BigQuery (Beta)
- Greenplum (Beta)
- HBase
- Hive
- HubSpot (Beta)
- Impala (Beta)
- Jira (Beta)
- Magento (Beta)
- MariaDB
- Marketo (Beta)
- Oracle Eloqua (Beta)
- Paypal (Beta)
- Phoenix
- Presto (Beta)
- QuickBooks (Beta)
- SAP Cloud for Customer (C4C)
- ServiceNow (Beta)
- Shopify (Beta)
- Spark
- Square (Beta)
- Xero (Beta)
- Zoho (Beta)
If you are using PowerShell or the .NET/Python SDK to author pipelines, make sure you upgrade to the December version to use these new features. For hybrid copy scenarios, note that these connectors are supported starting with Self-hosted Integration Runtime version 3.2.
You are invited to give them a try and provide us with feedback. We hope you find them helpful in your scenarios.
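For orientation, here is a rough sketch of what a V2 Copy Activity payload using one of the newly supported stores might look like, expressed as a Python dict. The type and property names (`GoogleBigQuerySource`, `BlobSink`, dataset references) are assumptions drawn from the general Copy Activity shape; consult the connector topic for the authoritative schema.

```python
import json

# Hypothetical sketch of a V2 Copy Activity reading from one of the newly
# supported stores (Google BigQuery here) into Blob storage. Exact type and
# property names are assumptions; check the connector documentation.
copy_activity = {
    "name": "CopyFromBigQuery",
    "type": "Copy",
    "inputs": [{"referenceName": "BigQueryDataset", "type": "DatasetReference"}],
    "outputs": [{"referenceName": "BlobDataset", "type": "DatasetReference"}],
    "typeProperties": {
        "source": {"type": "GoogleBigQuerySource", "query": "SELECT * FROM sales.orders"},
        "sink": {"type": "BlobSink"},
    },
}

print(json.dumps(copy_activity, indent=2))
```

The same source/sink pattern applies to the other connectors in the list; only the source type and its connection details change.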
Azure customers manage budgets at a workload level and need granular controls for monitoring spend on cloud services. Workloads sometimes share a subscription and yet need to be budgeted for individually. As a first step in making these granular controls available, we are previewing an ARM API to set and manage a budget at the subscription scope. Enterprise Agreement (EA) customers currently have a similar capability in the EA portal, at the department level. This release is the first step towards making the same set of capabilities work for you across the hierarchy of your management plane.
The budgets API enables you to set up a budget for a subscription, along with multiple notification thresholds. To illustrate, you might have a subscription where you set a budget of $1,000 and set up notifications at 25%, 50%, 75%, and 100%. These notifications would be triggered when your usage costs exceed $250, $500, $750, and $1,000, respectively.
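To make the arithmetic concrete, the sketch below computes the trigger amount for each notification threshold in that example and assembles an illustrative budget payload. The payload field names (`category`, `timeGrain`, `notifications`, and so on) are assumptions modeled loosely on the budgets API; see the API documentation for the exact schema.

```python
budget_amount = 1000.0
thresholds = [25, 50, 75, 100]  # percent of the budget

# Cost at which each notification fires
triggers = {pct: budget_amount * pct / 100 for pct in thresholds}
print(triggers)  # {25: 250.0, 50: 500.0, 75: 750.0, 100: 1000.0}

# Illustrative payload; field names are assumptions, not the exact schema.
budget = {
    "properties": {
        "category": "Cost",
        "amount": budget_amount,
        "timeGrain": "Monthly",
        "notifications": {
            f"actual_{pct}_percent": {
                "enabled": True,
                "operator": "GreaterThan",
                "threshold": pct,
                "contactEmails": ["finance@contoso.com"],  # hypothetical contact
            }
            for pct in thresholds
        },
    }
}
```

Each notification is keyed independently, so thresholds can be added or removed without touching the budget amount itself.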
Subscription budget API
The API documentation provides detailed guidance on the supported operations and payloads. The API allows multiple budgets to be created for a subscription over the duration of the budget. At the end of the duration, the budget resets and starts over. For this
Today we are pleased to announce the release of a new Azure Financial Services Regulated Workloads Blueprint.
The Azure Security and Compliance Blueprint Program provides automated solutions and guidance for rapid deployment of Azure services that meet specific regulatory requirements, reducing deployment time from weeks to a few hours. The new Financial Services Regulated Workloads Blueprint gives you an automated solution that helps guide you in storing and managing sensitive financial information, such as payment data, in Azure. The Financial Services Blueprint is designed to help customers meet compliance requirements outlined in the American Institute of Certified Public Accountants (AICPA) SOC 1 and SOC 2 standards, the Payment Card Industry Data Security Standard (PCI DSS) version 3.2, the Federal Financial Institutions Examination Council (FFIEC) guidance, and the Gramm-Leach-Bliley Act (GLBA).
Using the Financial Services Blueprint, you can deploy and securely configure an Azure SQL Database and a web application protected by security services such as Azure App Service Environment (ASE), the Web Application Firewall (WAF), and Azure Security Center (ASC). Automated templates and reference architectures are provided to help you implement the technical controls required to achieve a trusted, more secure, end-to-end compliant deployment.
The Financial Services
Azure Cosmos DB is Microsoft’s globally distributed, horizontally partitioned, multi-model database service. The service is designed to allow customers to elastically and independently scale throughput and storage across any number of geographical regions. Azure Cosmos DB offers guaranteed low latency at the 99th percentile, 99.999% high availability, predictable throughput, and multiple well-defined consistency models. Azure Cosmos DB is the first and only globally distributed database service in the industry today to offer comprehensive Service Level Agreements (SLAs) encompassing all four dimensions of global distribution that our customers care about most: throughput, latency at the 99th percentile, availability, and consistency. As a cloud service, Azure Cosmos DB has been carefully designed and engineered with multi-tenancy, horizontal scalability, and global distribution in mind.
We have just rolled out a few long-awaited changes and we wanted to share them with you:
The entry point for unlimited collections/containers is now 60% cheaper. In February, we lowered the entry point for unlimited containers, making them 75% cheaper. We continue making improvements in our service, and today we are pleased to announce that unlimited containers now have an entry point that is 60% cheaper than before. Instead of provisioning 2,500 RU/sec as a minimum, you can now
As an update to the Reporting APIs for Enterprise customers, we are releasing an updated usage details API. This is a first step in the consolidation of Azure cost- and usage-based APIs in the ARM (Azure Resource Manager) model. The updated usage details API will support:
- Migrating from a key-based authorization model to ARM-based authentication. The benefits of this authorization mode are an improved security posture and the ability to use ARM RBAC for authorization.
- Adding support for Web Direct subscriptions, with a few exceptions documented below.
- The ability to use filters and expand usage details.
- Calling the API for either a subscription scope, or a subscription and billing period scope. All calls for a subscription will return data for the current billing period.
- Filter criteria that support dates, resource groups, resources, and instances. Additional details on the filters are available in the Swagger.
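As a sketch of how such a call might be composed, the snippet below builds a request URL for a subscription and billing-period scope with a resource-group filter. The path segments, `api-version`, and parameter names are assumptions based on the general ARM consumption API pattern; verify them against the Swagger before use.

```python
from urllib.parse import urlencode

def usage_details_url(subscription_id, billing_period=None, filter_expr=None,
                      api_version="2018-01-31"):
    """Compose a hypothetical usage-details request URL.

    Path shape and api-version are assumptions; check the Swagger
    for the exact values.
    """
    scope = f"/subscriptions/{subscription_id}"
    if billing_period:
        # Subscription + billing period scope; omit for the current period.
        scope += f"/providers/Microsoft.Billing/billingPeriods/{billing_period}"
    params = {"api-version": api_version}
    if filter_expr:
        params["$filter"] = filter_expr
    return ("https://management.azure.com" + scope +
            "/providers/Microsoft.Consumption/usageDetails?" + urlencode(params))

url = usage_details_url("1111-2222", billing_period="201803",
                        filter_expr="properties/resourceGroup eq 'prod-rg'")
print(url)
```

Dropping the `billing_period` argument yields the plain subscription scope, which returns data for the current billing period.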
For Enterprise customers, reporting at a grain higher than the subscription is a work in progress, and until it is released, you will need to continue using the existing API. The consumption ARM API is the area we continue to invest in for cost-related APIs, with the goal of normalizing our APIs across
I am excited to announce the general availability (GA) of the Azure Site Recovery Deployment Planner for VMware and Hyper-V. This tool helps VMware and Hyper-V enterprise customers understand their on-premises networking requirements, as well as the Microsoft Azure compute and storage requirements, for successful Azure Site Recovery replication and for test failover or failover of their applications.
Apart from understanding infrastructure requirements, our customers also needed a way to estimate the total cost of disaster recovery (DR) to Azure. In this GA release, we have added a detailed estimate of the DR cost to Azure for your environment. You can generate a report with the latest Azure prices based on your subscription, the offer associated with your subscription, and the target Azure region, for the specified currency. The Deployment Planner report gives you the costs of compute, storage, network, and Azure Site Recovery licenses.
Key features of the tool
The Deployment Planner can be run without having to install any Azure Site Recovery components in your on-premises environment. The tool does not impact the performance of production servers, as no direct connection is made to them. All performance data is collected from the Hyper-V server or VMware vCenter Server/VMware vSphere ESXi Server, which hosts the production
With Azure Monitor’s diagnostic settings, you can set up your resource-level diagnostic logs and metrics to be streamed to any of three destinations: a storage account, an Event Hubs namespace, or Log Analytics. Sending to an Event Hubs namespace is a convenient way to stream Azure logs from any source into a custom logging solution, a third-party SIEM product, or another logging tool.
Previously, you could only route your resource diagnostic logs to an Event Hubs namespace, in which an event hub was created for each category of data sent. Now, you can optionally specify which event hub within the namespace should be used for a particular diagnostic setting. This is helpful if you are routing multiple types of logs to a single endpoint, for example, a SIEM connector. Rather than having to configure that endpoint to read from multiple event hubs, you can simply route all log types to a single event hub and have your endpoint listen to that one source.
You can try this out today in the Azure Portal by creating or modifying a diagnostic setting and selecting “Stream to an event hub”.
This can also be set up using a Resource Manager template.
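For illustration, a diagnostic-settings resource that routes everything to one named event hub might look roughly like the fragment below (expressed here as a Python dict for readability). The `eventHubName` property is the new knob described above; the surrounding field names follow the `Microsoft.Insights/diagnosticSettings` resource shape, but treat the exact property names, categories, and `apiVersion` as assumptions to verify against the template reference.

```python
import json

# Sketch of a diagnostic setting routing all logs to ONE named event hub.
# Exact property names, log categories, and apiVersion are assumptions.
diagnostic_setting = {
    "type": "Microsoft.Insights/diagnosticSettings",
    "apiVersion": "2017-05-01-preview",
    "name": "sendToSiem",
    "properties": {
        "eventHubAuthorizationRuleId": (
            "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
            "Microsoft.EventHub/namespaces/<ns>/authorizationRules/"
            "RootManageSharedAccessKey"
        ),
        # New: name a single event hub; omit this property to fall back to
        # the old behavior of one event hub per log category.
        "eventHubName": "all-logs",
        "logs": [{"category": "AuditEvent", "enabled": True}],
        "metrics": [{"category": "AllMetrics", "enabled": True}],
    },
}
print(json.dumps(diagnostic_setting, indent=2))
```

With this in place, a SIEM connector only needs to listen to the single `all-logs` hub instead of one hub per category.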
I am pleased to announce the renewal of the Singapore Multi-Tier Cloud Security (MTCS) Certification Level 3. As part of its commitment to customer satisfaction, Azure has adopted the MTCS standard to meet different cloud user needs for data sensitivity and business criticality. Azure has maintained its MTCS certification for the fourth consecutive year. This year, the scope has increased by 30%, catching up with the latest ISO 27001 scope and covering the latest data storage and analytics services, including Data Lake Store, Data Lake Analytics, SQL Server Stretch Database, Azure Cosmos DB, Azure Container Service, and more.
Developed by the Infocomm Media Development Authority (IMDA) of Singapore, the MTCS Standard 584:2015 is the world’s first cloud security standard that covers three different tiers of security requirements spanning different service types, including PaaS, IaaS, and SaaS. The standard comprises a total of 535 controls closely mapped to the ISO 27001 Information Security Management System (ISMS) standard, covering basic security in Level 1, more stringent governance and tenancy controls in Level 2, and reliability and resiliency for high-impact information systems in Level 3.
The MTCS standard seeks to drive cloud adoption across industries by giving clarity around the security service levels of Cloud Service