Category Archives: Updates

09 Apr

Continuous integration and deployment using Data Factory

Azure Data Factory (ADF) visual tools public preview was announced on January 16, 2018. With visual tools, you can iteratively build, debug, deploy, operationalize, and monitor your big data pipelines. You can now follow industry-leading best practices to do continuous integration and deployment of your ETL/ELT (extract, transform/load and load/transform) workflows across multiple environments (Dev, Test, Prod, etc.). Essentially, you can test your codebase changes and push the tested changes to a Test or Prod environment automatically.

The ADF visual interface now allows you to export any data factory as an ARM (Azure Resource Manager) template. Click the ‘Export ARM template’ button to export the template corresponding to a factory.

This generates two files:

- Template file: a JSON template containing all the data factory metadata (pipelines, datasets, etc.) corresponding to your data factory.
- Configuration file: contains environment parameters that differ for each environment (Dev, Test, Prod, etc.), such as the Storage connection and the Azure Databricks cluster connection.
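As a minimal sketch of how these two files work together (the file names here are hypothetical), the same exported template can be paired with a different parameters file per environment:

    import json

    # Hypothetical file names; the export produces one template file and one
    # configuration (parameters) file, and you keep a parameters file per environment.
    with open("arm_template.json") as f:
        template = json.load(f)        # shared across Dev, Test, and Prod

    with open("parameters_dev.json") as f:
        dev_params = json.load(f)      # Dev-specific values (connections, etc.)

    # Environment parameters typically hold the values that differ per
    # environment, e.g. storage and Databricks connection settings.
    for name, value in dev_params.get("parameters", {}).items():
        print(name, "->", value)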

You will create a separate data factory per environment. You will then use the same template file for each environment and have one configuration file per environment. Clicking the ‘Import ARM Template’ button will take you to


04 Apr

Improvements to SQL Elastic Pool configuration experience

We have made some great improvements to the SQL elastic pool configuration experience in the Azure portal. These changes are released alongside the new vCore-based purchasing model for elastic pools and single databases. Our goal is to simplify your experience configuring elastic pools and ensure you are confident in your configuration choices.

Changing service tiers for existing pools

Existing elastic pools can now be scaled up and down between service tiers, making it easy to discover the tier that best fits your business needs. You can switch between the DTU-based and the new vCore-based service tiers, and you can scale down your pool outside of business hours to save cost.
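As a rough illustration only, a tier change can also be driven through the ARM REST API; the api-version and sku values below are assumptions, not something this post specifies:

    import requests

    SUB, RG, SERVER, POOL = "<subscription-id>", "<resource-group>", "<server>", "<pool>"
    TOKEN = "<aad-bearer-token>"

    # Sketch: move an existing pool to the vCore-based General Purpose tier.
    # The api-version and sku names are assumptions; check the SQL REST reference.
    url = (
        f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
        f"/providers/Microsoft.Sql/servers/{SERVER}/elasticPools/{POOL}"
        "?api-version=2017-10-01-preview"
    )
    body = {"sku": {"name": "GP_Gen5", "tier": "GeneralPurpose", "capacity": 2}}
    resp = requests.patch(url, json=body, headers={"Authorization": f"Bearer {TOKEN}"})
    print(resp.status_code)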

Simplifying configuration of the pool and its databases

Elastic pools offer many settings for customers to customize. The new experience aims to separate and simplify each aspect of pool management: pool settings, database settings, and database management. This lets you reason about each aspect of the pool more easily while saving all settings changes in one batch.

Understanding your bill with new cost summary

Our new cost summary experience for elastic pools and single databases


03 Apr

Introducing a new way to purchase Azure monitoring services

Today customers rely on Azure’s application, infrastructure, and network monitoring capabilities to ensure their critical workloads are always up and running. It’s exciting to see the growth of these services and that customers are using multiple monitoring services to get visibility into issues and resolve them faster. To make it even easier to adopt Azure monitoring services, today we are announcing a new consistent purchasing experience across the monitoring services. Three key attributes of this new pricing model are:

1. Consistent pay-as-you-go pricing

We are adopting a simple “pay-as-you-go” model across the complete portfolio of monitoring services. You have full control and transparency, so you pay for only what you use. 

2. Consistent per gigabyte (GB) metering for data ingestion

We are changing the pricing model for data ingestion from “per node” to “per GB”. Customers told us that the value in monitoring came from the amount of data received and the insight built on top of that, rather than the number of nodes. In addition, this new model works best for the future of containers and microservices where the definition of a node is less clear. “Per GB” data ingestion is the new basis for pricing across application, infrastructure,
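To make the per-GB model concrete, here is a purely illustrative estimate; the rate below is a placeholder, and actual prices are published on the Azure pricing pages:

    # Illustrative only: assumed per-GB rate, not an actual published price.
    RATE_PER_GB_USD = 2.30
    FREE_GB_PER_MONTH = 0        # adjust if a free allowance applies

    ingested_gb = 150            # data ingested this month
    billable_gb = max(ingested_gb - FREE_GB_PER_MONTH, 0)
    print(f"Estimated monthly ingestion cost: ${billable_gb * RATE_PER_GB_USD:.2f}")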


02 Apr

Ingest, prepare, and transform using Azure Databricks and Data Factory

Today’s business managers depend heavily on reliable data integration systems that run complex ETL/ELT workflows (extract, transform/load and load/transform data). These workflows allow businesses to ingest data in various forms and shapes from different on-premises/cloud data sources, transform and shape the data, and gain actionable insights from it to make important business decisions.

With the general availability of Azure Databricks comes support for doing ETL/ELT with Azure Data Factory. This integration allows you to operationalize ETL/ELT workflows (including analytics workloads in Azure Databricks) using data factory pipelines that do the following:

- Ingest data at scale using 70+ on-premises/cloud data sources
- Prepare and transform (clean, sort, merge, join, etc.) the ingested data in Azure Databricks as a Notebook activity step in data factory pipelines
- Monitor and manage your end-to-end workflow

Take a look at a sample data factory pipeline where we ingest data from Amazon S3 into Azure Blob storage, process the ingested data using a Notebook running in Azure Databricks, and move the processed data into Azure SQL Data Warehouse.

You can parameterize the entire workflow (folder name, file name, etc.) using rich expression support and operationalize it by defining a trigger in data factory.
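For context, a Databricks Notebook activity inside such a pipeline is defined in JSON; the sketch below mirrors that structure as a Python dict, with a hypothetical notebook path and parameter names:

    # A minimal sketch of a Databricks Notebook activity definition; the
    # linked service name, notebook path, and parameters are hypothetical.
    notebook_activity = {
        "name": "TransformIngestedData",
        "type": "DatabricksNotebook",
        "linkedServiceName": {
            "referenceName": "AzureDatabricksLinkedService",
            "type": "LinkedServiceReference",
        },
        "typeProperties": {
            "notebookPath": "/Shared/transform-ingested-data",
            # Folder and file names flow in through data factory's
            # expression language, so the workflow stays parameterized.
            "baseParameters": {
                "inputFolder": "@pipeline().parameters.inputFolder",
                "inputFile": "@pipeline().parameters.inputFile",
            },
        },
    }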

Get started today!

We are excited for you


26 Mar

Support for tags in cost management APIs is now available

Cloud Cost Management (CCM) provides a rich set of APIs for detailed reporting on your Azure usage and charges. We continue to make the APIs more relevant, and with the growing adoption of tags in Azure, we’re announcing support for tags in both the Usage Details API and the Budgets API. Our support for tags will continue to improve across all APIs where grouping or filtering by tags is applicable. This release only supports tags for subscriptions under Enterprise Agreements (EA); we plan to support other subscription types in future releases.

Tags in usage details

The usage details API today supports filters for the following dimensions: date range, resource groups, and instances. With the most recent release, it now supports tags as well. The support for tags is not retroactive and only applies to usage reported after the tag was applied to the resource. Tag-based filtering and aggregation are supported by the $filter and $apply parameters, respectively. We will continue to add dimensions that can be used to filter and aggregate costs over time.
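A hedged sketch of a tag-filtered call to the Usage Details API follows; the exact $filter syntax for tags and the api-version are assumptions here, so consult the API reference for the authoritative form:

    import requests

    SUBSCRIPTION = "<subscription-id>"
    TOKEN = "<aad-bearer-token>"

    url = (
        f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
        "/providers/Microsoft.Consumption/usageDetails"
    )
    params = {
        "api-version": "2018-03-31",        # assumed version
        "$filter": "tags eq 'env:prod'",    # assumed tag-filter syntax
    }
    resp = requests.get(url, params=params, headers={"Authorization": f"Bearer {TOKEN}"})
    for item in resp.json().get("value", []):
        print(item.get("name"))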

Tags in budgets

Budgets can be created at the subscription or a resource group


20 Mar

Azure Redis Cache feature updates

We are pleased to announce that firewall and reboot functions are now supported in all three Azure Redis Cache tiers. We have made these previously premium-only features available to the Basic and Standard tiers at no additional cost. In addition, we are previewing the ability to pin your Redis instance to specific Availability Zone-enabled Azure regions.

Firewall

Firewall provides added security for your Azure Redis deployment. It lets you restrict which clients can connect to your Redis cache based on their IP addresses: you create a firewall rule for each IP address range that your Redis clients use. Once you enable the firewall by specifying at least one rule, only requests coming from IP addresses within the defined range(s) will be accepted by Redis. Redis monitoring endpoints are excluded from firewall rules, however; this prevents accidental network disconnects due to firewall settings and ensures that monitoring works uninterrupted.
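For example, a rule covering a single IP range could be created through the ARM REST API roughly as follows; the api-version is an assumption and the IP range is illustrative:

    import requests

    SUB, RG, CACHE = "<subscription-id>", "<resource-group>", "<cache-name>"
    TOKEN = "<aad-bearer-token>"

    # One firewall rule per client IP range; the rule name is arbitrary.
    url = (
        f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
        f"/providers/Microsoft.Cache/Redis/{CACHE}/firewallRules/allow-office"
        "?api-version=2017-10-01"            # assumed version
    )
    rule = {"properties": {"startIP": "203.0.113.0", "endIP": "203.0.113.255"}}
    resp = requests.put(url, json=rule, headers={"Authorization": f"Bearer {TOKEN}"})
    print(resp.status_code)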

Reboot

Reboot allows you to restart one or more nodes in your Redis Cache. This function is particularly useful for simulating cache failures and testing how your application would react to them. It is a highly requested feature on UserVoice. You can reboot any


20 Mar

7 month retirement notice: Access Control Service

Access Control Service, otherwise known as ACS, is officially being retired. ACS will remain available for existing customers until November 7, 2018. After this date, ACS will be shut down, causing all requests to the service to fail.

This blog post is a follow up to our original blog post announcing ACS retirement.

Classic Azure Portal retired April 2018

As of April 2, 2018, the classic Azure portal located at https://manage.windowsazure.com will be completely retired, and all requests will be redirected to the new Azure portal at https://portal.azure.com. ACS namespaces will not be listed in the new Azure portal at all. If you need to create, delete, enable, or disable an ACS namespace going forward, please contact Azure support. Starting May 1, you will not be able to create new ACS namespaces.

You can still manage existing namespace configurations by visiting the ACS management portal directly, located at https://{your-namespace}.accesscontrol.windows.net. This portal allows you to manage service identities, relying parties, identity providers, claims rules, and more. It will be available until November 7, 2018.

Who is affected by this change?

This announcement affects any customer who has created one or more ACS namespaces in their Azure subscriptions. If your apps


14 Mar

New offers in Azure Marketplace – February 2018

We continue to expand the Azure Marketplace ecosystem. In February 2018, 81 new offers successfully met the onboarding criteria and went live.

See details of the new offers below:

Sensitive Data Discovery and De-Id Tool (SDDT): SDDT simplifies and automates an organization’s compliance with GLBA, HIPAA, PCI, and GDPR.

Actian Vector Analytic Database Community Edition: Vector is the world’s fastest analytic database, designed from the ground up to exploit x86 architecture.

Dyadic EKM Server Image: Dyadic Enterprise Key Management (EKM) lets you manage and control keys in any application deployed in Azure.

Infection Monkey: Open source attack simulation tool to test the resilience of Azure deployments against cyber-attacks.

Maestro Server V6: The power of Profisee Base Server with GRM, SDK, Workflow, and Integrator.

BigDL Spark Deep Learning Library v0.3: Deep learning framework for distributed computing leveraging Spark architecture on Xeon CPUs. Feature parity with TF and Caffe, but with no GPU required.

Informatica Enterprise Data Catalog: Discover and understand data assets across your enterprise with an AI-powered data catalog.


12 Mar

New machine-assisted text classification on Content Moderator now in public preview

This blog post is co-authored by Ashish Jhanwar, Data Scientist, Microsoft

Content Moderator is part of Microsoft Cognitive Services and allows businesses to use machine-assisted moderation of text, images, and videos to augment human review.

The text moderation capability now includes a new machine learning-based text classification feature, which uses a trained model to flag potentially abusive, derogatory, or discriminatory language, such as slang, abbreviations, and offensive or intentionally misspelled words, for review.

In contrast to the existing text moderation service, which flags profanity terms, the text classification feature helps detect potentially undesired content that may be deemed inappropriate depending on context. In addition to conveying the likelihood of each category, it may recommend a human review of the content.

The text classification feature is in preview and supports the English language.

How to use

Content Moderator consists of a set of REST APIs. The text moderation API accepts an additional request parameter, classify=True. If you specify the parameter as true and the auto-detected language of your input text is English, the API outputs the additional classification insights shown in the following sections.
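A minimal sketch of such a request using the ProcessText/Screen endpoint; the region and subscription key are placeholders:

    import requests

    REGION = "<region>"               # e.g. westus
    KEY = "<subscription-key>"

    url = (
        f"https://{REGION}.api.cognitive.microsoft.com"
        "/contentmoderator/moderate/v1.0/ProcessText/Screen"
    )
    params = {"classify": "True", "language": "eng"}
    headers = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "text/plain"}
    resp = requests.post(url, params=params, headers=headers,
                         data="Sample text to screen.")
    # The classification insights appear alongside the usual screening output.
    print(resp.json().get("Classification"))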

If you specify the language as English for non-English text,


08 Mar

Update management, inventory, and change tracking in Azure Automation now generally available

Azure Automation provides the ability to automate, configure, and deploy updates across your hybrid environment using serverless automation. These capabilities are now generally available for all customers.

With the release of these new capabilities, you can now:

- Get an inventory of operating system resources, including installed applications and other configuration items (see the query sketch after this list).
- Get update compliance and deploy required fixes for Windows and Linux systems across hybrid environments.
- Track changes across services, daemons, software, registry, and files to promptly investigate issues.
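Inventory and change tracking data surface in the linked Log Analytics workspace, so one way to look at installed software programmatically is the Log Analytics query API; a rough sketch, assuming the ConfigurationData table that backs the inventory view:

    import requests

    WORKSPACE_ID = "<log-analytics-workspace-id>"
    TOKEN = "<aad-bearer-token>"

    # Count installed software by name from the inventory data.
    query = (
        "ConfigurationData"
        " | where ConfigDataType == 'Software'"
        " | summarize count() by SoftwareName"
    )
    resp = requests.post(
        f"https://api.loganalytics.io/v1/workspaces/{WORKSPACE_ID}/query",
        json={"query": query},
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    print(resp.json())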

These additional capabilities are now available from the Azure Resource Manager virtual machine (VM) experience as well as from the Automation account when managing at scale within the Azure portal.

Azure virtual machine integration

Integration with virtual machines enables update management, inventory, and change tracking for Windows and Linux computers directly from the VM blade.

With update management, you will always know the compliance status for Windows and Linux, and you can create scheduled deployments to orchestrate the installation of updates within a defined maintenance window. The ability to exclude specific updates is also available, with detailed troubleshooting logs to identify any issues during the deployment.

The inventory of your VM in-guest resources gives you visibility into installed applications as
