New offers in Azure Marketplace – February 2018

We continue to expand the Azure Marketplace ecosystem. In February 2018, 81 new offers successfully met the onboarding criteria and went live.

See details of the new offers below:

Sensitive Data Discovery and De-Id Tool (SDDT): SDDT simplifies and automates an organization's compliance with the GLBA, HIPAA, PCI, and GDPR.

Actian Vector Analytic Database Community Edition: Vector is the world's fastest analytic database, designed from the ground up to exploit the x86 architecture.

Dyadic EKM Server Image: Dyadic Enterprise Key Management (EKM) lets you manage and control keys in any application deployed in Azure.

Infection Monkey: An open source attack simulation tool to test the resilience of Azure deployments against cyber-attacks.

Maestro Server V6: The power of Profisee Base Server with GRM, SDK, Workflow, and Integrator.

BigDL Spark Deep Learning Library v0.3: Deep learning framework for distributed computing leveraging the Spark architecture on Xeon CPUs. Feature parity with TF and Caffe, but with no GPU required.

Informatica Enterprise Data Catalog: Discover and understand data assets across your enterprise with an AI-powered data catalog.



New machine-assisted text classification on Content Moderator now in public preview

This blog post is co-authored by Ashish Jhanwar, Data Scientist, Microsoft

Content Moderator is part of Microsoft Cognitive Services, allowing businesses to use machine-assisted moderation of text, images, and videos that augments human review.

The text moderation capability now includes a new machine-learning-based text classification feature, which uses a trained model to flag potentially abusive, derogatory, or discriminatory language, including slang, abbreviated words, offensive terms, and intentionally misspelled words, for review.

In contrast to the existing text moderation service, which flags profanity terms, the text classification feature helps detect potentially undesired content that may be deemed inappropriate depending on context. In addition to conveying the likelihood of each category, it may recommend a human review of the content.

The text classification feature is in preview and supports the English language.

How to use

Content Moderator consists of a set of REST APIs. The text moderation API accepts an additional request parameter, classify=True. If you specify the parameter as true and the auto-detected language of your input text is English, the API will output the additional classification insights shown in the following sections.
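As a minimal sketch, a call with classify=True might look like the following; the westus endpoint and the subscription key are placeholders, and requests is just one way to invoke the REST API:

```python
import requests

# Assumed regional endpoint; substitute your own Content Moderator resource's region.
ENDPOINT = ("https://westus.api.cognitive.microsoft.com"
            "/contentmoderator/moderate/v1.0/ProcessText/Screen")

response = requests.post(
    ENDPOINT,
    params={"classify": "True", "language": "eng"},  # classify=True enables the new insights
    headers={
        "Content-Type": "text/plain",
        "Ocp-Apim-Subscription-Key": "<your-subscription-key>",  # placeholder key
    },
    data="Sample text to screen for undesired content.",
)

result = response.json()
# With classify=True and English input, the response includes a Classification
# element carrying per-category scores and a ReviewRecommended flag.
print(result.get("Classification"))
```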

If you specify the language as English for non-English text,



Update management, inventory, and change tracking in Azure Automation now generally available

Azure Automation provides the ability to automate, configure, and deploy updates across your hybrid environment using serverless automation. These capabilities are now generally available for all customers.

With the release of these new capabilities, you can now:

Get an inventory of operating system resources, including installed applications and other configuration items.

Get update compliance and deploy required fixes for Windows and Linux systems across hybrid environments.

Track changes across services, daemons, software, registry, and files to promptly investigate issues.

These additional capabilities are now available from the Azure Resource Manager virtual machine (VM) experience as well as from the Automation account when managing at scale within the Azure portal.

Azure virtual machine integration

Integration with virtual machines enables update management, inventory, and change tracking for Windows and Linux computers directly from the VM blade.

With update management, you will always know the compliance status for Windows and Linux, and you can create scheduled deployments to orchestrate the installation of updates within a defined maintenance window. The ability to exclude specific updates is also available, with detailed troubleshooting logs to identify any issues during the deployment.
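As a rough sketch of scheduling such a deployment programmatically, an update deployment can be created through the Automation resource provider's softwareUpdateConfigurations endpoint; the names, schedule, excluded KB number, and body shape below are assumptions for illustration, not a verified schema:

```python
import requests

# Placeholders throughout; TOKEN is assumed to be a valid ARM bearer token.
TOKEN = "<arm-bearer-token>"
SUB, RG, ACCOUNT = "<subscription-id>", "<resource-group>", "<automation-account>"

url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
       f"/providers/Microsoft.Automation/automationAccounts/{ACCOUNT}"
       f"/softwareUpdateConfigurations/weekly-patching"
       f"?api-version=2017-05-15-preview")

body = {
    "properties": {
        "updateConfiguration": {
            "operatingSystem": "Windows",
            # Limit to critical/security updates and exclude a (hypothetical) KB.
            "windows": {
                "includedUpdateClassifications": "Critical, Security",
                "excludedKbNumbers": ["168934"],
            },
            "azureVirtualMachines": [
                f"/subscriptions/{SUB}/resourceGroups/{RG}"
                "/providers/Microsoft.Compute/virtualMachines/<vm-name>"
            ],
            "duration": "PT2H",  # the defined maintenance window
        },
        "scheduleInfo": {
            "frequency": "Week",
            "interval": 1,
            "startTime": "2018-03-15T19:00:00",
            "timeZone": "UTC",
        },
    }
}

resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
```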

The inventory of your VM in-guest resources gives you visibility into installed applications as



Faster Metric Alerts for Logs now in limited public preview

I am happy to share the public preview of a new capability – Metric Alerts on Logs for OMS Log Analytics. With this capability, customers can get lower latency alerts from logs. Interested…read on!

Customers rely on alerts in Azure monitoring tools to stay on top of issues. While log alerts are popular, one concern is the time it takes to find the trace patterns and trigger alerts. We are happy to share the limited public preview of the new Metric Alerts for Logs capability, which brings the time it takes to generate a log alert down to under 5 minutes.

The Metric Alerts on Logs preview currently supports the following log types on OMS Log Analytics: heartbeat, perf counters (including those from SCOM), and update. To see the full list of metrics generated from logs, see our documentation. In the near future, we plan to expand this list to include events.

In this blog post, we will walk through the new feature. But first, for those who are curious, we will do a quick peek under the hood to show how it all works.   

Monitoring tools rely on telemetry emitted by the underlying resources. This telemetry is



Azure Data Lake launches in the West Europe region

Azure Data Lake Store and Azure Data Lake Analytics are now generally available in the West Europe region, in addition to the previously announced regions of East US 2, Central US, and North Europe.

Azure Data Lake Store is a hyperscale enterprise data lake in the cloud that is secure, massively scalable, and built to the open HDFS standard. Data from disparate data sources can be brought together into a single data lake so that all your analytics can run in one place. From first-class integration with AAD to fine-grained access control, built-in enterprise-grade security makes managing security easy for even the largest organizations. With no limits on the size of data and the ability to run massively parallel analytics, you can now unlock value from all your analytics data at ultra-fast speeds.
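For instance, here is a minimal sketch of landing data in the store with the azure-datalake-store Python package; the tenant, app registration, and store name are placeholders:

```python
from azure.datalake.store import core, lib, multithread

# Placeholder service-principal credentials and store name.
token = lib.auth(tenant_id="<tenant-id>",
                 client_id="<app-id>",
                 client_secret="<app-secret>")
adls = core.AzureDLFileSystem(token, store_name="<your-adls-store>")

# Upload a local file into the lake; the transfer is chunked and parallelized.
multithread.ADLUploader(adls, lpath="sales.csv", rpath="/raw/sales.csv",
                        nthreads=16, overwrite=True)

print(adls.ls("/raw"))
```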

Azure Data Lake Analytics is a distributed on-demand analytics job service that dynamically scales so you can focus on achieving your business goals, not on managing distributed infrastructure. The analytics service can handle jobs of any scale by letting you select how many parallel compute resources a job can scale to. You only pay for your job when it is running, making it cost-effective.
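As a sketch of that model, a U-SQL job can be submitted with an explicit degree of parallelism through the azure-mgmt-datalake-analytics package; the account name, credentials, and script below are placeholders, and the exact SDK surface may differ by version:

```python
import uuid
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.datalake.analytics.job import DataLakeAnalyticsJobManagementClient
from azure.mgmt.datalake.analytics.job.models import JobInformation, USqlJobProperties

# Placeholder service-principal credentials scoped to Data Lake.
credentials = ServicePrincipalCredentials(client_id="<app-id>", secret="<secret>",
                                          tenant="<tenant-id>",
                                          resource="https://datalake.azure.net/")
job_client = DataLakeAnalyticsJobManagementClient(credentials,
                                                  "azuredatalakeanalytics.net")

# Placeholder U-SQL script.
script = ('@out = SELECT * FROM (VALUES (1)) AS T(a); '
          'OUTPUT @out TO "/output/out.csv" USING Outputters.Csv();')

job = JobInformation(
    name="sample-job",
    type="USql",
    degree_of_parallelism=10,  # how many parallel compute resources the job may use
    properties=USqlJobProperties(script=script),
)
job_client.job.create("<adla-account>", uuid.uuid4(), job)
```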



Azure Load Balancer to become more efficient

Azure recently introduced an advanced, more efficient Load Balancer platform. This platform adds a whole new set of abilities for customer workloads using the new Standard Load Balancer. One of the key additions the new Load Balancer platform brings is a simplified, more predictable, and more efficient outbound port allocation algorithm.

While already integrated with Standard Load Balancer, we are now bringing this advantage to the rest of Azure.

Load Balancer and Source NAT

Azure deployments use one or more of three scenarios for outbound connectivity, depending on the customer’s deployment model and the resources utilized and configured. Azure uses Source Network Address Translation (SNAT) to enable these scenarios. When multiple private IP addresses or roles share the same public IP (a public IP address assigned to a Load Balancer, or an automatically assigned public IP address for standalone VMs), Azure uses port masquerading SNAT (PAT) to translate private IP addresses to public IP addresses using the ephemeral ports of the public IP address. PAT does not apply when Instance Level Public IP addresses (ILPIP) are assigned.
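To make port masquerading concrete, here is a small illustrative model (not Azure's actual algorithm) of how outbound flows from private instances sharing one public IP are rewritten onto preallocated ephemeral port slices:

```python
# Illustrative-only sketch of port-masquerading SNAT (PAT): each instance behind
# the shared public IP gets a preallocated slice of the ephemeral port range, and
# each distinct outbound flow consumes one port from its instance's slice.
PUBLIC_IP = "203.0.113.10"
PORTS_PER_INSTANCE = 1024  # assumed preallocation size for the sketch

class PatTable:
    def __init__(self, instances):
        base = 1024
        self.free = {inst: list(range(base + i * PORTS_PER_INSTANCE,
                                      base + (i + 1) * PORTS_PER_INSTANCE))
                     for i, inst in enumerate(instances)}
        self.nat = {}  # (instance, private_port, destination) -> public ephemeral port

    def translate(self, instance, private_port, destination):
        key = (instance, private_port, destination)
        if key not in self.nat:
            if not self.free[instance]:
                raise RuntimeError(f"SNAT port exhaustion for {instance}")
            self.nat[key] = self.free[instance].pop(0)
        return (PUBLIC_IP, self.nat[key])

pat = PatTable(["10.0.0.4", "10.0.0.5"])
print(pat.translate("10.0.0.4", 51000, ("93.184.216.34", 443)))  # ('203.0.113.10', 1024)
print(pat.translate("10.0.0.5", 51000, ("93.184.216.34", 443)))  # ('203.0.113.10', 2048)
```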

For the cases where multiple instances share a public IP address, each instance behind an Azure Load Balancer VIP is pre-allocated a fixed number of ephemeral ports



Unlock Query Performance with SQL Data Warehouse using Graphical Execution Plans

The Graphical Execution Plan feature within SQL Server Management Studio (SSMS) is now supported for SQL Data Warehouse (SQL DW)! With a click of a button, you can create a graphical representation of a distributed query plan for SQL DW.

Before this enhancement, query troubleshooting for SQL DW was often a tedious process that required you to run the EXPLAIN command. SQL DW customers can now seamlessly and visually debug query plans to identify performance bottlenecks directly within the SSMS window. This extends the query troubleshooting experience by displaying costly data movement operations, which are the most common causes of slow distributed query plans. Below is a simple example of troubleshooting a distributed query plan with SQL DW leveraging the Graphical Execution Plan.
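For comparison, the older EXPLAIN route looked roughly like this; the connection string and query are placeholders, and pyodbc is just one way to issue the command:

```python
import pyodbc

# Placeholder connection string for a SQL DW database.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<server>.database.windows.net;DATABASE=<dw>;UID=<user>;PWD=<password>"
)
cursor = conn.cursor()

# EXPLAIN prefixes a query and returns the distributed plan as a single XML value,
# which previously had to be read by hand to spot data movement operations.
cursor.execute("EXPLAIN SELECT COUNT(*) FROM dbo.FactSales AS f "
               "JOIN dbo.DimCustomer AS c ON f.CustomerKey = c.CustomerKey")
plan_xml = cursor.fetchone()[0]
print(plan_xml)
```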

The view below displays the estimated execution plan for a query. As we can see, this is an incompatible join, which occurs when there is a join between two tables distributed on different columns. An incompatible join will create a ShuffleMove operation, where temp tables are created on every distribution to satisfy the join locally before streaming the results back to the user. The ShuffleMove has become a performance bottleneck for this query.
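One common remedy, sketched below with the same hypothetical tables, is to re-create one side of the join hash-distributed on the join column via CTAS, making the join distribution-compatible and eliminating the ShuffleMove:

```python
# Reuses the pyodbc connection from the previous sketch; table and column names
# are hypothetical. Redistributing dbo.FactSales on the join key lets each
# distribution satisfy the join locally, so no ShuffleMove is needed.
conn.autocommit = True  # DDL such as CTAS should run outside a user transaction

cursor.execute("""
CREATE TABLE dbo.FactSales_redist
WITH (DISTRIBUTION = HASH(CustomerKey))
AS SELECT * FROM dbo.FactSales;
""")
cursor.execute("RENAME OBJECT dbo.FactSales TO FactSales_old;")
cursor.execute("RENAME OBJECT dbo.FactSales_redist TO FactSales;")
```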




New Azure GxP guidelines help pharmaceutical and biotech customers build GxP solutions

We recently released a detailed set of GxP qualification guidelines for our Azure customers. These guidelines give life sciences organizations, such as pharmaceutical and biotechnology companies, a comprehensive toolset for building solutions that meet GxP compliance regulations. 

GxP is a general abbreviation for “good practice” quality guidelines and regulations. Technology systems that use GxP processes such as Good Laboratory Practices (GLP), Good Clinical Practices (GCP), and Good Manufacturing Practices (GMP) require validation of adherence to GxP. Solutions are considered qualified when they can demonstrate the ability to fulfill GxP requirements. GxP regulations include pharmaceutical requirements, such as those outlined in the U.S. Food and Drug Administration CFR Title 21 Part 11, and EU GMP Annex 11.  

Life sciences organizations are increasingly moving to the cloud to increase efficiency and reduce costs, but in order to do so they must be able to select a cloud service provider with processes and controls that help to assure the confidentiality, integrity, and availability of data stored in the cloud. Of equal importance are those processes and controls that must be implemented by life sciences customers to ensure that GxP systems are maintained in a secured and validated state.

Life sciences organizations building GxP



Cray in Azure for weather forecasting

When we announced our partnership with Cray, it was very exciting news. I received my undergraduate degree in meteorology, so my mind immediately went to how this could be a benefit to weather forecasting.

Weather modeling is an interesting use case. It requires a large number of cores with a low-latency interconnect, and it is very time sensitive. After all, what good is a one-hour weather forecast if it takes 90 minutes to run? And weather is a very local phenomenon. In order to resolve smaller-scale features without shrinking the domain or lengthening runtime, modelers must add more cores. A global weather model with 0.5 degree grid spacing can require as many as 50,000 cores.

At that large of a scale, and with the performance required to be operationally useful, a Cray supercomputer is an excellent fit. But the model by itself doesn’t mean much. The model data needs to be processed to generate products. This is where Azure services come in.

Website images are one obvious product of weather models. Image generation programs require small scale and can be done in parallel, so they’re great for using the elasticity of Azure virtual machines. The same can



ExpressRoute monitoring with Network Performance Monitor (NPM) is now generally available

We are excited to share the general availability of ExpressRoute monitoring with Network Performance Monitor (NPM). A few months ago, we announced ExpressRoute Monitor with NPM in public preview. Since then, we’ve seen lots of users monitor their Azure ExpressRoute private peering connections, and working with customers we’ve gathered a lot of great feedback. While we’re not done working to make ExpressRoute monitoring best in class, we’re ready and eager for everyone to get their hands on it. In this post, I’ll take you through some of the capabilities that ExpressRoute Monitor provides. To get started, watch a brief demo video explaining ExpressRoute monitoring capability in Network Performance Monitor.

Monitor connectivity to Azure VNETs over ExpressRoute

NPM can monitor the packet loss and network latency between your on-premises resources (branch offices, datacenters, and office sites) and Azure VNETs connected through ExpressRoute. You can set up alerts to be proactively notified whenever loss or latency crosses a threshold. In addition to viewing near real-time values and historical trends of the performance data, you can use the network state recorder to go back in time and view a particular network state, in order to investigate difficult-to-catch transient issues.

Get end-to-end