Category Archives: Artificial Intelligence

26 Mar

Accelerated AI with Azure Machine Learning service on Azure Data Box Edge

Along with the general availability of Azure Data Box Edge that was announced today, we are announcing the preview of Azure Machine Learning hardware accelerated models on Data Box Edge. In real-world applications, the majority of the world’s data is used at the edge. For example, images and videos collected from factories, retail stores, or hospitals are used for manufacturing defect analysis, inventory out-of-stock detection, and diagnostics. Applying machine learning models to the data on Data Box Edge provides lower latency and savings on bandwidth costs, while enabling real-time insights and speed to action for critical business decisions.

Azure Machine Learning service is already a generally available, end-to-end, enterprise-grade, and compliant data science platform. Azure Machine Learning service enables data scientists to simplify and accelerate the building, training, and deployment of machine learning models. All these capabilities are accessed from your favorite Python environment using the latest open-source frameworks, such as PyTorch, TensorFlow, and scikit-learn. These models can run today on CPUs and GPUs, but this preview expands that to field programmable gate arrays (FPGA) on Data Box Edge.
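As a minimal sketch of that workflow, the following shows how a training run might be submitted with the Azure ML Python SDK. The workspace configuration, experiment name, compute target, and training script are illustrative assumptions, not details from the announcement.

```python
# A minimal sketch of submitting a TensorFlow training run with the Azure ML
# Python SDK (azureml-sdk). Names below (experiment, compute target, script)
# are illustrative assumptions.
from azureml.core import Workspace, Experiment
from azureml.train.dnn import TensorFlow

ws = Workspace.from_config()  # reads workspace details from config.json
exp = Experiment(workspace=ws, name="defect-detection")  # hypothetical name

# Point the estimator at a training script and a GPU compute target.
estimator = TensorFlow(source_directory="./train",
                       compute_target="gpu-cluster",   # hypothetical cluster
                       entry_script="train.py",
                       use_gpu=True)

run = exp.submit(estimator)
run.wait_for_completion(show_output=True)
```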

What is in this preview?

This preview enhances Azure Machine Learning service by enabling you to train a TensorFlow model


20 Mar

Microsoft and NVIDIA bring GPU-accelerated machine learning to more developers

With ever-increasing data volume and latency requirements, GPUs have become an indispensable tool for doing machine learning (ML) at scale. This week, we are excited to announce two integrations that Microsoft and NVIDIA have built together to unlock industry-leading GPU acceleration for more developers and data scientists.

- Azure Machine Learning service is the first major cloud ML service to integrate RAPIDS, an open source software library from NVIDIA that allows traditional machine learning practitioners to easily accelerate their pipelines with NVIDIA GPUs.
- ONNX Runtime has integrated the NVIDIA TensorRT acceleration library, enabling deep learning practitioners to achieve lightning-fast inferencing regardless of their choice of framework.

These integrations build on an already-rich infusion of NVIDIA GPU technology on Azure to speed up the entire ML pipeline.

“NVIDIA and Microsoft are committed to accelerating the end-to-end data science pipeline for developers and data scientists regardless of their choice of framework,” says Kari Briski, Senior Director of Product Management for Accelerated Computing Software at NVIDIA. “By integrating NVIDIA TensorRT with ONNX Runtime and RAPIDS with Azure Machine Learning service, we’ve made it easier for machine learning practitioners to leverage NVIDIA GPUs across their data science workflows.”

Azure Machine Learning service integration with NVIDIA


18 Mar

Microsoft and NVIDIA extend video analytics to the intelligent edge

Artificial Intelligence (AI) algorithms are becoming more intelligent and sophisticated every day, allowing IoT devices like cameras to bridge the physical and digital worlds. The algorithms can trigger alerts and take actions automatically — from finding available parking spots and missing items in a retail store to detecting anomalies on solar panels or workers approaching hazardous zones.

Processing these state-of-the-art AI algorithms in a datacenter requires a stable, high-bandwidth connection to deliver video feeds to the cloud. However, these cameras are often located in remote areas with unreliable connectivity, and streaming to the cloud may not be sensible given bandwidth, security, and regulatory constraints.

Microsoft and NVIDIA are partnering on a new approach for intelligent video analytics at the edge to transform raw, high-bandwidth videos into lightweight telemetry. This delivers real-time performance and reduces compute costs for users. The “cameras-as-sensors” and edge workloads are managed locally by Azure IoT Edge and the camera stream processing is powered by NVIDIA DeepStream. Once the videos are converted, the data can be ingested to the cloud using Azure IoT Hub.
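As a hedged sketch of that last step, here is how an IoT Edge module might forward the lightweight telemetry upstream to Azure IoT Hub. It assumes the azure-iot-device Python package and that DeepStream has already reduced the stream to structured events; the event shape and output route name are illustrative.

```python
# A sketch of forwarding telemetry from an IoT Edge module to Azure IoT Hub.
# Assumes the azure-iot-device package; the event payload and output route
# name below are illustrative assumptions.
import json
from azure.iot.device import IoTHubModuleClient, Message

client = IoTHubModuleClient.create_from_edge_environment()
client.connect()

# A lightweight event distilled from the raw video stream.
event = {"camera": "dock-3", "object": "person", "zone": "hazard"}
client.send_message_to_output(Message(json.dumps(event)), "telemetry")

client.disconnect()
```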

The companies plan to offer customers enterprise-ready devices running DeepStream in the Azure IoT device catalog, and the NVIDIA DeepStream module will soon be made available


18 Mar

Azure Machine Learning service now supports NVIDIA’s RAPIDS

Azure Machine Learning service is the first major cloud ML service to support NVIDIA’s RAPIDS, a suite of software libraries for accelerating traditional machine learning pipelines with NVIDIA GPUs.

Just as GPUs revolutionized deep learning through unprecedented training and inferencing performance, RAPIDS enables traditional machine learning practitioners to unlock game-changing performance with GPUs. With RAPIDS on Azure Machine Learning service, users can accelerate the entire machine learning pipeline, including data processing, training, and inferencing, with GPUs from the NC_v3, NC_v2, ND, or ND_v2 families. Users can unlock performance gains of more than 20X (with 4 GPUs), slashing training times from hours to minutes and dramatically reducing time-to-insight.

The following figure compares training times on CPU and GPUs (Azure NC24s_v3) for a gradient boosted decision tree model using XGBoost. As shown below, performance gains increase with the number of GPUs. In the Jupyter notebook linked below, we’ll walk through how to reproduce these results step by step using RAPIDS on Azure Machine Learning service.
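As a rough sketch of the GPU path in that benchmark, the following trains an XGBoost model on GPU. It assumes a RAPIDS environment whose XGBoost build accepts cuDF DataFrames; the dataset path and column names are illustrative.

```python
# A sketch of GPU-accelerated training with RAPIDS and XGBoost. Assumes a
# RAPIDS environment (cuDF plus a cuDF-aware XGBoost build); file and column
# names are illustrative assumptions.
import cudf
import xgboost as xgb

df = cudf.read_csv("train.csv")            # load data directly into GPU memory
X = df.drop(columns=["label"])
y = df["label"]

dtrain = xgb.DMatrix(X, label=y)
params = {
    "tree_method": "gpu_hist",             # GPU-accelerated histogram algorithm
    "objective": "binary:logistic",
    "max_depth": 8,
}
model = xgb.train(params, dtrain, num_boost_round=100)
```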

How to use RAPIDS on Azure Machine Learning service

Everything you need to use RAPIDS on Azure Machine Learning service can be found on GitHub.

The above repository consists of a master Jupyter Notebook that uses


18 Mar

ONNX Runtime integration with NVIDIA TensorRT in preview

Today we are excited to open source the preview of the NVIDIA TensorRT execution provider in ONNX Runtime. With this release, we are taking another step towards open and interoperable AI by enabling developers to easily leverage industry-leading GPU acceleration regardless of their choice of framework. Developers can now tap into the power of TensorRT through ONNX Runtime to accelerate inferencing of ONNX models, which can be exported or converted from PyTorch, TensorFlow, and many other popular frameworks.

Microsoft and NVIDIA worked closely to integrate the TensorRT execution provider with ONNX Runtime and have validated support for all the ONNX models in the ONNX Model Zoo. With the TensorRT execution provider, ONNX Runtime delivers better inferencing performance on the same hardware compared to generic GPU acceleration. We have seen up to 2X improved performance using the TensorRT execution provider on internal workloads from Bing Multimedia services.
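For a sense of the developer experience, here is a minimal sketch of running an ONNX model with the TensorRT execution provider enabled. It assumes an ONNX Runtime build compiled with TensorRT support and an image model with a 1x3x224x224 input; both are assumptions for illustration.

```python
# A sketch of inferencing with the TensorRT execution provider. Assumes an
# ONNX Runtime build with TensorRT enabled; the model file and input shape
# are illustrative assumptions.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")
# Prefer TensorRT, falling back to CUDA and then CPU for unsupported subgraphs.
sess.set_providers(["TensorrtExecutionProvider",
                    "CUDAExecutionProvider",
                    "CPUExecutionProvider"])

input_name = sess.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # dummy image batch
outputs = sess.run(None, {input_name: x})
print(outputs[0].shape)
```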

How it works

ONNX Runtime together with its TensorRT execution provider accelerates the inferencing of deep learning models by parsing the graph and allocating specific nodes for execution by the TensorRT stack on supported hardware. The TensorRT execution provider interfaces with the TensorRT libraries that are preinstalled on the platform to process the ONNX sub-graph


14 Mar

Maximize existing vision systems in quality assurance with Cognitive AI

Quality assurance matters to manufacturers. The reputation and bottom line of a company can be adversely affected if defective products are released. If a defect is not detected, and the flawed product is not removed early in the production process, the damage can run to hundreds of dollars per unit. To mitigate this, many manufacturers install cameras to monitor their products as they move along the production line. But the data may not always be useful. For example, cameras alone often struggle to identify defects when large volumes of images move past at high speed. Now, a solution provider has developed a way to integrate such existing systems into quality assurance management. Mariner, with its Spyglass solution, uses AI from Azure to achieve visibility over the entire line, and to prevent product defects before they become a problem.

Quality assurance expenses

Quality assurance (QA) management in manufacturing is time-consuming and expensive, but critical. The effects of poor quality are substantial, as they result in:

- Re-work costs
- Production inefficiencies
- Wasted materials
- Expensive and embarrassing recalls

And worst of all, dissatisfied customers who demand returns.

Multiple variables across multiple facilities

Too many variables make product defect analysis and prediction difficult. Manufacturers need


14 Mar

Hardware innovation for data growth challenges at cloud-scale

The Open Compute Project (OCP) Global Summit 2019 kicks off today in San Jose, where a vibrant and growing community is sharing the latest in innovation to make hardware more efficient, flexible, and scalable.

For Microsoft, our journey with OCP began in 2014 when we joined the foundation and contributed the very same server and datacenter designs that power our global Azure cloud, but it didn’t stop there. Each year at the OCP Summit, we contribute innovations that address the most pressing challenges for our industry: a modular and globally compatible server design and universal motherboard with Project Olympus, hardware security with Project Cerberus, and a next-generation specification for SSD storage with Project Denali.

This year we’re turning our attention to the exploding volume of data being created daily. Data is at the heart of digital transformation, and companies are leveraging data to improve customer experiences, open new markets, make employees and processes more productive, and create new sources of competitive advantage.

Data – the engine of Digital Transformation

The Global Datasphere*, which quantifies and analyzes the amount of data created, captured, and replicated in any given year


07 Mar

Intel and Microsoft bring optimizations to deep learning on Azure

This post is co-authored with Ravi Panchumarthy and Mattson Thieme from Intel.

We are happy to announce that Microsoft and Intel are partnering to bring optimized deep learning frameworks to Azure. These optimizations are available in a new offering on the Azure marketplace called the Intel Optimized Data Science VM for Linux (Ubuntu).

Over the last few years, deep learning has become the state of the art for several machine learning and cognitive applications. Deep learning is a machine learning technique that leverages neural networks with multiple layers of non-linear transformations, so that the system can learn from data and build accurate models for a wide range of machine learning problems. Computer vision, language understanding, and speech recognition are all examples of deep learning at play today. Innovations in deep neural networks in these domains have enabled these algorithms to reach human-level performance in vision, speech recognition, and machine translation. Advances in this field continually excite data scientists, organizations, and media outlets alike. Yet for many organizations and data scientists, doing deep learning well at scale remains challenging due to technical limitations.

Often, default builds of popular deep learning frameworks like TensorFlow are not fully optimized for training and
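As a hedged illustration of the kind of tuning an MKL-optimized build benefits from, the following sets common OpenMP/MKL knobs and TensorFlow 1.x session threading. The thread counts are illustrative assumptions and depend on the VM’s core count.

```python
# A sketch of CPU-threading configuration for an MKL-optimized TensorFlow 1.x
# build. The specific thread counts below are illustrative assumptions and
# should be tuned to the VM size.
import os
import tensorflow as tf

# OpenMP/MKL environment settings commonly recommended for CPU training.
os.environ["OMP_NUM_THREADS"] = "24"
os.environ["KMP_BLOCKTIME"] = "1"
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"

# TensorFlow 1.x session-level parallelism knobs.
config = tf.ConfigProto(intra_op_parallelism_threads=24,
                        inter_op_parallelism_threads=2)
sess = tf.Session(config=config)
```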


05 Mar

Conversational AI updates for March 2019

We are thrilled to share the release of Bot Framework SDK version 4.3, and to use this opportunity to provide additional updates on the Conversational AI releases from Microsoft.

New LINE Channel

Microsoft Bot Framework lets you connect with your users wherever they are. We offer thirteen supported channels, including popular messaging apps like Skype, Microsoft Teams, Slack, Facebook Messenger, Telegram, Kik, and others. We have listened to our developer community and addressed one of its most frequently requested features: LINE is now available as a new channel. LINE is a popular messaging app with hundreds of millions of users in Japan, Taiwan, Thailand, Indonesia, and other countries.

To enable your bot in the new channel, follow the “Connect a bot to LINE” instructions. You can also navigate to your bot in the Azure portal. Go to the Channels blade, click on the LINE icon, and follow the instructions there.

SDK 4.3

In the 4.3 release, the team focused on improving and simplifying message and activity handling. The Bot Framework Activity schema is the underlying schema used to define the interaction model for bots. With the 4.3 release, we have streamlined the handling of some activity types in the Bot
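As a minimal sketch of what activity handling looks like in code, here is an echo bot written against the SDK’s Python flavor. The announcement above does not include code, so treat the class and method names as illustrative of the ActivityHandler pattern rather than specifics of the 4.3 release.

```python
# A sketch of message-activity handling with the Bot Framework Python SDK
# (botbuilder-core). Illustrative of the ActivityHandler pattern; not taken
# from the 4.3 announcement itself.
from botbuilder.core import ActivityHandler, MessageFactory, TurnContext


class EchoBot(ActivityHandler):
    async def on_message_activity(self, turn_context: TurnContext):
        # Echo the incoming message activity back to the user.
        await turn_context.send_activity(
            MessageFactory.text(f"You said: {turn_context.activity.text}")
        )
```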


26 Feb

Running Cognitive Services on Azure IoT Edge

This blog post is co-authored by Emmanuel Bertrand, Senior Program Manager, Azure IoT.

We recently announced Azure Cognitive Services in containers for Computer Vision, Face, Text Analytics, and Language Understanding. You can read more about Azure Cognitive Services containers in this blog, “Bringing AI to the edge.”

Today, we are happy to announce support for running the Azure Cognitive Services containers for Text Analytics and Language Understanding on edge devices with Azure IoT Edge. This means all your workloads can run locally, where your data is being generated, while keeping the simplicity of the cloud for managing them remotely, securely, and at scale.

Whether you lack a reliable internet connection, want to save on bandwidth costs, have very low latency requirements, or are dealing with sensitive data that must be analyzed on-site, Azure IoT Edge with the Cognitive Services containers gives you consistency with the cloud. It lets you run your analysis on-site and gives you a single pane of glass for operating all your sites.

These container images are directly available to try as IoT Edge modules on the Azure Marketplace:

Key Phrase Extraction extracts key talking points and highlights in text either
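As a hedged sketch, here is how an application might call the Key Phrase Extraction container once it is running locally. The endpoint URL (host, port, and API version path) is an assumption about the container’s defaults, and the sample text is illustrative.

```python
# A sketch of calling the Key Phrase Extraction container over its local REST
# endpoint. The URL below (host, port, API version) is an assumption; the
# request body follows the Text Analytics API's documents format.
import requests

endpoint = "http://localhost:5000/text/analytics/v2.0/keyPhrases"
payload = {
    "documents": [
        {"id": "1", "language": "en",
         "text": "The camera detected an out-of-stock shelf in aisle four."}
    ]
}

resp = requests.post(endpoint, json=payload)
resp.raise_for_status()
print(resp.json())  # key phrases per document
```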
