Category Archives: #AzureML

21 May

Enterprise Deployment Tips for Azure Data Science Virtual Machine (DSVM)

This post is authored by Gopi Kumar, Principal Program Manager at Microsoft.

The Data Science Virtual Machine (DSVM), a popular VM image on the Azure marketplace, is a purpose-built cloud-based environment with a host of preconfigured data and AI tools. It enables data scientists and AI developers to iterate quickly on high-quality predictive models and deep learning architectures, making them much more productive when developing their AI applications. The DSVM has been offered for over two years now and, during that time, it has attracted a wide range of users, from small startups to enterprises with large data science teams who use it as their core cloud development and experimentation environment for building production applications and models.

Deploying AI infrastructure at scale can be quite challenging for large enterprise teams. However, Azure Infrastructure provides several services supporting enterprise IT needs, such as security, scaling, reliability, availability, performance, and collaboration. The Data Science VM can readily leverage these services to support the deployment of large-scale, team-based enterprise data science and AI environments. We have assembled guidance for an initial list of common enterprise scenarios in a new DSVM documentation section dedicated to enterprise

21 May

Improving Medical Imaging Diagnostics Using Azure Machine Learning Package for Computer Vision

This post is by Ye Xing, Senior Data Scientist, Tao Wu, Principal Data Scientist Manager, and Patrick Buehler, Senior Data Scientist, at Microsoft.

The advancement of medical imaging, as in many other scientific disciplines, relies heavily on the latest advances in tools and methodologies that make rapid iterations possible. We recently witnessed this first-hand when we developed a deep learning model on the newly released Azure Machine Learning Package for Computer Vision (AML-CVP) and were able to improve upon a state-of-the-art algorithm in screening blinding retinal diseases. Our pipeline, based on AML-CVP, reduced misclassification by over 90% (from 3.9% down to 0.3%) without any parameter tuning. The deep learning model training was completed in 10 minutes over 83,484 images on the Azure Deep Learning Virtual Machine equipped with a single NVIDIA V100 GPU. This pipeline can be constructed quickly, with less than 20 lines of Python code, thanks to the high-level Python API provided by AML-CVP.
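For a sense of what a compact transfer-learning pipeline of this kind looks like, here is a minimal sketch in Keras. It is an illustrative stand-in, not the AML-CVP API used in the post, and the directory layout, class count (the four retinal OCT classes described in the paper), and hyperparameters are assumptions:

```python
# Illustrative transfer-learning sketch in Keras (a stand-in; the post used the
# Azure ML Package for Computer Vision, whose API is not reproduced here).
# Paths, the four-class setup, and hyperparameters are placeholder assumptions.
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Load an ImageNet-pretrained backbone, freeze it, and add a small head
# for the four retinal image classes.
base = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False
x = GlobalAveragePooling2D()(base.output)
outputs = Dense(4, activation="softmax")(x)
model = Model(inputs=base.input, outputs=outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Stream training images from a folder-per-class directory layout.
train_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "oct_images/train", target_size=(224, 224), batch_size=64, class_mode="categorical"
)
model.fit(train_gen, epochs=5)
```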

Our work was inspired by the paper “Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning”, published in Cell, a leading medical journal, in February 2018. The paper developed a deep learning AI system to identify two vision-threatening retinal diseases – choroidal

15 May

A Scalable End-to-End Anomaly Detection System using Azure Batch AI

This post is authored by Said Bleik, Senior Data Scientist at Microsoft.

In a previous post I showed how Batch AI can be used to train many anomaly detection models in parallel for IoT scenarios. Although model training tasks are usually the most demanding ones in AI applications, making predictions at scale on a continuous basis can be challenging as well. This is especially common in IoT applications, where data streams from many devices need to be processed and scored constantly, often in real time.
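As a rough illustration of what a single scoring step looks like in this setting, here is a minimal sketch that loads a previously trained anomaly detector and scores a batch of sensor readings. This is not the code from the GitHub walkthrough; the model type (IsolationForest), column names, and file paths are assumptions:

```python
# Minimal scoring sketch: load a previously trained anomaly detector and score
# a batch of incoming sensor readings. The IsolationForest model, feature
# columns, and file paths below are illustrative assumptions.
import joblib
import pandas as pd

model = joblib.load("models/device_042_isolation_forest.pkl")

batch = pd.read_csv("incoming/device_042_readings.csv")
features = batch[["temperature", "pressure", "vibration"]]

# IsolationForest.predict returns 1 for normal points and -1 for anomalies.
batch["is_anomaly"] = model.predict(features) == -1
batch.to_csv("scored/device_042_scored.csv", index=False)
```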

To complete the pipeline of an end-to-end solution, I’ve created a walkthrough on GitHub that covers submitting and scheduling prediction jobs in addition to model training. The solution comprises several Azure cloud services and Python code that interacts with those services. The scheduling component allows continuous training and scoring in a production environment. The diagram below shows the proposed solution architecture, whose main components are Azure services that can easily connect to each other through configuration or SDKs. This is a general solution and only one way of designing predictive maintenance solutions; in practice, you can replace any of these components with your favorite tools or services, or add more components to handle the complexities of

07 May

Azure Machine Learning Packages for Vision, Text and Forecasting in Public Preview

This post is authored by Matt Conners, Principal Program Manager, and Neta Haiby, Principal Program Manager at Microsoft.

Earlier today, at Build 2018, we made a set of Azure AI Platform announcements, including the public preview release of Azure Machine Learning Packages for Computer Vision, Text Analytics, and Forecasting. The Azure Machine Learning Packages are Python pip-installable extensions for Azure Machine Learning. The packages provide a wide range of functional APIs for innovative techniques that are otherwise complex and cumbersome to apply when solving data science problems in the domains of vision, text, and forecasting. These high-level APIs boost productivity for data scientists and AI developers and help them build high-quality, accurate models, using the algorithms built into the packages for tasks like feature generation, parameter tuning, and model selection. The Azure ML Packages enable rapid time to solution by abstracting away the pain points involved in model creation, deployment, and management.

Additionally, the Azure ML Packages give data scientists and AI developers the flexibility to use state-of-the-art technologies through interoperability with common frameworks such as Keras, scikit-learn, TensorFlow, and CNTK.
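For a sense of the kind of workflow these packages wrap behind higher-level APIs, here is a plain scikit-learn sketch of parameter tuning and model selection. This is not the package API itself, just the underlying task, and the dataset, estimator, and grid are arbitrary choices for illustration:

```python
# Plain scikit-learn sketch of parameter tuning and model selection --
# the kind of task the Azure ML Packages expose through higher-level APIs.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    cv=3,
)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```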

Why Use Azure Machine Learning Packages?

There are many great reasons to use Azure Machine

02 May

Kubernetes Load Testing

This post is authored by Daniel Grecoe, Senior Software Engineer at Microsoft.

Today, many platforms are moving toward hosting artificial intelligence models in self-managed container services such as Kubernetes. At Microsoft, this is a substantial change from Azure Machine Learning Studio, which provided all the model management and operationalization services for the user automatically.

To meet this need for self-managed container services, Microsoft has introduced the Azure Machine Learning Workbench tool along with the Azure services for Machine Learning Experimentation, Machine Learning Model Management, Azure Container Registry, and Azure Container Service.

With these new tools and services, data science teams now have the freedom of wider language and model selection when creating new AI services, coupled with a choice of the infrastructure those services are operationalized on. This choice enables a team to size the container service appropriately to meet the business requirements set for the model being operationalized, while controlling the costs associated with the service.

My recent blog post on Scaling Azure Container Service Clusters discussed determining the required size of a Kubernetes cluster based on formulae that take into account service latency, requests per second, and the hardware the service is operationalized on. The blog notes that the formulae
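As a rough back-of-the-envelope illustration of that kind of sizing calculation, here is a minimal sketch. The numeric values and the assumption that each pod serves one request at a time are mine, not taken from the original post:

```python
# Rough cluster sizing sketch: how many pods are needed to sustain a target
# request rate at a given per-request latency, assuming each pod serves one
# request at a time. Values below are illustrative placeholders.
import math

target_rps = 50          # requests per second the service must sustain
latency_seconds = 0.200  # observed latency per request on the chosen hardware
requests_per_pod = 1 / latency_seconds   # throughput of a single pod

pods_needed = math.ceil(target_rps / requests_per_pod)
print(f"{pods_needed} pods needed (plus headroom for spikes and node failures)")
```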

01 May

How to Develop a Currency Detection Model using Azure Machine Learning

This post is authored by Xiaoyong Zhu, Anirudh Koul and Wee Hyong Tok of Microsoft.

Introduction

How does one teach a machine to see?

Seeing AI is an exciting Microsoft research project that harnesses the power of Artificial Intelligence to open the visual world and describe nearby people, objects, text, colors and more using spoken audio. Designed for the blind and low vision community, it helps users understand more about their environment, including who and what is around them. Today, our iOS app has empowered users to complete over 5 million tasks unassisted, including many “first in a lifetime” experiences for the blind community, such as taking and posting photos of their friends on Facebook, independently identifying products when shopping at a store, reading homework to kids, and much more. To learn more about Seeing AI, visit our web page.

One of the most common needs of the blind community is the ability to recognize paper currency. Currency notes are usually inaccessible, being hard to recognize purely through our tactile senses. To address this need, the Seeing AI team built a real-time currency recognizer that uses spoken audio to identify the currency that is currently in

25 Apr

Transfer Learning for Text using Deep Learning Virtual Machine (DLVM)

This post is by Anusua Trivedi, Data Scientist, and Wee Hyong Tok, Data Scientist Manager, at Microsoft.

Motivation

Modern machine learning models, especially deep neural networks, can often benefit quite significantly from transfer learning. In computer vision, deep convolutional neural networks trained on large image classification datasets such as ImageNet have proved to be useful for initializing models on other vision tasks, such as object detection (Zeiler and Fergus, 2014).

But how can we leverage the transfer learning technique for text? In this blog post, we attempt a comprehensive study of the existing text transfer learning literature in the research community. We explore eight popular machine reading comprehension (MRC) algorithms (Figure 1). We evaluate and compare six of them – BIDAF, DOCQA, ReasoNet, R-NET, SynNet and OpenNMT. We initialize our models, pre-trained on different source question answering (QA) datasets, and show how standard transfer learning performs on a large target corpus. For creating a test corpus, we chose the book The Future Computed by Harry Shum and Brad Smith.
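For context on how such comparisons are typically scored, MRC systems are commonly evaluated with exact-match and token-level F1 over predicted answer spans; the post's own evaluation details are not reproduced here. A minimal sketch of those metrics:

```python
# Minimal sketch of the exact-match and token-level F1 metrics commonly used
# to compare machine reading comprehension (MRC) models on QA corpora.
from collections import Counter

def exact_match(prediction: str, reference: str) -> bool:
    return prediction.strip().lower() == reference.strip().lower()

def token_f1(prediction: str, reference: str) -> float:
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("cloud computing platform", "a cloud computing platform"))
```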

We compared the performance of the transfer learning approach for creating a QA system for this book using these pretrained MRC models. For our evaluation scenario, the

24 Apr

Deep Learning for Emojis with VS Code Tools for AI

This post is the first in a two-part series, and is authored by Erika Menezes, Software Engineer at Microsoft.

Visual content has always been a critical part of communication. Emojis are increasingly playing a crucial role in human dialogue conducted on leading social media and messaging platforms. Concise and fun to use, emojis can help improve communication between users and make dialogue systems more anthropomorphic and vivid.

We also see an increasing investment in chatbots that allow users to complete task-oriented services, such as purchasing auto insurance or movie tickets or checking in for flights, in a frictionless and personalized way right within messaging apps. Most such chatbot conversations, however, can seem rather different from typical conversational chats between humans. By allowing the use of emojis in a task completion context, we may be able to improve the conversational user experience (UX) and enable users to get their tasks accomplished in a faster and more intuitive way.

We present a deep learning approach that uses semantic representation of words (word2vec) and emojis (emoji2vec) to understand conversational human input in a task-oriented context.
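As a concrete, if simplified, illustration of working with these two embedding spaces, here is a minimal sketch using gensim. The file names are placeholders, the exact emoji key format depends on the emoji2vec release you download, and the shared 300-dimensional space is an assumption based on the published emoji2vec vectors:

```python
# Minimal sketch of combining word (word2vec) and emoji (emoji2vec) embeddings,
# assuming pre-trained vector files are available locally. File names and the
# exact emoji key format depend on the release you use.
import numpy as np
from gensim.models import KeyedVectors

words = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)
emojis = KeyedVectors.load_word2vec_format("emoji2vec.bin", binary=True)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# emoji2vec vectors are trained in the same 300-dimensional space as the
# GoogleNews word2vec vectors, so word/emoji similarities are meaningful.
print(cosine(words["airplane"], emojis["✈"]))
```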

In this blog post, we show how to use embeddings created via deep learning techniques

19 Apr

Deploying Deep Learning Models on Kubernetes with GPUs

This post is authored by Mathew Salvaris and Fidan Boylu Uz, Senior Data Scientists at Microsoft.

One of the major challenges that data scientists often face is closing the gap between training a deep learning model and deploying it at production scale. Training of these models is a resource intensive task that requires a lot of computational power and is typically done using GPUs. The resource requirement is less of a problem for deployment since inference tends not to pose as heavy a computational burden as training. However, for inference, other goals also become pertinent such as maximizing throughput and minimizing latency. When inference speed is a bottleneck, GPUs show considerable performance gains over CPUs. Coupled with containerized applications and container orchestrators like Kubernetes, it is now possible to go from training to deployment with GPUs faster and more easily while satisfying latency and throughput goals for production grade deployments.

In this tutorial, we provide step-by-step instructions to go from loading a pre-trained convolutional neural network model to creating a containerized web application that is hosted on a Kubernetes cluster with GPUs on Azure Container Service (AKS). AKS makes it quick and easy to deploy and manage containerized applications without much
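For a flavor of the kind of scoring web application that gets containerized in such a deployment, here is a minimal Flask sketch around a pre-trained Keras ResNet50. It is an illustrative stand-in, not the tutorial's exact code; the route name, port, and request format are assumptions:

```python
# Minimal scoring web service sketch: wrap an ImageNet-pretrained ResNet50 in a
# Flask endpoint. This is the kind of app that gets containerized and deployed
# to a GPU-backed Kubernetes cluster; names and routes here are illustrative.
import io

import numpy as np
from flask import Flask, jsonify, request
from PIL import Image
from tensorflow.keras.applications.resnet50 import (
    ResNet50, decode_predictions, preprocess_input)

app = Flask(__name__)
model = ResNet50(weights="imagenet")  # loaded once at startup

@app.route("/score", methods=["POST"])
def score():
    # Expect a raw image in the request body.
    image = Image.open(io.BytesIO(request.data)).convert("RGB").resize((224, 224))
    batch = preprocess_input(np.expand_dims(np.array(image, dtype="float32"), axis=0))
    preds = model.predict(batch)
    _, name, prob = decode_predictions(preds, top=1)[0][0]
    return jsonify({"label": name, "probability": float(prob)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```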

05 Apr

Inventory Optimization Solution in the Azure AI Gallery

This post is co-authored by Dmitry Pechyoni, Senior Data Scientist, Hong Lu and Chenhui Hu, Data Scientists, Praneet Solanki, Software Engineer, and Ilan Reiter, Principal Data Scientist Manager at Microsoft.

For retailers, inventory optimization is a critical task that facilitates production planning, cost reduction, and operations management. Tremendous business value can be derived from optimizing inventories. For example, optimizing the product ordering strategy can help minimize inventory cost and reduce stockout events, which also improves customer satisfaction. However, existing on-premises and cloud-based inventory optimization solutions provide only limited flexibility in inventory optimization strategies and, in many cases, cannot be customized to meet the specific business rules of each retailer.

We recently published a cloud-based inventory optimization solution for retail in the Azure AI Gallery. We designed this solution to be flexible, scalable and fully automated. Operations researchers and developers could use this solution as a baseline and further extend and customize it to the business goals and constraints of retailers.

Our inventory optimization solution generates product orders with optimized quantities and schedule, based on a given forecasted demand, storage and transportation costs, and a set of constraints. The optimized orders are stored in Azure and can be further integrated
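To make the idea of order optimization concrete, here is a toy sketch that chooses order quantities over a few periods to meet a forecasted demand at minimum ordering-plus-holding cost. It is a simplified single-product linear program, not the gallery solution's formulation; the demand figures, costs, and initial inventory are invented for illustration:

```python
# Toy inventory-ordering sketch with scipy: choose order quantities per period
# that satisfy a forecasted demand while minimizing ordering plus holding cost.
# Demand, costs, and the single-product setup are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

demand = np.array([120, 80, 150, 100])    # forecasted units per period
order_cost, holding_cost = 2.0, 0.5       # cost per unit ordered / held per period
T = len(demand)
initial_inventory = 50

# Decision vector x = [q_0..q_{T-1}, I_0..I_{T-1}] (orders, then end-of-period inventory).
c = np.concatenate([np.full(T, order_cost), np.full(T, holding_cost)])

# Inventory balance: I_t - I_{t-1} - q_t = -demand_t, with I_{-1} = initial_inventory.
A_eq = np.zeros((T, 2 * T))
b_eq = -demand.astype(float)
for t in range(T):
    A_eq[t, t] = -1.0              # -q_t
    A_eq[t, T + t] = 1.0           # +I_t
    if t > 0:
        A_eq[t, T + t - 1] = -1.0  # -I_{t-1}
b_eq[0] += initial_inventory       # move the known starting inventory to the RHS

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (2 * T))
print("orders per period:", np.round(res.x[:T], 1))
```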