This post is authored by Said Bleik, Senior Data Scientist at Microsoft.
In a previous post I showed how Batch AI can be used to train many anomaly detection models in parallel for IoT scenarios. Although model training tasks are usually the most demanding ones in AI applications, making predictions at scale on a continuous basis can be challenging as well. This is especially common in IoT applications where data streams of many devices need to be processed and scored constantly, often in real-time.
To complete the pipeline of an end-to-end solution, I’ve created a walkthrough on GitHub that covers submitting and scheduling prediction jobs in addition to model training. The solution comprises several Azure cloud services and Python code that interacts with those services. The scheduling component allows continuous training and scoring in a production environment. The diagram below shows the proposed solution architecture, where the main components are Azure services that can easily connect to each other through configuration or SDKs. This is a general solution and only one way of designing predictive maintenance solutions. In practice, you can replace any of these components with your favorite tools/services or add more components to handle the complexities of
This post is authored by Matt Conners, Principal Program Manager, and Neta Haiby, Principal Program Manager at Microsoft.
Earlier today, at Build 2018, we made a set of Azure AI Platform announcements, including the public preview release of Azure Machine Learning Packages for Computer Vision, Text Analytics, and Forecasting. The Azure Machine Learning Packages are pip-installable Python extensions for Azure Machine Learning. The packages provide a wide range of functional APIs for innovative but often complex and cumbersome techniques used to solve data science problems in the domains of vision, text, and forecasting. These high-level APIs boost productivity for data scientists and AI developers and help them build high-quality, accurate models using the new algorithms built into these packages for tasks like feature generation, parameter tuning, and model selection. The Azure ML Packages enable rapid time to solution by abstracting away the pain points involved in model creation, deployment, and management.
Additionally, the Azure ML Packages give data scientists and AI developers the flexibility to use state-of-the-art technologies through interoperability with common frameworks such as Keras, scikit-learn, TensorFlow, and CNTK.
Why Use Azure Machine Learning Packages?
There are many great reasons to use Azure Machine
This post is authored by Daniel Grecoe, Senior Software Engineer at Microsoft.
Today many platforms are moving towards hosting artificial intelligence models in self-managed container services such as Kubernetes. At Microsoft, this is a substantial change from Azure Machine Learning Studio, which provided all model management and operationalization services for the user automatically.
To meet this need for self-managed container services, Microsoft has introduced the Azure Machine Learning Workbench tool along with the Azure services Machine Learning Experimentation, Machine Learning Model Management, Azure Container Registry, and Azure Container Service.
With these new tools and services, data science teams now have the freedom of wider language and model selection when creating new AI services, coupled with a choice of the infrastructure the service is operationalized on. This choice enables the team to appropriately size the container service to meet the business requirements set for the model being operationalized, while controlling the costs associated with the service.
My recent blog post on Scaling Azure Container Service Clusters discussed how to determine the required size of a Kubernetes cluster using formulae that take into account service latency, requests per second, and the hardware the service is operationalized on. The blog notes that the formulae
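The exact formulae from that post aren’t reproduced here, but a minimal sketch of this kind of sizing calculation, assuming a simple Little’s-law-style relationship between per-request latency and sustainable requests per second, might look like:

```python
import math

def required_replicas(target_rps: float, latency_s: float,
                      concurrency_per_replica: int = 1) -> int:
    """Estimate replicas needed so the cluster keeps up with target_rps.

    By Little's law, a replica that handles `concurrency_per_replica`
    requests at a time sustains roughly concurrency / latency requests
    per second. A sketch only -- not the exact formula from the post.
    """
    per_replica_rps = concurrency_per_replica / latency_s
    return math.ceil(target_rps / per_replica_rps)

# e.g. 200 req/s at 100 ms per request, one request at a time per pod:
print(required_replicas(200, 0.1))     # 20 replicas
# with 4 concurrent requests per pod, only 5 replicas are needed:
print(required_replicas(200, 0.1, 4))  # 5 replicas
```

Real sizing would also leave headroom for traffic spikes and account for CPU/GPU limits per node, which is where the post’s formulae come in.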
This post is authored by Xiaoyong Zhu, Anirudh Koul and Wee Hyong Tok of Microsoft.
How does one teach a machine to see?
Seeing AI is an exciting Microsoft research project that harnesses the power of Artificial Intelligence to open the visual world and describe nearby people, objects, text, colors and more using spoken audio. Designed for the blind and low vision community, it helps users understand more about their environment, including who and what is around them. Today, our iOS app has empowered users to complete over 5 million tasks unassisted, including many “first in a lifetime” experiences for the blind community, such as taking and posting photos of their friends on Facebook, independently identifying products when shopping at a store, reading homework to kids, and much more. To learn more about Seeing AI, you can visit our web page.
One of the most common needs of the blind community is the ability to recognize paper currency. Currency notes are usually inaccessible, being hard to recognize purely through our tactile senses. To address this need, the Seeing AI team built a real-time currency recognizer that uses spoken audio to identify the currency that is currently in
This post is by Anusua Trivedi, Data Scientist, and Wee Hyong Tok, Data Scientist Manager, at Microsoft.
Modern machine learning models, especially deep neural networks, can often benefit quite significantly from transfer learning. In computer vision, deep convolutional neural networks trained on large image classification datasets such as ImageNet have proved useful for initializing models on other vision tasks, such as object detection (Zeiler and Fergus, 2014).
But how can we leverage the transfer learning technique for text? In this blog post, we attempt a comprehensive study of the existing text transfer learning literature in the research community. We explore eight popular machine reading comprehension (MRC) algorithms (Figure 1). We evaluate and compare six of these – BIDAF, DOCQA, ReasoNet, R-NET, SynNet and OpenNMT. We initialize our models, pre-trained on different source question answering (QA) datasets, and show how standard transfer learning can achieve results on a large target corpus. For creating a test corpus, we chose the book The Future Computed by Harry Shum and Brad Smith.
We compared the performance of the transfer learning approach for creating a QA system for this book using these pretrained MRC models. For our evaluation scenario, the
This post is the first in a two-part series, and is authored by Erika Menezes, Software Engineer at Microsoft.
Visual content has always been a critical part of communication. Emojis are increasingly playing a crucial role in human dialogue conducted on leading social media and messaging platforms. Concise and fun to use, emojis can help improve communication between users and make dialogue systems more anthropomorphic and vivid.
We also see an increasing investment in chatbots that allow users to complete task-oriented services such as purchasing auto insurance or movie tickets, or checking in for flights, in a frictionless and personalized way from right within messaging apps. Most such chatbot conversations, however, can seem rather different from typical conversational chats between humans. By allowing the use of emojis in a task completion context, we may be able to improve the conversational user experience (UX) and enable users to get their tasks accomplished in a faster and more intuitive way.
We present a deep learning approach that uses semantic representation of words (word2vec) and emojis (emoji2vec) to understand conversational human input in a task-oriented context.
In this blog post, we show how to use embeddings created via deep learning techniques
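As a toy illustration of the idea, the sketch below uses hand-made three-dimensional vectors in place of real pretrained word2vec and emoji2vec embeddings (which are typically far higher-dimensional) and maps an emoji in user input to its nearest vocabulary word by cosine similarity; the vocabulary and vector values are invented for the example:

```python
import numpy as np

# Toy vectors standing in for pretrained word2vec / emoji2vec embeddings;
# a real system would load the published models into a shared space.
embeddings = {
    "pizza":  np.array([0.9, 0.1, 0.0]),
    "flight": np.array([0.0, 0.2, 0.9]),
    "🍕":     np.array([0.8, 0.2, 0.1]),  # emoji2vec-style entry
    "✈️":     np.array([0.1, 0.1, 0.9]),
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_word(emoji, vocab=("pizza", "flight")):
    """Map an emoji to its closest word in the shared embedding space."""
    return max(vocab, key=lambda w: cosine(embeddings[emoji], embeddings[w]))

print(nearest_word("🍕"))   # pizza
print(nearest_word("✈️"))   # flight
```

In a task-oriented bot, this kind of mapping lets “🍕 please” be understood the same way as “pizza please” before intent classification.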
This post is authored by Mathew Salvaris and Fidan Boylu Uz, Senior Data Scientists at Microsoft.
One of the major challenges that data scientists often face is closing the gap between training a deep learning model and deploying it at production scale. Training of these models is a resource intensive task that requires a lot of computational power and is typically done using GPUs. The resource requirement is less of a problem for deployment since inference tends not to pose as heavy a computational burden as training. However, for inference, other goals also become pertinent such as maximizing throughput and minimizing latency. When inference speed is a bottleneck, GPUs show considerable performance gains over CPUs. Coupled with containerized applications and container orchestrators like Kubernetes, it is now possible to go from training to deployment with GPUs faster and more easily while satisfying latency and throughput goals for production grade deployments.
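As a rough sketch of how such latency and throughput goals can be checked, the small timing harness below benchmarks any inference callable; the dummy model and figures are stand-ins for illustration, not measurements from the tutorial:

```python
import time

def benchmark(infer, batch, n_runs=100):
    """Measure average latency and throughput of an inference callable.

    `infer` stands in for any model's predict function; swapping in a
    GPU-served model lets you compare the same numbers against CPU.
    """
    start = time.perf_counter()
    for _ in range(n_runs):
        infer(batch)
    elapsed = time.perf_counter() - start
    latency_ms = 1000 * elapsed / n_runs        # avg time per batch
    throughput = n_runs * len(batch) / elapsed  # images per second
    return latency_ms, throughput

# Dummy "model": pretend scoring a batch takes ~1 ms.
dummy = lambda batch: time.sleep(0.001)
lat, tput = benchmark(dummy, batch=[0] * 8, n_runs=20)
print(f"{lat:.1f} ms/batch, {tput:.0f} images/s")
```

Measuring both numbers matters because they trade off: larger batches usually raise throughput but also raise per-request latency.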
In this tutorial, we provide step-by-step instructions to go from loading a pre-trained Convolutional Neural Network model to creating a containerized web application that is hosted on a Kubernetes cluster with GPUs on Azure Container Service (AKS). AKS makes it quick and easy to deploy and manage containerized applications without much
This post is co-authored by Dmitry Pechyoni, Senior Data Scientist, Hong Lu and Chenhui Hu, Data Scientists, Praneet Solanki, Software Engineer, and Ilan Reiter, Principal Data Scientist Manager at Microsoft.
For retailers, inventory optimization is a critical task to facilitate production planning, cost reduction, and operation management. Tremendous business value can be derived by optimizing inventories. For example, optimizing the product ordering strategy can help minimize inventory cost and reduce stockout events, which also improves customer satisfaction. However, existing on-premises and cloud-based inventory optimization solutions only provide limited flexibility in inventory optimization strategies and, in many cases, cannot be customized to meet the specific business rules of each retailer.
We recently published a cloud-based inventory optimization solution for retail in the Azure AI Gallery. We designed this solution to be flexible, scalable, and fully automated. Operations researchers and developers can use this solution as a baseline and further extend and customize it to the business goals and constraints of retailers.
Our inventory optimization solution generates product orders with optimized quantities and schedule, based on a given forecasted demand, storage and transportation costs, and a set of constraints. The optimized orders are stored in Azure and can be further integrated
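The Gallery solution’s actual optimization formulation is not reproduced here, but as a toy stand-in, the classic newsvendor rule illustrates how an order quantity can be derived from forecasted demand and cost parameters; the demand scenarios and prices below are invented for the example:

```python
def newsvendor_order(demand_samples, unit_cost, unit_price, salvage=0.0):
    """Single-period order quantity maximizing expected profit.

    The optimal quantity is the critical-fractile quantile of demand,
    F^-1(cu / (cu + co)), where cu = unit_price - unit_cost is the
    underage (stockout) cost and co = unit_cost - salvage the overage
    (leftover) cost. A textbook sketch, not the Gallery formulation.
    """
    cu = unit_price - unit_cost   # profit lost per unit of unmet demand
    co = unit_cost - salvage      # cost of each leftover unit
    fractile = cu / (cu + co)
    ordered = sorted(demand_samples)
    # Empirical quantile of the forecasted demand distribution.
    idx = min(int(fractile * len(ordered)), len(ordered) - 1)
    return ordered[idx]

# Forecasted weekly demand scenarios; buy at 4, sell at 10, no salvage.
demand = [80, 90, 100, 110, 120, 130, 140, 150, 160, 170]
print(newsvendor_order(demand, unit_cost=4, unit_price=10))  # 140
```

A production solution adds the storage and transportation costs and constraints the post mentions, which is why a real solver is used instead of a closed-form rule.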
This post is authored by Anusua Trivedi, Carlos Pessoa, Vivek Gupta & Wee Hyong Tok from the Cloud AI Platform team at Microsoft.
AI has emerged as one of the most disruptive forces behind digital transformation and it is revolutionizing the way we live and work. AI-powered experiences are augmenting human capabilities and transforming how we live, work, and play – and they have enormous potential in allowing us to lead healthier lives as well.
AI is empowering clinicians with deep insights that are helping them make better decisions, and the potential to save lives and money is tremendous. At Microsoft, the Health NExT project is looking at innovative approaches to fuse research, AI and industry expertise to enable a new wave of healthcare innovations. The Microsoft AI platform empowers every developer to innovate and accelerate the development of real-time intelligent apps on edge devices. There are a couple of advantages of running intelligent real-time apps on edge devices – you get:
Lowered latency, for local decision making.
Reduced reliance on internet connectivity.
Imagine environments where there’s limited or no connectivity, whether it’s because of lack of communications infrastructure or because of the sensitivity of the
This post is authored by Said Bleik, Senior Data Scientist at Microsoft.
In the IoT world, it’s not uncommon that you’d want to monitor thousands of devices across different sites to ensure normal behavior. Devices can be as small as microcontrollers or as big as aircraft engines and might have sensors attached to them to collect various types of measurements that are of interest. These measurements often carry signals that indicate whether the devices are functioning as expected or not. Sensor data can be used to train predictive models that serve as alarm systems or device monitors that warn when a malfunction or failure is imminent.
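As a minimal stand-in for the kind of per-sensor model described above, the sketch below fits a simple z-score detector on each sensor’s historical readings and flags values far from the learned normal range; the sensor IDs, readings, and threshold are made up for illustration:

```python
import statistics

class ZScoreDetector:
    """Minimal per-sensor anomaly model: flag readings far from the mean.

    A toy stand-in for the per-sensor models trained in parallel with
    Batch AI; one instance is fit on each sensor's history.
    """
    def __init__(self, threshold=3.0):
        self.threshold = threshold

    def fit(self, readings):
        self.mean = statistics.mean(readings)
        self.std = statistics.pstdev(readings) or 1e-9  # avoid div by zero
        return self

    def is_anomaly(self, value):
        return abs(value - self.mean) / self.std > self.threshold

# One independent model per sensor -- which is what makes the training
# embarrassingly parallel across thousands of sensors.
history = {"sensor-1": [20.1, 20.3, 19.8, 20.0, 20.2],
           "sensor-2": [75.0, 74.8, 75.3, 75.1, 74.9]}
models = {sid: ZScoreDetector().fit(data) for sid, data in history.items()}
print(models["sensor-1"].is_anomaly(35.0))  # True: far outside normal range
print(models["sensor-2"].is_anomaly(75.2))  # False
```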
In what follows, I will walk you through a simple scalable solution that can handle thousands or even millions of sensors in an IoT setting. I will show how you can train many anomaly detection models (one model for each sensor) in parallel using Azure’s Batch AI. I’ve created a complete training pipeline that includes: a local data simulation app to generate data, an Azure Event Hubs data ingestion service, an Azure Stream Analytics service for real-time processing/aggregation of the readings, an Azure SQL Database to store the processed data, an Azure Batch AI