Category Archives: Artificial Intelligence

10 Jan

Implement predictive analytics for manufacturing with Symphony Industrial AI

Technology allows manufacturers to generate more data than traditional systems and users can digest. Predictive analytics, enabled by big data and cloud technologies, can take advantage of this data and provide new and unique insights into the health of manufacturing equipment and processes. While most manufacturers understand the value of predictive analytics, many find it challenging to introduce into the line of business. Symphony Industrial AI has a mission: to bring the promise of the Industrial IoT (IIoT) and artificial intelligence (AI) to reality by delivering real value to its customers through predictive operations solutions. Two of Symphony's solutions are specially tailored to the process manufacturing sector (chemicals, refining, pulp and paper, metals and mining, and oil and gas).

There are two solutions offered by Symphony Industrial AI:

Asset 360 AI
Process 360 AI

The first focuses on existing machinery, and the second on common processes.

Problem: the complexity of data science

Manufacturers have deep knowledge of their manufacturing processes, but they typically lack the expertise of data scientists, who have a deep understanding of statistical modeling, a fundamental component of most predictive analytics applications. Even when the application of predictive analytics is a success, most deployments fail to provide users with
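The statistical modeling mentioned above can be as simple as comparing live sensor readings against a healthy baseline. The sketch below is a generic, illustrative example of that idea in Python; it is not Symphony Industrial AI's product or method, and the sensor values and threshold are invented:

```python
# Illustrative only: a very simple statistical health check on equipment
# sensor data. Not Symphony Industrial AI's modeling approach; readings
# and threshold are made up.
import numpy as np

def flag_anomalies(baseline, new_readings, z_threshold=3.0):
    """Flag new readings that deviate strongly from a healthy baseline."""
    baseline = np.asarray(baseline, dtype=float)
    mean, std = baseline.mean(), baseline.std()
    z_scores = np.abs((np.asarray(new_readings, dtype=float) - mean) / std)
    return z_scores > z_threshold

# Vibration readings (mm/s) from a pump while it was known to be healthy ...
healthy = [2.1, 2.3, 2.0, 2.2, 2.4, 2.1, 2.2, 2.3]
# ... and three fresh readings, one of which looks suspicious.
print(flag_anomalies(healthy, [2.2, 2.3, 9.8]))  # [False False  True]
```

Real predictive maintenance models are far richer than a z-score check, but the principle is the same: learn what "normal" looks like, then score new data against it.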


09 Jan

CES 2019: Microsoft partners, customers showcase breakthrough innovation with Azure IoT, AI, and Mixed Reality

Each year at CES, we see dozens of new product innovations that bring additional convenience, entertainment, efficiency – or completely new experiences to our daily lives. By bringing the power of the cloud to connected devices, the Internet of Things (IoT) and artificial intelligence (AI) have played an ever-expanding role in driving the connected product business opportunity. Today, smart thermostats, speakers, TVs, appliances, cars, and more are no longer serving an “early adopter” market – they are entering the mainstream – as people look for technology to help enrich how they plan and experience their daily lives.

Our Azure IoT and AI strategy enables customers to build these new products and solutions at scale using the power of the intelligent cloud and intelligent edge. The Azure IoT platform helps customers build consistent AI-based applications and experiences, from the cloud to the edge, that are adaptive and responsive to physical environments – from smart cities and spaces to connected products in homes and on the manufacturing floor. Our Azure AI services combine the latest advances in technologies like machine learning and deep learning with our comprehensive data, Azure cloud, and productivity platform, and a trusted, enterprise-grade approach.

We are continuing


08 Jan

Multi-modal topic inferencing from videos

Any organization that has a large media archive struggles with the same challenge – how can we transform our media archives into business value? Media content management is hard, and so is content discovery at scale. Categorizing content by topic is an intuitive approach that makes it easier for people to search for the content they need. However, topics are usually deduced from the content and don't necessarily appear explicitly in the video. For example, content focused on the topic of 'healthcare' may never explicitly mention the word 'healthcare', which makes categorization an even harder problem to solve. Many organizations turn to tagging their content manually, which is expensive, time-consuming, error-prone, requires periodic curation, and does not scale.

To make this process far more consistent and effective, in both cost and time, we are introducing multi-modal topic inferencing in Video Indexer. This new capability intuitively indexes media content using a cross-channel model to automatically infer topics. The model does so by projecting the video concepts onto three different ontologies – IPTC, Wikipedia, and the Video Indexer hierarchical topic ontology (see more information below). The model uses transcription (spoken words), OCR content (visual text), and celebrities recognized
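For readers who want to see the inferred topics programmatically, here is a minimal sketch of reading them back from a Video Indexer index via the public REST API. The endpoint shape and JSON field names reflect our understanding of the API and may differ; the account ID, video ID, and access token are placeholders:

```python
# A minimal sketch of pulling inferred topics out of a Video Indexer index.
# Endpoint shape and field names are assumptions based on the public
# Video Indexer REST API; IDs and token below are placeholders.
import requests

LOCATION = "trial"               # or your Azure region
ACCOUNT_ID = "<account-id>"      # placeholder
VIDEO_ID = "<video-id>"          # placeholder
ACCESS_TOKEN = "<access-token>"  # placeholder

url = (f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}"
       f"/Videos/{VIDEO_ID}/Index")
index = requests.get(url, params={"accessToken": ACCESS_TOKEN}).json()

# Each video in the index carries an 'insights' section; inferred topics
# (if present) include a name, a confidence score, and the source ontology.
for video in index.get("videos", []):
    for topic in video.get("insights", {}).get("topics", []):
        print(topic.get("name"), topic.get("confidence"), topic.get("referenceType"))
```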


20 Dec

Conversational AI updates – December 2018

This blog post was co-authored by Vishwac Sena Kannan, Principal Program Manager, FUSE Labs.

We are thrilled to present the release of Bot Framework SDK version 4.2 and we want to use this opportunity to provide additional updates on Conversational-AI releases from Microsoft.

In the SDK 4.2 release, the team focused on enhancing monitoring, telemetry, and analytics capabilities of the SDK by improving the integration with Azure App Insights. As with any release, we fixed a number of bugs, continued to improve Language Understanding (LUIS) and QnA integration, and enhanced our engineering practices. There were additional updates across the other areas like language, prompt and dialogs, and connectors and adapters. You can review all the changes that went into 4.2 in the detailed changelog. For more information, view the list of all closed issues.

Telemetry updates for SDK 4.2

With the SDK 4.2 release, we started improving the built-in monitoring, telemetry, and analytics capabilities provided by the SDK. Our goal is to give developers the ability to understand overall bot health, detailed reports about the bot's conversation quality, and tools to understand where conversations fall short. To do that, we decided to further enhance the built-in
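The built-in integration ships with the Bot Framework SDK itself. Purely as an illustration of the kind of per-turn events a bot can send to Application Insights, here is a hedged Python sketch using the applicationinsights package; the event name, properties, and instrumentation key are invented placeholders, not part of the SDK's own telemetry schema:

```python
# Illustration only: emitting custom bot telemetry to Application Insights.
# This is not the Bot Framework SDK's built-in integration; it just shows
# the kind of per-turn events a bot might log.
from applicationinsights import TelemetryClient

tc = TelemetryClient("<instrumentation-key>")  # placeholder key

def log_turn(conversation_id, intent, score):
    """Record one conversation turn so dashboards can track intent quality."""
    tc.track_event(
        "BotMessageReceived",                                   # invented event name
        {"conversationId": conversation_id, "topIntent": intent},
        {"intentScore": score},
    )
    tc.flush()

log_turn("conv-123", "BookFlight", 0.87)
```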


17 Dec

Fine-tune natural language processing models using Azure Machine Learning service

In the natural language processing (NLP) domain, pre-trained language representations have traditionally been a key ingredient for several important use cases, such as named entity recognition (Sang and Meulder, 2003), question answering (Rajpurkar et al., 2016), and syntactic parsing (McClosky et al., 2010).

The intuition for utilizing a pre-trained model is simple: a deep neural network trained on a large corpus, say all of Wikipedia, should have enough knowledge about the underlying relationships between different words and sentences. It should also be easy to adapt to a different domain, such as the medical or financial domain, with better performance than training from scratch.

Recently, Devlin et al. published a paper introducing BERT (Bidirectional Encoder Representations from Transformers), which achieves new state-of-the-art results on 11 NLP tasks using the pre-trained approach mentioned above. In this technical blog post, we want to show how customers can efficiently and easily fine-tune BERT for their custom applications using the Azure Machine Learning service. We have open-sourced the code on GitHub.
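As a rough idea of what submitting such a fine-tuning job looks like with the Azure Machine Learning Python SDK, here is a minimal sketch. It is not the exact setup from the open-sourced repository; the compute target name, entry script, script arguments, and package list are placeholders:

```python
# A minimal sketch of submitting a BERT fine-tuning run with the Azure ML
# Python SDK. Compute target, entry script, arguments, and packages are
# placeholders, not the open-sourced repo's exact configuration.
from azureml.core import Workspace, Experiment
from azureml.train.dnn import PyTorch

ws = Workspace.from_config()  # reads config.json downloaded from the portal

estimator = PyTorch(
    source_directory="./finetune",        # folder containing the training script
    entry_script="run_classifier.py",     # placeholder script name
    compute_target="gpu-cluster",         # placeholder AmlCompute cluster
    script_params={"--task_name": "MRPC", "--num_train_epochs": "3"},
    use_gpu=True,
    pip_packages=["pytorch-pretrained-bert"],
)

run = Experiment(ws, "bert-finetuning").submit(estimator)
run.wait_for_completion(show_output=True)
```

The training script itself loads the pre-trained BERT weights and continues training on the task-specific data; Azure ML handles provisioning the GPU cluster, capturing metrics, and storing the resulting model.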

Intuition behind BERT

The intuition behind the new language model, BERT, is simple yet powerful. Researchers believe that a large enough deep neural network model, with large enough training corpus, could capture


10 Dec

Cloud Commercial Communities webinar and podcast update

Welcome to the Cloud Commercial Communities monthly webinar and podcast update. Each month the team focuses on core programs, updates, trends, and technologies that Microsoft partners and customers need to know to increase success using Azure and Dynamics. Make sure you catch a live webinar and participate in live Q&A. If you miss a session, you can review it on demand. Also consider subscribing to the industry podcasts to keep up to date with industry news.

Happening in December

Webinars

Transform Your Business with AI at Microsoft

December 4, 2018 at 11:00 AM Pacific Time

Explore AI industry trends and how the Microsoft AI platform can empower your business processes with Azure AI Services including bots, cognitive services, and Azure machine learning.

Azure Marketplace and AppSource Publisher Onboarding and Support

December 11, 2018 at 11:00 AM Pacific Time

Learn the publisher onboarding process, best practices around common blockers, plus support resources available.

Build Scalable Cloud Applications with Containers on Azure

December 17, 2018 at 1:00 PM Pacific Time

Overview of Azure Container Registry, Azure Container Instances (ACI), Azure Kubernetes Services (AKS), and release automation tools with live demos.

Podcasts

Blockchain, Artificial Intelligence, Machine Learning – what does it mean


04 Dec

Azure AI – accelerating the pace of AI adoption for organizations

AI is fueling the next wave of transformative innovations that will change the world. With Azure AI, we empower organizations to easily:

Use machine learning to build predictive models that optimize business processes
Utilize advanced vision, speech, and language capabilities to build applications that deliver personalized and engaging experiences
Apply knowledge mining to uncover latent insights from vast repositories of files

Building on our announcements at Microsoft Ignite in September, I’m excited to share several new announcements we are making at Microsoft Connect(); to enable organizations to easily apply AI to transform their businesses.

Azure Machine Learning service general availability

Today, we are happy to announce the general availability of Azure Machine Learning service. With Azure Machine Learning service, you can quickly and easily build, train, and deploy machine learning models anywhere from the intelligent cloud to the intelligent edge. With features like automated machine learning, organizations can accelerate their model development by identifying suitable algorithms and machine learning pipelines faster. This helps organizations significantly reduce development time, from days to hours. With hyper-parameter tuning, organizations can tune parameters to enhance model accuracy.
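As an illustration of what automated machine learning looks like from the Azure Machine Learning Python SDK, here is a minimal sketch on a toy dataset. Parameter names follow the SDK as of this release and may vary in later versions; the experiment name and iteration count are arbitrary:

```python
# A rough sketch of automated machine learning with the Azure ML Python SDK.
# Parameter names may differ across SDK versions; experiment name and
# iteration count are arbitrary choices for illustration.
from azureml.core import Workspace, Experiment
from azureml.train.automl import AutoMLConfig
from sklearn.datasets import load_iris

ws = Workspace.from_config()

data = load_iris()                      # toy dataset standing in for real data
X_train, y_train = data.data, data.target

automl_config = AutoMLConfig(
    task="classification",
    primary_metric="accuracy",
    X=X_train,
    y=y_train,
    iterations=20,                      # how many algorithm/pipeline combinations to try
    n_cross_validations=5,
)

run = Experiment(ws, "automl-demo").submit(automl_config, show_output=True)
best_run, fitted_model = run.get_output()   # best pipeline found by automated ML
```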

Once the model is developed, organizations can easily deploy and manage their models in the cloud and


29 Nov

Using AI and IoT for disaster management

In countries around the world, natural disasters have been much in the news. If you had a hunch such calamities were increasing, you’re right. In 2017, hurricanes, earthquakes, and wildfires cost $306 billion worldwide, nearly double 2016’s losses of $188 billion.

Natural disasters, driven by climate change, extreme weather, and aging and poorly designed infrastructure, among other factors, pose a significant risk to human life and communities. Globally, $94 trillion in new investment is needed to keep pace with population growth, with a large portion of that going toward repair of the built environment. These projects have long cycles due to government authorization processes, huge financial investments, and multi-year building efforts. We need to think creatively about how to accelerate these processes now.

National, state, and local governments and organizations are also grappling with how to update disaster management practices to keep up. The Internet of Things (IoT), artificial intelligence (AI), and machine learning can help. These technologies can improve readiness and lessen the human and infrastructure costs of major events when they do occur. Disaster modeling is an important start and can help shape comprehensive programs to reduce disasters and respond to them effectively.

Anticipating disasters with better data


27 Nov

Join us on November 28 for our next meetup: Adopting Emerging Tech in Government

Join us as we discuss the leading edge of emerging technology in government at the next Microsoft Azure Government DC meetup, Adopting Emerging Tech in Government, on Wednesday, November 28, 2018 from 6:00 to 8:30 PM Eastern Time at 1776 in Washington, DC. If you’re not in the DC-metro area, we invite you to join us via livestream starting at 6:35 PM Eastern Time at aka.ms/azuregovmeetuplive.

You’ll hear how agencies are approaching strategy, challenges, use cases, and workforce readiness as they leverage emerging tech, including blockchain, artificial intelligence, machine learning, and augmented reality, to innovate for their missions.

RSVP and join us to gain insight from innovators across government agencies who are exploring ways to apply emerging tech to empower their workforce and deliver innovative services to the public.

Featured speakers include:

COL David Robinson, Military Deputy (Acting), Defense Innovation Unit Experimental
Ju-Lie McReynolds, Program Manager, US Digital Service, Health & Human Services
Meagan Metzger, CEO, Dcode
Sujit Mohanty, DoD CTO, Microsoft Federal
Jeff Butte, Senior Program Manager, Microsoft Azure Global Government
Karina Homme, Senior Director, Microsoft Azure Government

About the Microsoft Azure Government user community

The Azure Government DC user community was created as a place to bring


26 Nov

Running Cognitive Service containers

Last week we announced a preview of Docker support for Microsoft Azure Cognitive Services, with an initial set of containers ranging from Computer Vision and Face to Text Analytics. Here we will focus on trying things out: firing up a Cognitive Services container and seeing what it can do. For more details on which containers are available and what they offer, read the blog post “Getting started with these Azure Cognitive Service Containers.”

Installing Docker

You can run Docker in many contexts, and for production environments you will definitely want to look at Azure Kubernetes Service (AKS) or Azure Service Fabric. In subsequent blogs we will dive into doing this in detail, but for now all we want to do is fire up a container on a local dev box, which works great for dev/test scenarios.

You can run Docker Desktop on most dev boxes; just download it and follow the instructions. Once installed, make sure that Docker is configured to have at least 4 GB of RAM (one CPU is sufficient). In Docker for Windows, this is configured in the Docker settings.

Getting the images

The Text Analytics images are available directly from Docker Hub as follows:

Key phrase extraction extracts key talking
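Once one of these containers is running locally, calling it looks much like calling the hosted Text Analytics API, only against localhost. The sketch below assumes the key phrase extraction container has already been started with Docker and mapped to port 5000; the endpoint path mirrors the Text Analytics v2.0 API and may vary by image version:

```python
# A small sketch of calling a locally running key phrase extraction container.
# Assumes the container was started with Docker and mapped to port 5000; the
# endpoint path mirrors the Text Analytics v2.0 API and may differ by version.
import requests

documents = {
    "documents": [
        {"id": "1", "language": "en",
         "text": "The new Cognitive Services containers let us run text analytics on-premises."}
    ]
}

resp = requests.post(
    "http://localhost:5000/text/analytics/v2.0/keyPhrases",
    json=documents,
)
resp.raise_for_status()

for doc in resp.json()["documents"]:
    print(doc["id"], doc["keyPhrases"])
```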
