Category Archives: Cognitive Services

16 May

Accelerate bot development with Bot Framework SDK and other updates

https://azure.microsoft.com/blog/accelerate-bot-development-with-bot-framework-sdk-and-other-updates/


03 May

AI-first content understanding, now across more types of content for even more use cases

This post is authored by Elad Ziklik, Principal Program Manager, Applied AI.

Today, data isn’t the barrier to innovation; usable data is. Real-world information is messy and carries valuable knowledge in ways that are not readily usable, requiring extensive time, resources, and data science expertise to process. With Knowledge Mining, it’s our mission to close the gap between data and knowledge.

We’re making it easier to uncover latent insights across all your content with:

Azure Search’s cognitive search capability (general availability)
Form Recognizer (preview)

Cognitive search and expansion into new scenarios

Announced at Microsoft Build 2018, Azure Search’s cognitive search capability uniquely helps developers apply a set of composable cognitive skills to extract knowledge from a wide range of content. Deep integration of cognitive skills within Azure Search enables the application of facial recognition, key phrase extraction, sentiment analysis, and other skills to content with a single click. This knowledge is organized and stored in a search index, enabling new experiences for exploring the data.
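To make the idea of composable skills concrete, here is a minimal sketch that defines a skillset through the Azure Search REST API from Python. The service name, admin key, skillset name, and field paths are placeholders, the API version is illustrative, and the two skills shown are the built-in key phrase extraction and sentiment skills.

```python
import requests

# Placeholders for a hypothetical search service and skillset.
SERVICE = "my-search-service"
ADMIN_KEY = "<admin-api-key>"
URL = (f"https://{SERVICE}.search.windows.net/"
       "skillsets/blog-skillset?api-version=2019-05-06")

# A skillset chains composable cognitive skills over each indexed document;
# here, key phrase extraction and sentiment analysis on the content field.
skillset = {
    "description": "Extract key phrases and sentiment during indexing",
    "skills": [
        {
            "@odata.type": "#Microsoft.Skills.Text.KeyPhraseExtractionSkill",
            "context": "/document",
            "inputs": [{"name": "text", "source": "/document/content"}],
            "outputs": [{"name": "keyPhrases", "targetName": "keyPhrases"}],
        },
        {
            "@odata.type": "#Microsoft.Skills.Text.SentimentSkill",
            "context": "/document",
            "inputs": [{"name": "text", "source": "/document/content"}],
            "outputs": [{"name": "score", "targetName": "sentimentScore"}],
        },
    ],
}

resp = requests.put(URL, json=skillset, headers={"api-key": ADMIN_KEY})
resp.raise_for_status()
print("Skillset created, status:", resp.status_code)
```

An indexer then maps the skill outputs (keyPhrases, sentimentScore) into index fields so the extracted knowledge becomes searchable alongside the original content.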

Cognitive search, now generally available, delivers:

Faster performance – Improved throughput, with processing speeds up to 30 times faster than in preview; tasks that previously took an hour now complete in only a couple of minutes.


03 May

A deep dive into what’s new with Azure Cognitive Services

This blog post was co-authored by Tina Coll, Senior Product Marketing Manager, Azure Cognitive Services.

Microsoft Build 2019 marks an important milestone for the evolution of Azure Cognitive Services with the introduction of new services and capabilities for developers. Azure empowers developers to make reinforcement learning real for businesses with the launch of Personalizer. Personalizer, along with Anomaly Detector and Content Moderator, is part of the new Decision category of Cognitive Services that provide recommendations to enable informed and efficient decision-making for users.
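Under the hood, Personalizer is a Rank-and-Reward loop: the application asks the service to rank candidate actions for the current context, shows the winning action, and later reports a reward score that feeds the reinforcement learning model. Here is a minimal sketch of that loop over the REST API; the endpoint, key, action IDs, and features are placeholders.

```python
import requests

# Placeholders for a hypothetical Personalizer resource.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
HEADERS = {"Ocp-Apim-Subscription-Key": "<subscription-key>"}

# Step 1: ask Personalizer to rank candidate actions given context features.
rank_request = {
    "contextFeatures": [{"timeOfDay": "morning", "device": "mobile"}],
    "actions": [
        {"id": "article-sports", "features": [{"topic": "sports"}]},
        {"id": "article-finance", "features": [{"topic": "finance"}]},
    ],
}
rank = requests.post(f"{ENDPOINT}/personalizer/v1.0/rank",
                     json=rank_request, headers=HEADERS).json()
print("Show the user:", rank["rewardActionId"])

# Step 2: report how well the chosen action worked (a score between 0 and 1),
# so the underlying reinforcement learning loop can improve future rankings.
event_id = rank["eventId"]
requests.post(f"{ENDPOINT}/personalizer/v1.0/events/{event_id}/reward",
              json={"value": 1.0}, headers=HEADERS)
```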

Available now in preview and general availability (GA):

Preview

Cognitive service APIs:

Personalizer – creates personalized user experiences
Conversation transcription – transcribes in-person meetings in real time
Form Recognizer – automates data entry
Ink Recognizer – unlocks the potential of digital ink content

Container support, so businesses can run AI models at the edge and closer to the data (a minimal sketch of calling a locally hosted container follows the lists below):

Speech Services (Speech to Text & Text to Speech)
Anomaly Detector
Form Recognizer

Generally available

Neural Text-to-Speech
Computer Vision Read
Text Analytics Named Entity Recognition
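As a sketch of the container scenario, the example below calls an Anomaly Detector container assumed to be running locally on port 5000 and to mirror the hosted v1.0 REST surface; the host, port, and time series values are placeholders.

```python
import requests
from datetime import datetime, timedelta

# Assumes the Anomaly Detector container is listening locally on port 5000
# and exposes the same REST paths as the hosted service.
LOCAL_ENDPOINT = "http://localhost:5000"

# A small daily series with one clear spike.
start = datetime(2019, 4, 1)
values = [32, 31, 33, 30, 120, 32, 31, 30, 33, 31, 32, 30]
series = [{"timestamp": (start + timedelta(days=i)).strftime("%Y-%m-%dT00:00:00Z"),
           "value": v} for i, v in enumerate(values)]

resp = requests.post(
    f"{LOCAL_ENDPOINT}/anomalydetector/v1.0/timeseries/entire/detect",
    json={"series": series, "granularity": "daily"})
resp.raise_for_status()

# The response flags each point; only the spike should come back as an anomaly.
for point, is_anomaly in zip(series, resp.json()["isAnomaly"]):
    if is_anomaly:
        print("Anomaly at", point["timestamp"], "value", point["value"])
```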

Cognitive Services span the categories of Vision, Speech, Language, Search, and Decision, offering the most comprehensive portfolio in the market for developers who want to embed the ability to see, hear, translate, decide and more into


03 May

LaLiga entertains millions with Azure-based conversational AI

For LaLiga, keeping fans entertained and engaged is a top priority. And when it comes to fans, the Spanish football league has them in droves, with approximately 1.6 billion social media followers around the world. So any time it introduces a new feature, forum, or app for fans, instant global popularity is almost guaranteed. And while this is great news for LaLiga, it also poses technical challenges—nobody wants systems crashing or going unresponsive when millions of people are trying out a fun new app.

When LaLiga chose to develop a personal digital assistant running on Microsoft Azure, its developers took careful steps to ensure optimal performance in the face of huge user volume in multiple languages across a variety of voice platforms. Specifically, the league used Azure to build a conversational AI solution capable of accommodating the quirks of languages and nicknames to deliver a great experience across multiple channels and handle a global volume of millions of users.

Along the way, some valuable lessons emerged for tackling a deployment of this scope and scale.

Accommodating the quirks of languages and nicknames

The LaLiga virtual assistant has launched for Google Assistant and Skype, and it will eventually support 11


02 May

Making AI real for every developer and every organization

https://azure.microsoft.com/blog/making-ai-real-for-every-developer-and-every-organization/


24 Apr

Dear Spark developers: Welcome to Azure Cognitive Services

This post was co-authored by Mark Hamilton, Sudarshan Raghunathan, Chris Hoder, and the MMLSpark contributors.

Integrating the power of Azure Cognitive Services into your big data workflows on Apache Spark™

Today at Spark AI Summit 2019, we’re excited to introduce a new set of models in the SparkML ecosystem that make it easy to leverage Azure Cognitive Services at terabyte scales. With only a few lines of code, developers can embed Cognitive Services within their existing distributed machine learning pipelines in Spark ML. Additionally, these contributions allow Spark users to chain or pipeline services together with deep networks, gradient boosted trees, and any SparkML model, and apply these hybrid models in elastic and serverless distributed systems.
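As a minimal PySpark sketch of the idea, the snippet below wires the Text Analytics sentiment service into a DataFrame using MMLSpark’s TextSentiment transformer; the import path and setter names follow the MMLSpark cognitive module and may differ slightly across releases, and the subscription key and region are placeholders.

```python
from pyspark.sql import SparkSession
# MMLSpark wraps Cognitive Services as SparkML transformers; the exact import
# path and setters may vary by release, so treat this as a sketch.
from mmlspark.cognitive import TextSentiment

spark = SparkSession.builder.getOrCreate()

# A toy DataFrame standing in for text already sitting in Spark at scale.
df = spark.createDataFrame(
    [("en", "The new release is fantastic!"),
     ("en", "The service kept timing out.")],
    ["language", "text"],
)

sentiment = (TextSentiment()
             .setSubscriptionKey("<text-analytics-key>")   # placeholder key
             .setLocation("eastus")                        # placeholder region
             .setLanguageCol("language")
             .setTextCol("text")
             .setOutputCol("sentiment"))

# The transformer is just another SparkML stage, so it can be chained with
# featurizers, gradient boosted trees, or any other stage in a Pipeline.
sentiment.transform(df).select("text", "sentiment").show(truncate=False)
```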

From image recognition and object detection to speech recognition, translation, and text-to-speech, Azure Cognitive Services make it easy for developers to add intelligent capabilities to their applications in any scenario. To date, more than a million developers have already discovered and tried Cognitive Services to accelerate breakthrough experiences in their applications.

Azure Cognitive Services on Apache Spark™

Cognitive Services on Spark enable working with Azure’s Intelligent Services at massive scales with the Apache Spark™ distributed computing ecosystem. The Cognitive Services on


15 Apr

QnA Maker updates – April 2019

We are excited to provide several updates for the QnA Maker service. To see previous releases for Conversational AI from Microsoft in March, see this post.

New Bot Framework v4 Template for QnA Maker

The QnA Maker service lets you easily create and manage a knowledge base from your data, including FAQ pages, support URLs, PDFs, and doc files. You can test and publish your knowledge base and then connect it to a bot using a Bot Framework sample or template. With this update we have simplified the bot creation process by letting you create a bot from your knowledge base without any code or settings changes. Find more details on creating a QnA bot on our tutorials page.
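For context on what a connected bot ultimately calls, here is a minimal sketch of querying a published knowledge base through the generateAnswer endpoint; the runtime host, knowledge base ID, and endpoint key are placeholders that appear on the publish page.

```python
import requests

# Placeholders shown on the QnA Maker publish page after publishing.
RUNTIME_HOST = "https://<your-qnamaker-resource>.azurewebsites.net"
KB_ID = "<knowledge-base-id>"
ENDPOINT_KEY = "<endpoint-key>"

resp = requests.post(
    f"{RUNTIME_HOST}/qnamaker/knowledgebases/{KB_ID}/generateAnswer",
    headers={"Authorization": f"EndpointKey {ENDPOINT_KEY}"},
    json={"question": "How do I reset my password?", "top": 1},
)
resp.raise_for_status()

# Each answer carries a confidence score, useful for falling back to a default
# reply when the knowledge base has no good match.
for answer in resp.json()["answers"]:
    print(round(answer["score"], 1), answer["answer"])
```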

After you publish your knowledge base, you can create a bot from the publish page with the Create Bot button. If you have previously created bots, you can click on “View all” to see all the bots that are linked to your current subscription.

This takes you to a bot creation template in the Azure portal with all of your knowledge base details pre-filled. Your KB ID is connected to the template automatically, and your


26 Mar

https://azure.microsoft.com/blog/new-updates-to-azure-ai-expand-ai-capabilities-for-developers/

As companies increasingly look to transform their businesses with AI, we continue to add improvements to Azure AI to make it easy for developers and data scientists to deploy, manage, and secure AI functions directly into their applications with a


05 Mar

Conversational AI updates for March 2019

We are thrilled to share the release of Bot Framework SDK version 4.3 and use this opportunity to provide additional updates for the Conversational AI releases from Microsoft.

New LINE Channel

Microsoft Bot Framework lets you connect with your users wherever they are. We offer thirteen supported channels, including popular messaging apps like Skype, Microsoft Teams, Slack, Facebook Messenger, Telegram, Kik, and others. We have listened to our developer community and addressed one of the most frequently requested features by adding LINE as a new channel. LINE is a popular messaging app with hundreds of millions of users in Japan, Taiwan, Thailand, Indonesia, and other countries.

To enable your bot in the new channel, follow the “Connect a bot to LINE” instructions. You can also navigate to your bot in the Azure portal. Go to the Channels blade, click on the LINE icon, and follow the instructions there.

SDK 4.3

In the 4.3 release, the team focused on improving and simplifying message and activity handling. The Bot Framework Activity schema is the underlying schema used to define the interaction model for bots. With the 4.3 release, we have streamlined the handling of some activity types in the Bot


28 Feb

Cognitive Services Speech SDK 1.3 – February update

Developers can now access the latest Cognitive Services Speech SDK, which supports:

Selection of the input microphone through the AudioConfig class
Expanded support for Debian 9
Unity in C# (beta)
Additional sample code

Read the updated Speech Services documentation to get started today.

What’s new

The Speech SDK supports selection of the input microphone through the AudioConfig class, meaning you can stream audio data to the Speech Service from a non-default microphone. For more details, see the documentation and the how-to guide on selecting an audio input device with the Speech SDK. This is not yet available from JavaScript.
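Here is a minimal sketch of the non-default-microphone scenario using the Python Speech SDK package (the same AudioConfig concept applies in the SDK’s other languages); the subscription key, region, and device ID are placeholders, and the device ID string is platform specific, as described in the how-to guide above.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholders for a hypothetical Speech resource and audio device.
speech_config = speechsdk.SpeechConfig(subscription="<speech-key>",
                                       region="<region>")

# Pass a platform-specific device ID instead of using the default microphone.
audio_config = speechsdk.audio.AudioConfig(device_name="<device-id>")

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config,
                                        audio_config=audio_config)

# Recognize a single utterance from the selected microphone.
result = recognizer.recognize_once()
print(result.text)
```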

The Speech SDK now also supports Unity in a beta version. Since this is new functionality, please provide feedback through the issue section in the GitHub sample repository. This release supports Unity on Windows x86 and x64 (desktop or Universal Windows Platform applications), and Android (ARM32/64, x86). More information is available in our Unity quickstart.

Samples

The following new content is available in our sample repository.

Samples for AudioConfig.FromMicrophoneInput
Python samples for intent recognition and translation
Samples for using the Connection object in iOS
Java samples for translation with audio output
New sample for use of the Batch
