We are thrilled to share the release of Bot Framework SDK version 4.3 and use this opportunity to provide additional updates for the Conversational AI releases from Microsoft.
New LINE Channel
Microsoft Bot Framework lets you connect with your users wherever your users are. We offer thirteen supported channels, including popular messaging apps like Skype, Microsoft Teams, Slack, Facebook Messenger, Telegram, Kik, and others. We have listened to our developer community and addressed one of the most frequently requested features – added LINE as a new channel. LINE is a popular messaging app with hundreds of millions of users in Japan, Taiwan, Thailand, Indonesia, and other countries.
To enable your bot in the new channel, follow the “Connect a bot to LINE” instructions. You can also navigate to your bot in the Azure portal. Go to the Channels blade, click on the LINE icon, and follow the instructions there.
In the 4.3 release, the team focused on improving and simplifying message and activity handling. The Bot Framework Activity schema is the underlying schema used to define the interaction model for bots. With the 4.3 release, we have streamlined the handling of some activity types in the Bot Framework SDK.
Developers can now access the latest Cognitive Services Speech SDK which now supports:
Selection of the input microphone through the AudioConfig class
Expanded support for Debian 9
Unity in C# (beta)
Additional sample code
Read the updated Speech Services documentation to get started today.
The Speech SDK now also supports Unity in a beta version. Since this is new functionality, please provide feedback through the issue section in the GitHub sample repository. This release supports Unity on Windows x86 and x64 (desktop or Universal Windows Platform applications), and Android (ARM32/64, x86). More information is available in our Unity quickstart.
The following new content is available in our sample repository.
Samples for AudioConfig.FromMicrophoneInput
Python samples for intent recognition and translation
Samples for using the Connection object in iOS
Java samples for translation with audio output
New sample for use of the Batch
This blog was co-authored by Lei Zhang, Principal Research Manager, Computer Vision
You can now extract more insights and unlock new workflows from your images with the latest enhancements to Cognitive Services’ Computer Vision service.
1. Enrich insights with expanded tagging vocabulary
Computer Vision has more than doubled the types of objects, situations, and actions it can recognize per image.
2. Automate cropping with new object detection feature
Easily automate cropping and conduct basic counting of what you need from an image with the new object detection feature. Detect thousands of real-life or man-made objects in images. Each object is now highlighted by a bounding box denoting its location in the image.
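The bounding boxes described above arrive in the service's JSON response as an `objects` array. The sketch below shows how that response could be consumed for counting and cropping; the response shape mirrors the documented detect-objects output, but the sample data itself is invented for illustration.

```python
# Parse a Computer Vision detect-objects style response (sample data is
# hypothetical; the real response comes from the Analyze/Detect endpoint).
from collections import Counter

def summarize_objects(response: dict) -> Counter:
    """Count detected objects by name for basic counting scenarios."""
    return Counter(obj["object"] for obj in response.get("objects", []))

def crop_boxes(response: dict) -> list:
    """Return (left, top, width, height) tuples for automated cropping."""
    return [
        (r["x"], r["y"], r["w"], r["h"])
        for obj in response.get("objects", [])
        for r in [obj["rectangle"]]
    ]

sample = {
    "objects": [
        {"rectangle": {"x": 25, "y": 43, "w": 172, "h": 140}, "object": "dog"},
        {"rectangle": {"x": 210, "y": 30, "w": 90, "h": 100}, "object": "dog"},
        {"rectangle": {"x": 5, "y": 5, "w": 60, "h": 60}, "object": "ball"},
    ]
}

print(summarize_objects(sample))  # counts per detected object type
print(crop_boxes(sample)[0])      # first bounding box: (25, 43, 172, 140)
```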
3. Monitor brand presence with new brand detection feature
You can now track logo placement of thousands of global brands from the consumer electronics, retail, manufacturing, and entertainment industries.
With these enhancements, you can:
Do at-scale image and video-frame indexing, making your media content searchable. If you’re in media, entertainment, advertising, or stock photography, rich image and video metadata can unlock productivity for your business. Derive insights from social media and advertising campaigns by understanding the content of images and videos and
This blog post is co-authored by Emmanuel Bertrand, Senior Program Manager, Azure IoT.
We recently announced Azure Cognitive Services in containers for Computer Vision, Face, Text Analytics, and Language Understanding. You can read more about Azure Cognitive Services containers in this blog, “Bringing AI to the edge.”
Today, we are happy to announce support for running the Azure Cognitive Services containers for Text Analytics and Language Understanding on edge devices with Azure IoT Edge. This means that all your workloads can run locally, where your data is being generated, while keeping the simplicity of the cloud to manage them remotely, securely, and at scale.
Whether you lack a reliable internet connection, want to save on bandwidth costs, have very low latency requirements, or are dealing with sensitive data that needs to be analyzed on-site, Azure IoT Edge with the Cognitive Services containers gives you consistency with the cloud: you can run your analysis on-site and use a single pane of glass to operate all your sites.
These container images are directly available to try as IoT Edge modules on the Azure Marketplace:
Key Phrase Extraction extracts key talking points and highlights in text either
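Once the Key Phrase Extraction container is running on an edge device, it exposes the same REST surface as the cloud service. The sketch below builds a request payload for it; the `localhost:5000` endpoint path is an assumption based on the container's documented defaults, so adjust it to your deployment.

```python
# Sketch: preparing a request for a locally running Key Phrase
# Extraction container. The endpoint URL below is an assumed default.
import json
from urllib import request

def build_payload(texts):
    """Build the Text Analytics 'documents' payload for key phrase extraction."""
    return {
        "documents": [
            {"id": str(i + 1), "language": "en", "text": t}
            for i, t in enumerate(texts)
        ]
    }

payload = build_payload(["The new LINE channel is very popular in Japan."])

# Uncomment to call a container running on this machine:
# req = request.Request(
#     "http://localhost:5000/text/analytics/v2.0/keyPhrases",
#     data=json.dumps(payload).encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# with request.urlopen(req) as resp:
#     print(json.load(resp)["documents"][0]["keyPhrases"])

print(payload["documents"][0]["id"])  # prints "1"
```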
Video Indexer is an Azure service designed to extract deep insights from video and audio files offline, analyzing media files that were created in advance. For some use cases, however, it’s important to get media insights from a live feed as quickly as possible to unlock time-sensitive operational scenarios. For example, rich metadata on a live stream could be used by content producers to automate TV production (as in our example with Endemol Shine Group), by newsroom journalists to search live feeds, to build content-based notification services, and more.
To that end, I joined forces with Victor Pikula, a Cloud Solution Architect at Microsoft, to architect and build a solution that allows customers to use Video Indexer in near real-time on live feeds. The indexing delay can be as low as four minutes with this solution, depending on the size of the chunks being indexed, the input resolution, the type of content, and the compute power used for the process.
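The chunk size mentioned above drives the trade-off between indexing delay and per-chunk overhead. A purely illustrative sketch of the boundary computation (not code from the actual solution):

```python
# Illustrative only: compute the (start, end) second offsets for each
# chunk when a live feed is split into fixed-length segments for
# near-real-time indexing. Smaller chunks lower the indexing delay but
# increase per-chunk processing overhead.
def chunk_boundaries(total_seconds: int, chunk_seconds: int):
    """Yield (start, end) offsets covering the whole stream."""
    start = 0
    while start < total_seconds:
        end = min(start + chunk_seconds, total_seconds)
        yield (start, end)
        start = end

# A 10-minute stream cut into 60-second chunks:
chunks = list(chunk_boundaries(600, 60))
print(len(chunks))  # 10
print(chunks[0])    # (0, 60)
```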
Figure 1 – Sample player displaying the Video Indexer metadata on the live stream
The stream analysis solution at hand uses Azure
This post was co-authored by the QnA Maker Team.
With Microsoft Bot Framework, you can build chatbots and conversational applications in a variety of ways: develop a bot from scratch with the open source Bot Framework SDK, create your own branded assistant with the Virtual Assistant solution accelerator, or build a Q&A bot in minutes with QnA Maker. QnA Maker is an easy-to-use, web-based service that makes it simple to power a question-and-answer application or chatbot from semi-structured content like FAQ documents and product manuals. With QnA Maker, developers can build, train, and publish question-and-answer bots in minutes.
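A published QnA Maker knowledge base is queried through its `generateAnswer` runtime endpoint. A minimal sketch of building such a request follows; the hostname, knowledge base ID, and endpoint key are placeholders you would supply from your own deployment.

```python
# Hedged sketch: querying a published QnA Maker knowledge base over REST.
# The /generateAnswer route is the documented runtime endpoint; the
# hostname, kb id, and key below are placeholders.
import json
from urllib import request

def build_query(question: str, top: int = 3) -> bytes:
    """Serialize a generateAnswer request body."""
    return json.dumps({"question": question, "top": top}).encode("utf-8")

body = build_query("How do I reset my password?")

# Uncomment with your own endpoint and key:
# req = request.Request(
#     "https://<your-service>.azurewebsites.net/qnamaker/knowledgebases/"
#     "<kb-id>/generateAnswer",
#     data=body,
#     headers={
#         "Content-Type": "application/json",
#         "Authorization": "EndpointKey <your-endpoint-key>",
#     },
# )
# with request.urlopen(req) as resp:
#     print(json.load(resp)["answers"][0]["answer"])

print(json.loads(body)["top"])  # prints 3
```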
Today, we are excited to announce the launch of a highly requested feature: Active Learning in QnA Maker. Active Learning helps identify and recommend question variations for any question and allows you to add them to your knowledge base. Your knowledge base content won’t change unless you choose to add or edit the suggestions.
How it works
Active Learning is triggered based on the scores of top N answers returned by QnA Maker for any given query. If the score differences lie within a small range, then the query
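The trigger described above can be sketched as a simple score-margin check. This is an illustration of the idea only: the margin value and the number of answers considered here are assumptions, not QnA Maker's internal thresholds.

```python
# Illustrative sketch of the active learning trigger: if the confidence
# scores of the top answers are clustered within a small margin, the
# query is ambiguous and worth suggesting as a question variation.
# The margin of 5.0 points is an assumed value for illustration.
def should_suggest(scores, margin=5.0):
    """Return True when the top answers' scores lie within `margin` points."""
    top = sorted(scores, reverse=True)[:3]  # consider the top 3 answers
    if len(top) < 2:
        return False
    return (top[0] - top[-1]) <= margin

print(should_suggest([82.0, 80.5, 79.9]))  # True: scores are clustered
print(should_suggest([95.0, 40.0, 12.0]))  # False: one clear winner
```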
One of the most important considerations when choosing an AI service is security and regulatory compliance. Can you trust that the AI is being processed with the high standards and safeguards that you’ve come to expect from hardened, durable software systems?
Cognitive Services today includes 14 generally available products. Below is an overview of current certifications in support of greater security and regulatory compliance for your business.
Added industry certifications and compliance
Significant progress has been made in meeting major security standards. In the past six months, Cognitive Services added 31 certifications across services and will continue to add more in 2019. With these certifications, hundreds of healthcare, manufacturing, and financial use cases are now supported.
The following certifications have been added:
ISO 20000-1:2011, ISO 27001:2013, ISO 27017:2015, ISO 27018:2014, and ISO 9001:2015 certification
HIPAA BAA
HITRUST CSF certification
SOC 1 Type 2, SOC 2 Type 2, and SOC 3 attestation
PCI DSS Level 1 attestation
For additional details on industry certifications and compliance for Cognitive Services, visit the Overview of Microsoft Azure Compliance page.
Enhanced data storage commitments
Cognitive Services now offers more assurances for where customer data is stored at rest. These assurances have been enabled by graduating
The year 2018 was a banner year for Azure AI, as over a million Azure developers, customers, and partners engaged in the conversation on digital transformation. The next generation of AI capabilities is now infused across Microsoft products and services, including AI capabilities for Power BI.
Here are the top 10 Azure AI highlights from 2018, across AI Services, tools and frameworks, and infrastructure at a glance:
3. Microsoft is first to enable Cognitive Services in containers.
4. Cognitive Search and basketball
AI tools and frameworks
7. Open Neural Network Exchange (ONNX) runtime is now open source.
10. Project Brainwave, integrated with AML.
With so many exciting developments, why are these moments the highlights? Read on as this blog explains their importance.
These services span pre-built
Developers can now access the latest improvements to Cognitive Services Speech Service including a new Python API and more. Details below.
Read the updated Speech Services documentation to get started today.
Support for Ubuntu 18.04 is now available in addition to pre-existing support for Ubuntu 16.04.
New features by popular demand
Lightweight SDK for greater performance
By reducing the number of required concurrent threads, mutexes, and locks, Speech Services now offers a more lightweight SDK with enhanced error reporting.
Control of server connectivity and connection status
A newly added connection object enables control over when the SDK connects to the Speech Service. You
This blog post was co-authored by Vishwac Sena Kannan, Principal Program Manager, FUSE Labs.
We are thrilled to present the release of Bot Framework SDK version 4.2, and we want to use this opportunity to provide additional updates on Conversational AI releases from Microsoft.
In the SDK 4.2 release, the team focused on enhancing monitoring, telemetry, and analytics capabilities of the SDK by improving the integration with Azure App Insights. As with any release, we fixed a number of bugs, continued to improve Language Understanding (LUIS) and QnA integration, and enhanced our engineering practices. There were additional updates across the other areas like language, prompt and dialogs, and connectors and adapters. You can review all the changes that went into 4.2 in the detailed changelog. For more information, view the list of all closed issues.
Telemetry updates for SDK 4.2
With the SDK 4.2 release, we started improving the built-in monitoring, telemetry, and analytics capabilities provided by the SDK. Our goal is to provide developers with the ability to understand their overall bot-health, provide detailed reports about the bot’s conversation quality, as well as tools to understand where conversations fall short. To do that, we decided to further enhance the built-in