Category Archives: Cognitive Services

13 Nov

Getting started with Azure Cognitive Services in containers
https://azure.microsoft.com/blog/getting-started-with-azure-cognitive-services-in-containers/

Building solutions with machine learning often requires a data scientist. Azure Cognitive Services enable organizations to take advantage of AI through their developers, without requiring a data scientist. We do this by taking the machine learning models and the pipelines and


13 Nov

Bringing AI to the edge

We are seeing a clear trend towards a future powered by the intelligent cloud and intelligent edge. The intelligent cloud is ubiquitous computing at massive scale, enabled by the public cloud and powered by AI, for every type of application one can envision. The intelligent edge is a continually expanding set of connected systems and devices that gather and analyze data—close to end users and the data that is generated. Together, they give customers the ability to create a new class of distributed, connected applications that enable breakthrough business outcomes.

To accelerate this trend, today we are announcing the preview of Azure Cognitive Services containers, making it possible to build intelligent applications that span the cloud and the edge. Azure Cognitive Services allow developers to easily add cognitive features—such as object detection, vision recognition, and language understanding—into their applications without having direct AI or data science skills or knowledge. Over 1.2 million developers have discovered and tried Azure Cognitive Services to build and run intelligent applications. Containerization is an approach to software distribution in which an application or service is packaged so that it can be deployed in a container host with little or no modification.
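To make the idea concrete, here is a minimal sketch of what calling one of these services looks like once a container is running at the edge. It assumes a hypothetical Text Analytics sentiment container already listening on localhost port 5000; the host, port, and API path are illustrative and depend on the image and version you deploy.

```python
import requests

# Assumes a Text Analytics container is already running locally (for example,
# started with `docker run`) and listening on port 5000. The host, port, and
# API path are illustrative and depend on the container image you pull.
endpoint = "http://localhost:5000/text/analytics/v2.0/sentiment"

payload = {
    "documents": [
        {"id": "1", "language": "en",
         "text": "Running Cognitive Services at the edge keeps the data local."}
    ]
}

# No per-request subscription key is needed when the model runs in the
# container; billing information is reported back to Azure by the container.
response = requests.post(endpoint, json=payload)
response.raise_for_status()
print(response.json())
```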

With container support, customers


07 Nov

Cognitive Services – Bing Local Business Search now available in public preview

We are excited to share that Bing Local Business Search API on Cognitive Services is now available in public preview. Bing Local Business Search API enables users to easily find local business information within your applications, given an area of interest. The public preview of Bing Local Business Search API enables scenarios such as calling, navigation, and mapping using contact details, latitude/longitude, and other entity metadata. This metadata comes from hundreds of categories including professionals and services, retail, healthcare, food and drink, and more. User queries can pertain to a single entity, such as “Microsoft City Center Plaza Bellevue”, a collection of results like “Microsoft offices in Redmond, WA”, or a category such as “Italian Restaurant”. Alternatively, users can use one of our predefined categories to query our API.

Below is an example of a JSON response. Each result item contains a name, full address, phone number, website, business category, and latitude/longitude. Using these results, you can build engaging user scenarios in your applications. For instance, you could enable users to find and contact a local business. Another example is enabling navigation to a place of interest or plotting results on Bing Maps. An
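The JSON payload itself isn’t reproduced in this excerpt, but a hedged sketch of calling the API and reading those fields might look like the following. The endpoint, query parameters, and response field names below follow the public preview documentation and may change; the subscription key and query are placeholders.

```python
import requests

# Placeholders: substitute your own subscription key and query. The endpoint
# and response field names reflect the public preview and may change.
subscription_key = "<your-subscription-key>"
endpoint = "https://api.cognitive.microsoft.com/bing/v7.0/localbusinesses/search"

headers = {"Ocp-Apim-Subscription-Key": subscription_key}
params = {"q": "Italian restaurant in Bellevue, WA"}

response = requests.get(endpoint, headers=headers, params=params)
response.raise_for_status()

# Each result item carries a name, address, phone number, website, business
# category, and latitude/longitude that you can plot on a map.
for place in response.json().get("places", {}).get("value", []):
    print(place.get("name"), place.get("telephone"), place.get("geo"))
```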


23 Oct

Public preview: Named Entity Recognition in the Cognitive Services Text Analytics API

Today, we are happy to announce the public preview of Named Entity Recognition as part of the Text Analytics Cognitive Service. Named Entity Recognition (NER) is the ability to take free-form text and identify the occurrences of entities such as people, locations, organizations, and more. With just a simple API call, NER in Text Analytics uses robust machine learning models to find and categorize more than twenty types of named entities in any text document.
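As a concrete illustration of that “simple API call”, the sketch below posts a short document to the Text Analytics entities endpoint. The region, API version, and subscription key are placeholders for your own resource.

```python
import requests

# Placeholders: substitute your own region, key, and (if needed) API version.
endpoint = "https://westus.api.cognitive.microsoft.com/text/analytics/v2.1/entities"
subscription_key = "<your-subscription-key>"

payload = {
    "documents": [
        {"id": "1", "language": "en",
         "text": "Microsoft was founded by Bill Gates and Paul Allen in Albuquerque."}
    ]
}
headers = {"Ocp-Apim-Subscription-Key": subscription_key}

response = requests.post(endpoint, json=payload, headers=headers)
response.raise_for_status()

# Each entity comes back with a category (Person, Location, Organization, and
# so on) and, where available, a link contributed by the Entity Linking model.
for doc in response.json()["documents"]:
    for entity in doc["entities"]:
        print(entity.get("name"), entity.get("type"), entity.get("wikipediaUrl"))
```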

Many organizations have messy piles of unstructured text in the form of customer feedback, enterprise documents, social media feeds, and more. However, it is challenging to understand what information these ever-growing stacks of documents contain. Text Analytics has long been helping customers make sense of these troves of text with capabilities such as Key Phrase Extraction, Sentiment Analysis, and Language Detection. Today’s announcement adds to this suite of powerful and easy-to-use natural language processing solutions that make it easy to tackle many problems.

Named Entity Recognition and Entity Linking

Building upon the Entity Linking feature that was announced at Build earlier this year, the new Entities API processes the text using both NER and Entity Linking capabilities. This makes it an extremely powerful solution for squeezing


22 Oct

How developers can get started with building AI applications

This blog is co-authored by Wee Hyong Tok, Principal Data Scientist Manager, Office of the CTO AI.

In recent years, we have seen a leap in practical AI innovations catalyzed by vast amounts of data, the cloud, and innovations in algorithms, hardware, and more. So how do developers begin to design AI applications that engage and delight their customers, optimize operations, empower employees, and transform products?

Using Azure Cognitive Services you can now infuse your applications, websites, and bots with intelligent capabilities. These capabilities build on years of research done on vision, speech, knowledge, search, and language. Using different cognitive services, developers can now easily add AI capabilities without training the machine learning models from scratch.
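As one hedged example of adding such a capability without training a model, the sketch below asks the Computer Vision service to describe and tag an image. The region, subscription key, and image URL are placeholders, not values taken from the e-book.

```python
import requests

# Placeholders: substitute your own region, key, and an image URL of your choice.
endpoint = "https://westus.api.cognitive.microsoft.com/vision/v2.0/analyze"
subscription_key = "<your-subscription-key>"

headers = {"Ocp-Apim-Subscription-Key": subscription_key}
params = {"visualFeatures": "Description,Tags"}
body = {"url": "https://example.com/conference-photo.jpg"}  # hypothetical image

response = requests.post(endpoint, headers=headers, params=params, json=body)
response.raise_for_status()

analysis = response.json()
# A human-readable caption plus a list of tags, with no model training required.
print(analysis["description"]["captions"][0]["text"])
print([tag["name"] for tag in analysis["tags"]])
```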

O’Reilly and Microsoft are excited to bring you a free e-book on AI, titled A Developer’s Guide to Building AI Applications. In this e-book, Anand Raman and Wee Hyong Tok of Microsoft provide a gentle introduction to using Azure AI to build intelligent applications. They provide a practical example of a bot called “Conference Buddy” that is used by conference attendees. The e-book walks through the use case, the architecture, and how to create the bot while infusing it with AI. The code


09 Oct

Driving identity security in banking using biometric identification

Combining biometric identification with artificial intelligence (AI) enables banks to take a new approach to verifying the digital identity of their prospects and customers. Biometrics is the process by which a person’s unique physical and personal traits are detected and recorded by an electronic device or system as a means of confirming identity. Biometric identifiers are unique to individuals, so they are more reliable in confirming identity than token-based and knowledge-based methods, such as identity cards and passwords. Biometric identifiers are often categorized as physiological identifiers: traits related to a person’s physicality, including fingerprint recognition, hand geometry, odor/scent, iris scans, DNA, palmprint, and facial recognition.

But how do you ensure the effectiveness of identifying a customer when they are not physically in the presence of the bank employee? As the world of banking continues to go digital, our identity is becoming the key to accessing these services. Regulators require banks to verify that users are who they say they are, not bad actors like fraudsters or known money launderers. And verifying identities online without seeing the person face to face is one of the biggest challenges online and mobile services face today.
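One building block for verifying a customer remotely is face verification: comparing a freshly captured selfie against the photo on an identity document. The sketch below is only an illustration of that step using the Face API verify operation; the face IDs are assumed to come from two earlier detect calls, and the region and key are placeholders.

```python
import requests

# Placeholders: substitute your own region and key. The two face IDs are
# assumed to have been returned by earlier calls to the /detect operation.
endpoint = "https://westus.api.cognitive.microsoft.com/face/v1.0/verify"
subscription_key = "<your-subscription-key>"

body = {
    "faceId1": "<faceId from the live selfie>",
    "faceId2": "<faceId from the identity document photo>",
}
headers = {"Ocp-Apim-Subscription-Key": subscription_key}

response = requests.post(endpoint, json=body, headers=headers)
response.raise_for_status()

result = response.json()
# isIdentical is a boolean; confidence is a score the bank's risk policy can
# threshold before approving the customer.
print(result["isIdentical"], result["confidence"])
```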

It’s problematic because identity documents


08 Oct

Snip Insights – Cross-platform open source AI tool for intelligent screen capture

Devices and technologies are moving forward at a rapid pace, yet the everyday tools we use remain relatively unchanged. What if we could infuse AI into everyday tools to delight and inspire developers to do more with the Microsoft AI platform? With just a little bit of creativity and Microsoft’s current AI offerings, we can bring AI capabilities closer to customers and create applications that will inspire every organization, every developer, and every person on this planet.

Introducing Snip Insights

An open source cross-platform AI tool for intelligent screen capture. Snip Insights revolutionizes the way users can generate insights from screen captures. The initial prototype of Snip Insights, built for Windows OS and released at Microsoft Build 2018 in May, was created by Microsoft Garage interns based out of Vancouver, BC. Our team at Microsoft AI Lab in collaboration with the Microsoft AI CTO team took Snip Insights to the next level by giving the tool a new intuitive UX, cross-platform availability (MacOS, Linux, and Windows), and free download and usage under MSA license. Snip Insights leverages Microsoft Azure’s Cognitive Services APIs to increase users’ productivity by reducing the number of steps needed to gain intelligent insights.
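Snip Insights itself is open source, so the code is the best reference; the sketch below is only a hedged illustration of the kind of Cognitive Services call it builds on, extracting text from a screen capture with the Computer Vision OCR endpoint. The region, key, and file name are placeholders.

```python
import requests

# Placeholders: substitute your own region, key, and captured image file.
endpoint = "https://westus.api.cognitive.microsoft.com/vision/v2.0/ocr"
subscription_key = "<your-subscription-key>"

with open("screenshot.png", "rb") as image:
    response = requests.post(
        endpoint,
        headers={"Ocp-Apim-Subscription-Key": subscription_key,
                 "Content-Type": "application/octet-stream"},
        data=image.read(),
    )
response.raise_for_status()

# Reassemble the recognized words line by line, ready to copy or translate.
for region in response.json().get("regions", []):
    for line in region.get("lines", []):
        print(" ".join(word["text"] for word in line.get("words", [])))
```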

The Solution


24 Sep

Azure AI – Making AI real for business

AI, data, and cloud are ushering in the next wave of transformative innovations across industries. With Azure AI, our goal is to empower organizations to apply AI across the spectrum of their business to engage customers, empower employees, optimize operations, and transform products. We see customers using Azure AI to derive tangible benefits across three key solution areas.

First, using machine learning to build predictive models that optimize business processes. Second, building AI-powered apps and agents that deliver natural user experiences by integrating vision, speech, and language capabilities into web and mobile apps. Third, applying knowledge mining to uncover latent insights from documents.

Today, at Microsoft Ignite, we are excited to announce a range of innovations across these areas to make Azure the best place for AI. Let me walk you through them.

Machine Learning 

From pre-trained models to powerful services to help you build your own models, Azure provides the most comprehensive machine learning platform.

To simplify development of speech, vision, and language machine learning solutions, we provide a powerful set of pre-trained models as part of Azure Cognitive Services.  When it comes to building your own deep learning models, in addition to supporting popular frameworks such as PyTorch


24 Sep

Global scale AI with Azure Cognitive Services

To build an effective and scalable solution, developers need technology that can be deployed around the world and still provide results with high confidence. To that end, we’ve spent the last year investing in making our Cognitive Services enterprise-ready and bringing them to general availability, ready for production use. Cognitive Services are a set of intelligent APIs and services used by more than 1.2 million developers and thousands of businesses in 150 countries, across every industry from retail and healthcare to the public sector, manufacturing, and non-profit organizations.

We’ve deployed more services into Azure data centers around the world, written more documentation in multiple developer languages, and re-architected products to change the way we store and retain data, giving users control over their data and adhering to the highest standards available. We’ve localized our services into multiple languages across the globe, with over 10 of them now available in 15 languages. All while meeting the strict SLA standards that we require for every Azure service. And we’re not stopping there; our work continues.

Just recently we’ve refactored our speech services and are launching a single unified speech service accessible via one endpoint to enable high speed


23 Aug

Speech Services August 2018 update

We are pleased to announce the release of another update to the Cognitive Services Speech SDK (version 0.6.0). With this release, we have added support for Java on Windows 10 (x64) and Linux (x64). We are also extending the support for .NET Standard 2.0 to the Linux platform. The changes are highlighted in the table below. The sample section of the SDK has been updated with samples showcasing the use of the newly supported languages. UWP support was added in the Speech SDK 0.5.0 release, and starting with this release, UWP apps built with the Speech SDK can be published to the Microsoft Store.
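The update above covers Java and .NET Standard; for readers who prefer a quick sketch, the snippet below shows the same single-shot recognition flow using the Speech SDK’s Python package (azure-cognitiveservices-speech), which may require a newer SDK release than the 0.6.0 version announced here. The key and region are placeholders.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholders: substitute your own Speech resource key and region.
speech_config = speechsdk.SpeechConfig(subscription="<your-speech-key>", region="westus")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

# Listens on the default microphone until a single utterance is recognized.
result = recognizer.recognize_once()

if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Recognized:", result.text)
else:
    print("Recognition did not succeed:", result.reason)
```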

We also included several bug fixes that were reported by early adopters. Most notably, this should fix errors in long-running speech transcriptions, as well as reduce the number of in-use socket connections and threads.

Other functional changes, breaking changes and bug fixes can be found in the Speech SDK’s release notes. For questions regarding Speech SDK and Speech Services, please visit our support page.

There are also changes that impact the Speech Devices SDK. To provide a little background, the Speech Devices SDK is for our devices solution. It
