It’s inspiring to see how customers continue to reimagine how they work with the help of AI, which is more important today than ever. Our customers are finding innovative ways to deliver crisis management solutions, drive cost-savings, redefine customer engagement, and accelerate decision-making.
Here are some notable examples we’ve recently seen:
Scaling crisis management
On the frontlines, first responders rely on Azure AI to scale their triage process to address the overwhelming number of people needing care and to ease volume in the system. For example, healthcare providers have created more than 1,400 bots using our Healthcare Bot service, helping more than 27 million people access critical healthcare information. The U.S. Centers for Disease Control and Prevention released a COVID-19 assessment bot that is powered by Azure Bot Service. Motorola Solutions uses Azure Bot Service, as well as speech and language services, in its own voice assistant for public safety, ViQi, to help 911 dispatchers and first responders focus on what matters most.
Azure AI is also helping customers optimize their operations to reduce costs. KPMG built a risk and fraud analytics solution using our speech and language services to streamline call center transcription and translation, cutting both time and cost.
Today, Yusuf Mehdi, Corporate Vice President of Modern Life and Devices, announced the availability of new Microsoft 365 Personal and Family subscriptions. In his blog, he shared a few examples of how Microsoft 365 is innovating to deliver experiences powered by Azure AI (https://azure.microsoft.com/blog/extending-the-power-of-azure-ai-to-microsoft-365-users/).
We’re expanding the Microsoft Azure Stack Edge with NVIDIA T4 Tensor Core GPU preview during the GPU Technology Conference (GTC Digital). Azure Stack Edge is a cloud-managed appliance that brings Azure’s compute, storage, and machine learning capabilities to the edge for fast local analysis and insights. With the included NVIDIA GPU, you can bring hardware acceleration to a diverse set of machine learning (ML) workloads.
What’s new with Azure Stack Edge
At Mobile World Congress in November 2019, we announced a preview of the NVIDIA GPU version of Azure Stack Edge and we’ve seen incredible interest in the months that followed. Customers in industries including retail, manufacturing, and public safety are using Azure Stack Edge to bring Azure capabilities into the physical world and unlock scenarios such as the real-time processing of video powered by Azure Machine Learning.
These past few months, we’ve taken our customers’ feedback to make key improvements and are excited to make our preview available to even more customers today.
Azure Machine Learning: Build and train your model in the cloud, then deploy it to the edge for FPGA- or GPU-accelerated inferencing.
The world of supercomputing is evolving. Work once limited to high-performance computing (HPC) on-premises clusters and traditional HPC scenarios is now being performed at the edge, on-premises, in the cloud, and everywhere in between. Whether it’s a manufacturer running advanced simulations, an energy company optimizing drilling through real-time well monitoring, an architecture firm providing professional virtual graphics workstations to employees who need to work remotely, or a financial services company using AI to navigate market risk, Microsoft’s collaboration with NVIDIA makes access to NVIDIA graphics processing unit (GPU) platforms easier than ever.
These modern needs require advanced solutions that were traditionally limited to a few organizations because they were hard to scale and took a long time to deliver. Today, Microsoft Azure delivers HPC capabilities, a comprehensive AI platform, and the Azure Stack family of hybrid and edge offerings that directly address these challenges.
This year during GTC Digital, we’re spotlighting some of the most transformational applications powered by NVIDIA GPU acceleration that highlight our commitment to edge, on-prem, and cloud computing. Registration is free, so sign up to learn how Microsoft is powering transformation.
Visualization and GPU workstations
Azure enables a wide range of visualization workloads, which are critical across many of these industries.
Extracting text and structure information from documents is a core enabling technology for robotic process automation and workflow automation. Since its preview release in May 2019, Azure Form Recognizer has attracted thousands of customers who use it to extract text, key-value pairs, and tables (https://azure.microsoft.com/blog/new-features-for-form-recognizer-now-available/).
Your archive of videos to index is ever-expanding, so you have been evaluating Microsoft Video Indexer and have decided to take your relationship with it to the next level by scaling up.
In general, scaling shouldn’t be difficult, but when you first face such a process you might not be sure of the best way to do it. Questions like “Are there any technological constraints I need to take into account?”, “Is there a smart and efficient way of doing it?”, and “Can I avoid spending excess money in the process?” may cross your mind. So here are six best practices for using Video Indexer at scale.
1. When uploading videos, prefer URL over sending the file as a byte array
Video Indexer gives you the choice of uploading videos from a URL or by sending the file directly as a byte array, but remember that the latter comes with some constraints.
First, it has file-size limitations: a byte-array upload is limited to 2 GB, compared with the 30 GB limit when uploading from a URL.
Second, and more importantly for your scaling, sending files as multi-part content means a high dependency on network conditions, which can slow or fail uploads at scale.
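To make the URL-based approach concrete, here is a minimal sketch of building the upload request for Video Indexer’s REST API. The location, account ID, and token values are placeholders, and the helper function is hypothetical; consult the Video Indexer API reference for the authoritative parameter list.

```python
import urllib.parse

def build_upload_url(location, account_id, access_token, video_name, video_url):
    """Hypothetical helper: builds the query URL for a Video Indexer upload call.

    Passing videoUrl lets the service fetch the file itself, avoiding the
    byte-array size limit and the multi-part network dependency.
    """
    base = f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Videos"
    params = {
        "accessToken": access_token,
        "name": video_name,
        "videoUrl": video_url,  # preferred over sending the file as a byte array
    }
    return base + "?" + urllib.parse.urlencode(params)

url = build_upload_url("trial", "my-account-id", "<token>", "demo",
                       "https://example.com/videos/demo.mp4")
```

An actual upload would POST this URL with your HTTP client of choice; the point here is only that the video travels by reference, not by payload.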
We are pleased to introduce the ability to export high-resolution keyframes from Azure Media Services’ Video Indexer. Whereas keyframes were previously exported at reduced resolution compared to the source video, high-resolution keyframe extraction gives you original-quality images and lets you apply the image-based artificial intelligence models provided by the Microsoft Computer Vision and Custom Vision services to gain even more insights from your video. This unlocks a wealth of pre-trained and custom model capabilities. For example, you can use the keyframes extracted from Video Indexer to identify logos for monetization and brand-safety needs, to add scene descriptions for accessibility, or to accurately identify very specific objects relevant to your organization, such as a type of car or a place.
Let’s look at some of the use cases we can enable with this new introduction.
Using keyframes to get image description automatically
You can automate the process of “captioning” different visual shots of your video through the image description model within Computer Vision, in order to make the content more accessible to people with visual impairments. This model provides multiple description suggestions, each with a confidence value, for an image. You can take the descriptions and use them as captions or searchable metadata for your content.
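As a sketch of how those suggestions might be consumed, the snippet below parses a response shaped like the Computer Vision describe output and picks the highest-confidence caption. The JSON values are fabricated for illustration, and the confidence threshold is an assumption, not a service default.

```python
import json

# A response shaped like the Computer Vision "describe" output
# (illustrative values, not a real API result).
sample = json.loads("""
{
  "description": {
    "captions": [
      {"text": "a person riding a bicycle on a city street", "confidence": 0.92},
      {"text": "a man on a bike", "confidence": 0.61}
    ]
  }
}
""")

def best_caption(resp, min_confidence=0.5):
    """Pick the highest-confidence caption to use as an accessible description."""
    captions = resp["description"]["captions"]
    eligible = [c for c in captions if c["confidence"] >= min_confidence]
    return max(eligible, key=lambda c: c["confidence"])["text"] if eligible else None

print(best_caption(sample))
```

In a real pipeline you would run each exported keyframe through the describe endpoint and store the winning caption alongside the video’s timeline metadata.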
This post is co-authored by Anny Dow, Product Marketing Manager, Azure Cognitive Services.
In an age where low-latency and data security can be the lifeblood of an organization, containers make it possible for enterprises to meet these needs when harnessing artificial intelligence (AI).
Since introducing Azure Cognitive Services in containers this time last year, businesses across industries have unlocked new productivity gains and insights. Combining the most comprehensive set of domain-specific AI services in the market with containers enables enterprises to apply AI to more scenarios with Azure than with any other major cloud provider. Organizations ranging from healthcare to financial services have transformed their processes and customer experiences as a result.
These are some of the highlights from the past year:
Employing anomaly detection for predictive maintenance
Airbus Defense and Space, one of the world’s largest aerospace and defense companies, has tested Azure Cognitive Services in containers for developing a proof of concept in predictive maintenance. The company runs Anomaly Detector for immediately spotting unusual behavior in voltage levels to mitigate unexpected downtime. By employing advanced anomaly detection in containers without further burdening the data scientist team, Airbus can scale this critical capability across its operations.
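A containerized Anomaly Detector exposes the same REST surface as the cloud service, reachable on a local endpoint. The sketch below builds a request body for last-point detection over a voltage series; the endpoint path, “minutely” granularity, and the voltage readings themselves are assumptions for illustration only.

```python
import json
from datetime import datetime, timedelta

# Illustrative voltage readings; the final point spikes, which is the kind
# of behavior last-point detection would flag. The real service expects a
# minimum number of points (12 at the time of writing), hence the length.
start = datetime(2020, 1, 1)
readings = [220.0 + 0.1 * (i % 3) for i in range(11)] + [238.7]

# Request body in the shape the Anomaly Detector API expects, which a local
# container deployment would receive at an endpoint such as
# http://localhost:5000/anomalydetector/v1.0/timeseries/last/detect
body = {
    "granularity": "minutely",
    "series": [
        {"timestamp": (start + timedelta(minutes=i)).isoformat() + "Z",
         "value": v}
        for i, v in enumerate(readings)
    ],
}
payload = json.dumps(body)
```

Because the container runs on-premises, the series data never leaves the local network, which is precisely the data-security benefit the paragraph above describes.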
Multi-language speech transcription was recently introduced into Microsoft Video Indexer at the International Broadcasters Conference (IBC). It is available as a preview capability and customers can already start experiencing it in our portal. More details on all our IBC2019 enhancements can be found here.
Multi-language videos are common media assets in the context of globalization: global political summits, economic forums, and sports press conferences are examples of venues where speakers use their native languages to convey their statements. These videos pose a unique challenge for companies that need to provide automatic transcription for large video archives. Automatic transcription technologies expect users to explicitly specify the video’s language in advance in order to convert speech to text. This manual step becomes a scalability obstacle when transcribing multi-language content, as one would have to manually tag audio segments with the appropriate language.
Microsoft Video Indexer provides a unique capability of automatic spoken language identification for multi-language content. This solution allows users to easily transcribe multi-language content without going through tedious manual preparation steps before triggering it. It can therefore save anyone with a large archive of videos both time and money, and enable discoverability and accessibility scenarios.
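In API terms, this typically comes down to one upload parameter. The sketch below assumes a sourceLanguage parameter set to "multi" on the upload call, which asks the service to identify and switch languages automatically rather than requiring a single language tag per file; the account values are placeholders, so verify the parameter name against the current Video Indexer documentation.

```python
import urllib.parse

# Query string for a multi-language upload; "multi" (assumed value) tells the
# service to run spoken-language identification per segment instead of
# expecting one language for the whole file.
params = urllib.parse.urlencode({
    "accessToken": "<token>",
    "name": "press-conference",
    "videoUrl": "https://example.com/summit.mp4",
    "sourceLanguage": "multi",
})
upload_url = ("https://api.videoindexer.ai/trial/Accounts/<account-id>/Videos?"
              + params)
```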
Multi-language audio transcription in Video Indexer
This post was co-authored by Tina Coll, Sr Product Marketing Manager, Azure Cognitive Services.
Innovate at no cost to you with out-of-the-box AI services that are newly available to Azure free account users. Join the 1.3 million developers who have been using Cognitive Services to build AI-powered apps. With the broadest offering of AI services in the market, Azure Cognitive Services can unlock AI for more scenarios than other cloud providers. Give your apps, websites, and bots the ability to see, understand, and interpret people’s needs through natural methods of communication; all it takes is an API call. Businesses across industries have transformed how they operate using the very same Cognitive Services, now available to you with an Azure free account.
These examples are just a small handful of what you can make possible with these services:
Improve app security with face detection: With Face API, detect and compare human faces. See how Uber uses Face API to authenticate drivers.
Automatically extract text and detect languages: Easily and accurately detect the language of any text string, simplifying development of multilingual apps.
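As a minimal sketch of the Face API call pattern, the snippet below assembles the pieces of a detect request and parses a fabricated response; the endpoint, subscription key, image URL, and response values are all placeholders for illustration.

```python
import json

# Placeholder endpoint and key for a Face resource.
endpoint = "https://<your-resource>.cognitiveservices.azure.com/face/v1.0/detect"
headers = {
    "Ocp-Apim-Subscription-Key": "<your-key>",
    "Content-Type": "application/json",
}
request_body = json.dumps({"url": "https://example.com/driver-photo.jpg"})

# A response shaped like the detect output (fabricated values): one entry
# per detected face, with an ID usable in later verify/compare calls.
sample_response = [{"faceId": "c5c24a82-6845-4031-9d5d-978df9175426",
                    "faceRectangle": {"top": 131, "left": 177,
                                      "width": 162, "height": 162}}]
face_ids = [f["faceId"] for f in sample_response]
```

A driver-authentication flow like the Uber example would then pass two such face IDs to a verify-style endpoint to confirm they belong to the same person.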