SIGGRAPH is back in Los Angeles and so is Microsoft Azure! I hope you can join us at Booth #1351 to hear from leading customers and innovative partners.
Teradici, Bebop, Support Partners, Blender, and more will be there to showcase the latest in cloud-based rendering and media workflows:
- See a real-time demonstration of Teradici’s PCoIP Workstation Access Software, showcasing how it enables a world-class end-user experience for graphics-accelerated applications on Azure’s NVIDIA GPUs.
- Experience a live demonstration of industry-standard visual effects, animation, and other post-production tools on the BeBop platform, the leading solution for cloud-based media and entertainment workflows, creativity, and collaboration.
- Learn more about how cloud integrator Support Partners enables companies to run complex and exciting hybrid workflows in Azure.
- Be the first to hear about Azure’s integration with Blender’s render manager, Flamenco, and how users can easily deploy a completely virtual render farm and file server. The Azure Flamenco Manager will be freely available on GitHub, and we can’t wait to hear how it is being used and get your feedback.
We’re also demonstrating how you can simplify the creation and management of hybrid cloud rendering environments and get the most out of your on-prem investments while bursting to Azure.
Video Indexer (VI), the AI service for Azure Media Services, enables the customization of language models by allowing customers to upload example sentences or words from the vocabulary of their specific use case. Because generic speech recognition can struggle with domain-specific terminology, VI enables you to train and adapt its models to your specific domain. Harnessing this capability allows organizations to improve the accuracy of the transcriptions Video Indexer generates in their accounts.
Over the past few months, we have worked on a series of enhancements to make this customization process even more effective and easy to accomplish. Enhancements include automatically capturing any transcript edits done manually or via API as well as allowing customers to add closed caption files to further train their custom language models.
The idea behind these additions is to create a feedback loop: organizations begin with an out-of-the-box base language model and gradually improve its accuracy through manual edits and other resources over time, resulting in a model that is fine-tuned to their needs with minimal effort.
An account’s custom language models, along with all the enhancements this blog describes, are private and are not shared between accounts.
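As a rough illustration, creating a custom language model programmatically might look like the sketch below. The endpoint path and query parameters are assumptions modeled on the public api.videoindexer.ai surface, not a verified listing; consult the official Video Indexer API reference before relying on them.

```python
# Hypothetical sketch: creating a custom language model through the
# Video Indexer REST API. Endpoint path and parameter names are
# assumptions -- verify them against the official API reference.
import json
import urllib.parse
import urllib.request

API_ROOT = "https://api.videoindexer.ai"

def language_model_url(location: str, account_id: str) -> str:
    """Build the (assumed) customization endpoint for language models."""
    return f"{API_ROOT}/{location}/Accounts/{account_id}/Customization/Language"

def create_language_model(location, account_id, access_token, model_name):
    """Create an empty custom language model to hold domain vocabulary."""
    query = urllib.parse.urlencode({
        "modelName": model_name,
        "language": "en-US",
        "accessToken": access_token,
    })
    req = urllib.request.Request(
        language_model_url(location, account_id) + "?" + query,
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # assumed to contain the new model's id
```

Uploading example sentences, transcript edits, or closed caption files to train the model would follow the same pattern against the model’s files endpoint.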
In the following sections I
Putting the intelligent cloud to work for content creators, owners and storytellers.
Stories entertain us, make us laugh and cry, and are the lens through which we perceive our world. In that world, increasingly overloaded with information, they catch our attention and, if they catch our hearts, we engage. This makes stories powerful, and it’s why so many large technology companies are investing heavily in content – creating it and selling it.
At Microsoft, we’re not in the business of content creation.
Why? Our mission is to help every person and organization on the planet achieve more. So instead of creating or owning content, we want to provide platforms to help content creators and owners achieve more – from the Intelligent Cloud to the Intelligent Edge, with industry leading artificial intelligence (AI). We’re excited to see that mission come to life through customers such as Endemol Shine, Multichoice, RTL, Ericsson and partners like Avid, Akamai, Haivision, Pipeline FX and Verizon Digital Media Services. And we are excited to announce new Azure rendering, Azure Media Services, Video Indexer and Azure Networking capabilities to help you achieve more at NAB Show 2019. Cue scene.
Fix it in post: higher resolution, less
After sweeping up multiple awards following the general availability release of Azure Media Services’ Video Indexer, including the 2018 IABM award for innovation in content management and the prestigious Peter Wayne award, the team has remained focused on building a wealth of new features and models. These allow any organization with a large archive of media content to unlock insights from that content, and to use those insights to improve searchability, enable new user scenarios and accessibility, and open new monetization opportunities.
At NAB Show 2019, we are proud to announce a wealth of new enhancements to Video Indexer’s models and experiences, including:
- A new AI-based editor that allows you to create new content from existing media within minutes
- Enhancements to our custom people recognition, including central management of models and the ability to train models from images
- Language model training based on transcript edits, allowing you to effectively improve your language model to include your industry-specific terms
- A new scene segmentation model (preview)
- New ending rolling credits detection models
- Availability in nine regions worldwide
- ISO 27001, ISO 27018, SOC 1, 2, and 3, HITRUST, FedRAMP, HIPAA, and PCI certifications
- The ability to take your data and trained models with you when moving from a trial to a paid account
Want to train Video Indexer to recognize people relevant specifically to your account? We have great news for you!
Face detection and recognition are both widely used insights that Video Indexer provides. The face recognition feature can recognize around one million celebrity faces out of the box, and account-level custom Person models can be trained to recognize non-celebrity people who are relevant to a customer’s specific organization. We received multiple requests from customers to further enhance the capabilities of custom Person models. Today, we are happy to announce a wealth of enhancements that make custom Person model training and management faster and easier.
These enhancements include a centralized custom Person model management page that allows you to create multiple models in your account. Each of these models can hold up to 1M different people. From this page, you can create new models and add new people to existing models. Here, you can also review, rename, and delete your models if needed. On top of that, you can now train your account to identify people based on images of people’s faces even before you upload any video to your account (public preview). For instance, organizations that already have
Video Indexer is an Azure service designed to extract deep insights from video and audio files offline, analyzing media files that were created in advance. For some use cases, however, it’s important to get media insights from a live feed as quickly as possible to unlock operational and other time-sensitive scenarios. For example, rich metadata on a live stream could be used by content producers to automate TV production (as in our Endemol Shine Group example), by newsroom journalists to search live feeds, to build content-based notification services, and more.
To that end, I joined forces with Victor Pikula, a Cloud Solution Architect at Microsoft, to architect and build a solution that allows customers to use Video Indexer on live feeds in near real time. The indexing delay can be as low as four minutes with this solution, depending on the size of the chunks being indexed, the input resolution, the type of content, and the compute power used for the process.
Figure 1 – Sample player displaying the Video Indexer metadata on the live stream
The stream analysis solution at hand uses Azure
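The chunk-based indexing loop that underlies this approach can be sketched as follows. This is a minimal illustration, not the actual solution code: `submit_for_indexing` is a hypothetical stand-in for the real Video Indexer upload-and-poll call, and the 60-second chunk length is an illustrative choice.

```python
# Hypothetical sketch of chunked near-real-time indexing: a live feed is
# archived into fixed-length chunks, each chunk is submitted for indexing,
# and the resulting insights are shifted by the chunk's start offset so
# they line up with the live-stream timeline.

CHUNK_SECONDS = 60  # shorter chunks -> lower end-to-end indexing delay

def rebase_insights(insights, chunk_start):
    """Shift chunk-relative timestamps onto the live-stream timeline."""
    return [
        {**item, "start": item["start"] + chunk_start,
                 "end": item["end"] + chunk_start}
        for item in insights
    ]

def process_stream(chunks, submit_for_indexing):
    """Index each archived chunk and merge its insights into one timeline."""
    timeline = []
    for i, chunk in enumerate(chunks):
        chunk_start = i * CHUNK_SECONDS
        # Blocking call in this sketch; the real pipeline would submit the
        # chunk and poll (or receive a callback) for the finished index.
        insights = submit_for_indexing(chunk)
        timeline.extend(rebase_insights(insights, chunk_start))
    return timeline
```

In practice the chunk length is the main knob: smaller chunks reduce the delay before insights appear on the live stream, at the cost of more indexing jobs and less context per job.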
Developers and media companies trust and rely on Azure Media Services for the ability to encode, protect, index, and deliver videos at scale. This week we are proud to announce several enhancements to Media Services, including the general availability of
We are excited to announce that Microsoft’s offering within Azure CDN is now generally available. Azure CDN from Microsoft provides Azure customers the ability to deliver content from Microsoft’s own global CDN network. This native CDN option was added alongside existing provider options from Verizon and Akamai in May of this year and is now ready for you to use with full SLAs, faster create and change speeds, more features such as bring-your-own-certificate and regional caching, and more locations.
Cloud services require reliability, scale, agility, and performance. Azure CDN delivers a CDN platform that is easy to set up and use, distributing your videos, files, websites, and other HTTP content to the world. With CDN services from Verizon, Akamai, and now Microsoft, Azure CDN is built from the ground up to deliver best-in-class CDN services through our multi-CDN ecosystem, all inside Azure’s flexible cloud service.
Azure CDN from Microsoft adds to this already rich portfolio of CDN services a Microsoft-owned and -operated CDN service, running at the edge of Microsoft’s global network. This seasoned, anycast-based CDN platform provides direct, private access to content in Azure from each CDN edge point of presence (POP).
Figure 1. Microsoft
Content creators and broadcasters are increasingly embracing the cloud’s global reach, hybrid model, and elastic scale. These attributes, combined with AI’s ability to accelerate insights and time to market across content creation, management, and monetization, are truly transformative.
At the International Broadcasters Conference (IBC) Show 2018, we are focused on bringing Cloud + AI together to help you overcome common media workflow challenges.
Video Indexer, generally available starting today, is a great example of this Cloud + AI focus. It brings together the power of the cloud and Microsoft AI to intelligently analyze your media assets, extract insights, and add metadata. It makes it easier to understand your vast content library, with more than 20 new and improved models, easy-to-use interfaces, a single API, and simplified account management. I have been part of the Video Indexer team since its inception and could not be more excited to see it reach general availability. I’m also incredibly proud of the work the team has done to solve real customer problems and make AI tangible in this easy-to-use, elegant solution.
Our partners are already innovating on top of Video Indexer and extending Azure Media Services to advance the state of
Media and entertainment industry conferences are by far some of my favorites. Creativity, disruption, opportunity, and technology – particularly cloud, edge, and AI – are everywhere. It’s been exciting to see those things come together at NAB 2018, SIGGRAPH, and now IBC Show 2018. Together with teams from across Microsoft, I’m looking forward to IBC Show and the chance to learn, collaborate, and advance the state of this dynamic industry.
At this year’s IBC we’re excited to announce the general availability of Video Indexer, our advanced metadata extraction service. Announced as a public preview earlier this year, Video Indexer provides a rich set of cross-channel (audio, speech, and visual) learning models. Check out Sudheer’s blog for more information on all the new capabilities, including emotion detection, topic inferencing, and improvements to the ever-popular celebrity recognition model, which recognizes over one million faces.
Video Indexer is just one of the ways Azure is helping customers like Endemol Shine, Multichoice, RTL, and Ericsson with their content needs. At IBC 2018, our teams are excited to share new ways that Azure, together with solutions from our partners, can address common media workflow challenges.
How? Well, read on…
More visual effects and animations mean