Video Indexer is an Azure service designed to extract deep insights from video and audio files offline, analyzing media files that were created in advance. For some use cases, however, it is important to get media insights from a live feed as quickly as possible, to unlock operational and other time-sensitive scenarios. Rich metadata on a live stream could, for example, be used by content producers to automate TV production (as in our example with Endemol Shine Group), by newsroom journalists to search live feeds, to build content-based notification services, and more.
To that end, I joined forces with Victor Pikula, a Cloud Solution Architect at Microsoft, to architect and build a solution that allows customers to use Video Indexer in near real-time on live feeds. With this solution, the indexing delay can be as low as four minutes, depending on the size of the chunks being indexed, the input resolution, the type of content, and the compute power used for the process.
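The chunked approach described above can be sketched in a few lines. This is an illustrative sketch only, not the actual solution or the Video Indexer client API: the live feed is cut into fixed-length segments, and each segment is submitted for indexing as soon as it closes, so latency is bounded by the chunk length plus per-chunk processing time. The `submit_chunk` callback is a hypothetical placeholder.

```python
def chunk_boundaries(total_seconds: int, chunk_seconds: int = 240):
    """Yield (start, end) offsets, in seconds, for each chunk of the stream."""
    start = 0
    while start < total_seconds:
        end = min(start + chunk_seconds, total_seconds)
        yield (start, end)
        start = end

def index_live_feed(total_seconds: int, chunk_seconds: int, submit_chunk):
    """Submit every chunk for indexing as soon as it is closed.

    `submit_chunk` stands in for whatever client call uploads the segment
    for indexing; it is a hypothetical placeholder, not a real API.
    """
    results = []
    for start, end in chunk_boundaries(total_seconds, chunk_seconds):
        results.append(submit_chunk(start, end))
    return results

# Example: a 10-minute feed cut into 4-minute chunks.
print(list(chunk_boundaries(600, 240)))  # [(0, 240), (240, 480), (480, 600)]
```

Shorter chunks lower the end-to-end delay but increase per-chunk overhead, which is why the achievable latency depends on chunk size, as noted above.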
Figure 1 – Sample player displaying the Video Indexer metadata on the live stream
The stream analysis solution at hand uses Azure
Promotional planning and demand forecasting are incredibly complex processes. Take something seemingly straightforward, like planning the weekly flyer: there are thousands of questions involving a multitude of teams just to decide which products to promote, and where to position the inventory to maximize sell-through. For example:
- What products do I promote?
- How do I feature these items in a store? (Planogram: end cap, shelf talkers, signage, etc.)
- What pricing mechanic do I use? (% off, BOGO, multi-buy, $ off, loyalty offer, basket offer)
- How do the products I’m promoting contribute to my overall sales plan?
- How do the products I’m promoting interact with each other? (halo and cannibalization)
- I have 5,000 stores; how much inventory of each promoted item should I stock at each store?
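The halo and cannibalization question above can be made concrete with a toy calculation. This is a hypothetical illustration, not a production forecasting model; all numbers and the function name are made up.

```python
def net_promo_units(base_units, promo_lift, halo_units, cannibalized_units):
    """Net incremental units from a promotion:
    direct lift on the promoted item, plus halo sales of complementary
    products, minus units cannibalized from substitute products."""
    direct = base_units * promo_lift
    return direct + halo_units - cannibalized_units

# A promoted item selling 1,000 base units with a 40% lift, pulling 120 extra
# units of complementary products but stealing 200 units from substitutes:
print(net_promo_units(1000, 0.40, 120, 200))  # 320.0
```

The point of the sketch: a promotion that looks strong in isolation (400 direct incremental units here) can deliver far less at the category level once cannibalization is netted out.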
If the planning is not successful, the repercussions can hurt a business:
- Stockouts directly result in lost revenue through lost product sales, whether customers simply purchase the desired item from another retailer or switch to a different brand.
- Overstock results in costly markdowns and shrinkage (spoilage) that impact margin; the opportunity cost of holding non-productive inventory in-store also hurts the merchant.
- And if inventory freshness is a
This blog post was co-authored by Jürgen Weichenberger, Chief Data Scientist, Accenture and Mathew Salvaris, Senior Data Scientist, Microsoft
Drilling for oil and gas is one of the most dangerous jobs on Earth. Workers are exposed to risks ranging from small equipment malfunctions to entire offshore rigs catching fire. Fortunately, applying deep learning to predictive asset maintenance can help prevent both natural and human-made catastrophes.
We have more information than ever on our equipment thanks to sensors and IoT devices, but we are still working on ways to process the data so it is valuable for preventing these catastrophic events. That’s where deep learning comes in. Data from multiple sources can be used to train a predictive model that helps oil and gas companies predict imminent disasters, enabling them to follow a proactive approach.
Using the PyTorch deep learning framework on Microsoft Azure, Accenture helped a major oil and gas company implement such a predictive asset maintenance solution. This solution will go a long way in protecting their staff and the environment.
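The deep learning solution itself is proprietary, but the underlying idea of predictive asset maintenance can be sketched with a much simpler stand-in: score incoming sensor readings against recent history and flag outliers before a small malfunction becomes a large one. This dependency-free sketch uses a rolling z-score, not PyTorch, and all data is synthetic.

```python
from collections import deque
from statistics import mean, pstdev

def anomaly_scores(readings, window=5, threshold=3.0):
    """Return the indices of readings that deviate from the mean of the
    preceding `window` readings by more than `threshold` standard deviations."""
    history = deque(maxlen=window)
    flagged = []
    for i, x in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), pstdev(history)
            if sigma > 0 and abs(x - mu) / sigma > threshold:
                flagged.append(i)
        history.append(x)
    return flagged

# Pressure sensor readings with one sudden spike at index 5:
data = [10.0, 10.2, 9.9, 10.1, 10.0, 25.0, 10.1]
print(anomaly_scores(data))  # [5]
```

A deep learning model plays the same role as this scorer, but can fuse many sensor streams and learn failure precursors that simple thresholds miss.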
What is predictive asset maintenance?
Predictive asset maintenance is a core element of the digital transformation of chemical plants. It
In the last several years we’ve seen a fundamental transformation in healthcare data management, but the biggest, and perhaps most important, shift has been in how healthcare organizations think about cloud technology and their most sensitive health data. Healthcare leaders have transitioned from asking “Why should I manage healthcare data in the cloud?” to asking “How?”.
The change in the question may seem subtle, but the rigor required to ensure the highest level of privacy, security, and management of Protected Health Information (PHI) in the cloud has been a barrier to entry for much of the healthcare ecosystem. Compounding the difficulty is the state of data: multiple datasets, fragmented sources of truth, inconsistent formats, and exponential growth of data types.
We are now seeing, almost daily, new breakthroughs with applied machine learning on health data. But to truly apply machine learning at scale in the healthcare industry, we must ensure a secure and trusted pathway to manage that data in the cloud. Moving data into the cloud in its current state can reduce cost, but cost isn’t the only measure. Healthcare leaders are thinking about how they bring their data into the cloud while increasing opportunities to use and
Today we announced the general availability of the Microsoft Healthcare Bot in the Azure Marketplace. The Microsoft Healthcare Bot is a cloud service that powers conversational AI for healthcare. It’s designed to empower healthcare organizations to build and deploy compliant, AI-powered virtual health assistants and chatbots that help them put more information in the hands of their users, enable self-service, drive better outcomes, and reduce costs.
The Healthcare Bot service has several unique aspects:
- Out-of-the-box healthcare intelligence, including language models that understand healthcare intents and medical terminology, as well as content from credible providers covering conditions, symptoms, doctors, medications, and even a symptom checker.
- Customization and extensibility, which allow partners to introduce their own business flows and securely connect to their own backend systems over HL7 FHIR or REST APIs. The service model lets our partners focus on the important things: their key business needs and their own flows.
- Security and compliance with industry standards, such as ISO 27001, ISO 27018, HIPAA, Cloud Security Alliance (CSA) Gold, and GDPR, which we consider table stakes in this industry. We also provide tools and out-of-the-box functionality that help our partners create secure and compliant solutions.
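To make the HL7 FHIR integration point concrete, here is a minimal sketch of the kind of resource a backend exchange involves: a FHIR R4 `Patient` resource built as JSON. The endpoint mentioned in the comment is a placeholder, and this is an illustrative fragment, not Healthcare Bot code; consult your FHIR server's documentation for the actual API surface.

```python
import json

def make_patient(family: str, given: str, birth_date: str) -> dict:
    """Build a minimal FHIR R4 Patient resource as a plain dict."""
    return {
        "resourceType": "Patient",
        "name": [{"family": family, "given": [given]}],
        "birthDate": birth_date,
    }

patient = make_patient("Doe", "Jane", "1980-04-01")
payload = json.dumps(patient)
print(payload)
# A real integration would POST this JSON to a FHIR endpoint such as
# https://<your-fhir-server>/Patient (placeholder URL).
```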
As a modern developer, you may be eager to build your own deep learning models but aren’t quite sure where to start. If this is you, I recommend you take a look at the deep learning course from fast.ai. This new fast.ai course helps software developers start building their own state-of-the-art deep learning models. Developers who complete this fast.ai course will become proficient in deep learning techniques in multiple domains including computer vision, natural language processing, recommender algorithms, and tabular data.
You’ll also want to learn about Microsoft’s Azure Data Science Virtual Machine (DSVM). Azure DSVM empowers developers like you with the tools you need to be productive with this fast.ai course today on Azure, with virtually no setup required. Using fast cloud-based GPU virtual machines (VMs), at the most competitive rates, Azure DSVM saves you time that would otherwise be spent in installation, configuration, and waiting for deep learning models to train.
Here is how you can effectively run the fast.ai course examples on Azure.
Running the fast.ai deep learning course on Azure DSVM
While there are several ways in which you can use Azure for your deep learning course, one of the easiest ways is to leverage
As part of our ongoing commitment to open and interoperable artificial intelligence, Microsoft has joined the SciKit-learn consortium as a platinum member and released tools to enable increased usage of SciKit-learn pipelines.
Initially launched in 2007 by members of the Python scientific community, SciKit-learn has attracted a large community of active developers who have turned it into a first-class, open-source library used by many companies and individuals around the world for scenarios ranging from fraud detection to process optimization. Following SciKit-learn’s remarkable success, the SciKit-learn consortium was launched in September 2018 by Inria, the French national institute for research in computer science, to foster growth and sustainability of the library, employing central contributors to maintain high standards and develop new features. We are extremely supportive of what the SciKit-learn community has accomplished so far and want to see it continue to thrive and expand. By joining the newly formed SciKit-learn consortium, we will support central contributors to ensure that SciKit-learn remains a high-quality project while also tackling new features in conjunction with the fabulous community of users and developers.
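For readers unfamiliar with the pipelines mentioned above, here is a minimal, dependency-free sketch of the fit/transform pattern that SciKit-learn's `Pipeline` class popularized: each step is fit on the data, then transforms it before passing it to the next step. This only mimics the interface for illustration; real projects should use `sklearn.pipeline.Pipeline` directly.

```python
class Scale:
    """Scale values to [0, 1] based on the min/max seen at fit time."""
    def fit(self, xs):
        self.lo, self.hi = min(xs), max(xs)
        return self

    def transform(self, xs):
        span = (self.hi - self.lo) or 1.0  # avoid division by zero
        return [(x - self.lo) / span for x in xs]

class Pipeline:
    """Chain steps so each is fit, then transforms, in sequence."""
    def __init__(self, steps):
        self.steps = steps

    def fit_transform(self, xs):
        for step in self.steps:
            xs = step.fit(xs).transform(xs)
        return xs

pipe = Pipeline([Scale()])
print(pipe.fit_transform([2.0, 4.0, 6.0]))  # [0.0, 0.5, 1.0]
```

Packaging preprocessing and modeling steps behind this single interface is what makes pipelines easy to reuse, swap, and deploy, which is one reason interoperability tooling around them matters.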
In addition to supporting SciKit-learn development, we are committed to helping Scikit-learn users in training and production scenarios through
This blog post was authored by Peter Cooper, Senior Product Manager, Microsoft IoT.
From smart factories and smart cities to virtual personal assistants and self-driving cars, artificial intelligence (AI) and the Internet of Things (IoT) are transforming how people around the world live, work, and play.
But fundamentally changing the ways people, devices, and data interact is not simple or easy work. Microsoft’s AI & IoT Insider Labs was created to help all types of organizations accelerate their digital transformation. Member organizations around the world get access to support both technology development and product commercialization, for everything from hardware design to manufacturing to building applications and turning data into insights using machine learning.
Here’s how AI & IoT Insider Labs is helping one partner, SunCulture, leverage new technology to provide solar-powered water pumping and irrigation systems for smallholder farmers in Kenya.
Kenyan smallholdings face some of the most challenging growing conditions in the world. Ninety-seven percent of them rely on natural rainfall to support their crops and livestock, and the families that depend on them. But just 17 percent of the country’s farmland is suitable for rainfed agriculture. Electricity is unavailable in most places and diesel power is
One of the most important considerations when choosing an AI service is security and regulatory compliance. Can you trust that the AI is being processed with the high standards and safeguards that you come to expect with hardened, durable software systems?
Cognitive Services today includes 14 generally available products. Below is an overview of current certifications in support of greater security and regulatory compliance for your business.
Added industry certifications and compliance
Significant progress has been made in meeting major security standards. In the past six months, Cognitive Services added 31 certifications across services and will continue to add more in 2019. With these certifications, hundreds of healthcare, manufacturing, and financial use cases are now supported.
The following certifications have been added:
- ISO 20000-1:2011, ISO 27001:2013, ISO 27017:2015, ISO 27018:2014, and ISO 9001:2015 certification
- HIPAA BAA
- HITRUST CSF certification
- SOC 1 Type 2, SOC 2 Type 2, and SOC 3 attestation
- PCI DSS Level 1 attestation
For additional details on industry certifications and compliance for Cognitive Services, visit the Overview of Microsoft Azure Compliance page.
Enhanced data storage commitments
Cognitive Services now offers more assurances for where customer data is stored at rest. These assurances have been enabled by graduating
Storytelling is at the heart of human nature. We were storytellers long before we were able to write; we shared our values and built our societies largely through oral storytelling. Then we found ways to record our stories, and ever more advanced ways to share them broadly, from Gutenberg’s printing press to television and the internet. Writing stories is not easy, especially if one must write a story in different literary genres just by looking at a picture.
Natural Language Processing (NLP) is a field driving a revolution in human-computer interaction. We have seen the amazing accuracy computer vision achieves today, but we wanted to see if we could create a more natural and cohesive narrative showcasing NLP. We developed Pix2Story, a neural-storyteller web application on Azure that allows users to upload a picture and get a machine-generated story based on several literary genres. We based our work on several papers, “Skip-Thought Vectors,” “Show, Attend and Tell: Neural Image Caption Generation with Visual Attention,” and “Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books,” as well as the open-source neural-storyteller repository. The idea is to obtain the