We are excited to share the winners of the first Microsoft Azure AI Hackathon, hosted on Devpost. Developers of all backgrounds and skill levels were welcome to join and submit any form of AI project, whether using Azure AI to enhance existing apps with pre-trained machine learning (ML) models or by building ML models from scratch. Over 900 participants joined in, and 69 projects were submitted. A big thank you to all who participated and many congratulations to the winners.
Submitted by Nathan Glover and Stephen Mott, Trashé is a SmartBin that aims to help people make more informed recycling decisions. What I enjoyed most was watching the full demo of Trashé in action! It’s powerful when you see not just the intelligence, but the end-to-end scenario of how it can be applied in a real-world environment.
This team used many Azure services to connect the hardware, intelligence, and presentation layers—you can see this is a well-researched architecture that is reusable in multiple scenarios. Azure Custom Vision was a great choice in this case, enabling the team to create a well-performing model with very little training data. The more we recycle, the better the model will get.
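To give a flavor of how a client like Trashé might consume Custom Vision predictions, here is a minimal sketch in plain Python. The tag names, bin routing, and confidence threshold are hypothetical illustrations, not details of the Trashé project or the Custom Vision API.

```python
# Route an item to a bin based on Custom Vision-style predictions
# (a list of tag/probability pairs). Tag names, routing, and the
# confidence threshold below are hypothetical.

def choose_bin(predictions, threshold=0.6):
    """Return the bin for the highest-probability tag, or 'landfill'
    if no prediction is confident enough."""
    best = max(predictions, key=lambda p: p["probability"], default=None)
    if best is None or best["probability"] < threshold:
        return "landfill"  # fall back when the model is unsure
    routing = {"plastic": "recycling", "glass": "recycling",
               "paper": "recycling", "food": "compost"}
    return routing.get(best["tagName"], "landfill")

print(choose_bin([{"tagName": "plastic", "probability": 0.92},
                  {"tagName": "food", "probability": 0.05}]))  # recycling
```

In a real SmartBin, the prediction list would come from the Custom Vision prediction endpoint; the thresholding step is what lets the device fall back safely when the model is uncertain.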
Who spends their summer at the Microsoft Garage New England Research & Development Center (or “NERD”)? The Microsoft Garage internship seeks out students who are hungry to learn, not afraid to try new things, and able to step out of their comfort zones when faced with ambiguous situations. The program brought together Grace Hsu from Massachusetts Institute of Technology, Christopher Bunn from Northeastern University, Joseph Lai from Boston University, and Ashley Hong from Carnegie Mellon University. They chose the Garage internship because of the product focus—getting to see the whole development cycle from ideation to shipping—and learning how to be customer obsessed.
Microsoft Garage interns take on experimental projects in order to build their creativity and product development skills through hacking new technology. Typically, these projects are proposals that come from our internal product groups at Microsoft, but when Stanley Black & Decker asked if Microsoft could apply image recognition for asset management on construction sites, this team of four interns accepted the challenge of creating a working prototype in twelve weeks.
Starting with a simple request for leveraging image recognition, the team conducted market analysis and user research to ensure the product would stand out and prove useful.
Today, Alysa Taylor, Corporate Vice President of Business Applications and Industry, announced several new AI-driven insights applications for Microsoft Dynamics 365.
Powered by Azure AI, these tightly integrated AI capabilities will empower every employee in an organization to make AI real for their business today. Millions of developers and data scientists around the world are already using Azure AI to build innovative applications and machine learning models for their organizations. Now business users will also be able to directly harness the power of Azure AI in their line of business applications.
What is Azure AI?
Azure AI is a set of AI services built on Microsoft’s breakthrough innovation from decades of world-class research in vision, speech, language processing, and custom machine learning. What I find particularly exciting is that Azure AI provides our customers with access to the same proven AI capabilities that power Xbox, HoloLens, Bing, and Office 365.
Azure AI helps organizations:
Develop machine learning models that can help with scenarios such as demand forecasting, recommendations, or fraud detection using Azure Machine Learning. Incorporate vision, speech, and language understanding capabilities into AI applications and bots, with Azure Cognitive Services and Azure Bot Service. Build knowledge-mining solutions to make
This post was co-authored by the extended Microsoft Connected Vehicle Platform (MCVP) team.
A connected vehicle solution must enable a fleet of potentially millions of vehicles, distributed around the world, to deliver intuitive experiences including infotainment, entertainment, productivity, driver safety, and driver assistance. In addition to these services in the vehicle, a connected vehicle solution is critical for fleet solutions like ride and car sharing, as well as phone apps that incorporate the context of the user and the journey.
Imagine you are driving to your vacation destination: you started your conference call at home while you were packing. When you transition to the shared vehicle, the route planning takes into account the best route for connectivity and easy driving, and the microphone sensitivity adjusts for the call in the back seat. Today, these experiences are constrained to either the center-stack screen, known as the in-vehicle infotainment device (IVI), or other specific hardware and software that is determined when the car is built. Instead, these experiences should evolve over the lifetime of ridership. The opportunity is for new, modern experiences in vehicles that span the entire interior and systems of a vehicle, plus experiences outside the vehicle, to create
Artificial intelligence (AI) workloads include megabytes of data and potentially billions of calculations. With advancements in hardware, it is now possible to run time-sensitive AI workloads on the edge while also sending outputs to the cloud for downstream applications.
Congratulations to the PyTorch community on the release of PyTorch 1.2! Last fall, as part of our dedication to open source AI, we made PyTorch one of the primary, fully supported training frameworks on Azure. PyTorch is supported across many of our AI platform services and our developers participate in the PyTorch community, contributing key improvements to the code base. Today we would like to share the many ways you can use PyTorch 1.2 on Azure and highlight some of the contributions we’ve made to help customers take their PyTorch models from training to production.
PyTorch 1.2 on Azure
Getting started with PyTorch on Azure is easy and a great way to train and deploy your PyTorch models. We’ve integrated PyTorch 1.2 in the following Azure services so you can utilize the latest features:
Azure Machine Learning service – Azure Machine Learning streamlines the building, training, and deployment of machine learning models. Azure Machine Learning’s Python SDK has a dedicated PyTorch estimator that makes it easy to run PyTorch training scripts on any compute target you choose, whether it’s your local machine, a single virtual machine (VM) in Azure, or a GPU cluster in Azure. Learn how to train PyTorch
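As a rough sketch of the estimator workflow described above, submitting a PyTorch training script with the Azure Machine Learning Python SDK looks something like the following run configuration. The workspace config, `./src` folder, `train.py` script, and `gpu-cluster` compute target are hypothetical placeholders for your own resources.

```python
# Hedged sketch of submitting a PyTorch 1.2 training run with the
# Azure ML Python SDK's PyTorch estimator. All names below
# (source directory, script, compute target) are hypothetical.
from azureml.core import Workspace, Experiment
from azureml.train.dnn import PyTorch

ws = Workspace.from_config()  # reads your workspace's config.json

estimator = PyTorch(
    source_directory="./src",      # folder containing your training code
    entry_script="train.py",       # your PyTorch training script
    compute_target="gpu-cluster",  # local machine, a VM, or a GPU cluster
    framework_version="1.2",       # pin PyTorch 1.2
    use_gpu=True,
)

run = Experiment(ws, "pytorch-demo").submit(estimator)
run.wait_for_completion(show_output=True)
```

The same estimator definition works unchanged whether the compute target is your laptop or a multi-node GPU cluster; only the `compute_target` value changes.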
At Build, we highlighted a few customers who are building conversational experiences using the Bot Framework to transform their customer experiences. For example, BMW discussed its work on the BMW Intelligent Personal Assistant to deliver conversational experiences across multiple canvases by leveraging the Bot Framework and Cognitive Services. LaLiga built their own virtual assistant which allows fans to experience and interact with LaLiga across multiple platforms.
With the Bot Framework release in July, we are happy to share new releases of Bot Framework SDK 4.5, a preview of 4.6, updates to our developer tools, and new channels in Azure Bot Service. We’ll also use this opportunity to provide additional updates on the Conversational AI releases from Microsoft.
Bot Framework channels
We continue to expand channel support and functionality for the Bot Framework and Azure Bot Service.
Voice-first bot applications: Direct Line Speech preview
The Microsoft Bot Framework lets you connect with your users wherever your users are. We offer thirteen supported channels, including popular messaging apps like Skype, Microsoft Teams, Slack, Facebook Messenger, Telegram, and Kik, as well as a growing number of community adapters.
Today, we are happy to share the preview of the Direct Line Speech channel. This is a new channel
With over 360,000 registered Azure Bot Service developers, we’ve seen significant growth in bots and virtual assistants built on Azure. A major trend we’re following is the growing need for these assistants to support voice-first conversational experiences. As a result, we’re taking steps to make it even easier for developers to build virtual assistants with our virtual assistant solution accelerator and to add speech to their conversational applications with Azure Bot Service.
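At the heart of any such assistant is a turn-handling loop: a recognized user utterance is mapped to a reply, which a voice channel can then speak back. The following is a language-agnostic illustration of that pattern in plain Python, not Bot Framework SDK code; the intents and replies are hypothetical.

```python
# Illustrative turn handler for a voice-first assistant. The intent
# keywords and replies are hypothetical, not Bot Framework APIs; a real
# bot would use the SDK's activity handlers and a language-understanding
# service instead of keyword matching.

def handle_turn(utterance):
    """Map a recognized utterance to a reply the channel can speak."""
    text = utterance.lower()
    if "weather" in text:
        return "Here is today's forecast."
    if "music" in text:
        return "Playing your playlist."
    return "Sorry, I didn't catch that."

print(handle_turn("Play some music"))  # Playing your playlist.
```

In a voice-first channel like Direct Line Speech, speech recognition produces the utterance on the way in and text-to-speech renders the reply on the way out, so the bot logic itself stays channel-agnostic.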
At this year’s Microsoft Build conference, we announced signup availability of the Direct Line Speech channel, which simplifies the creation of end-to-end solutions for voice-first conversational experiences. Today, we’re happy to share that the Direct Line Speech channel is now in preview for any developer with no additional signup or approval required. With this release, the Direct Line Speech channel has also significantly expanded its region support to enable faster and more reliable conversational experiences worldwide.
About Direct Line Speech
Direct Line Speech is a new channel that simplifies the creation of end-to-end solutions for voice-in and voice-out natural user interfaces with a few key components:
An on-device API, available as part of the Speech SDK, simplifies speech and real-time supplementary signal communication to and from a
This post was co-authored by Hadas Bitran, Group Manager, Microsoft Healthcare Israel.
Every day, healthcare organizations are beginning their digital transformation journey with the Microsoft Healthcare Bot Service built on Azure. The Healthcare Bot service empowers healthcare organizations to build and deploy an Artificial Intelligence (AI) powered, compliant, conversational healthcare experience at scale. The service combines built-in medical intelligence with natural language capabilities, extensibility tools, and compliance constructs, allowing healthcare organizations such as providers, payers, pharma, HMOs, and telehealth to give people access to trusted and relevant healthcare services and information.
Healthcare organizations can leverage the Healthcare Bot Service on their digital transformation journey today, as we announced in our blog Microsoft Healthcare Bot brings conversational AI to healthcare. That’s why we are so happy to share more information on the Healthcare Bot Service partner program. Our Healthcare Bot certified partners empower healthcare organizations to successfully deploy virtual assistants on the Microsoft Healthcare Bot service. Working with an official partner, healthcare organizations can achieve the full potential of the Microsoft Healthcare Bot by leveraging the expertise and experience of partners who understand the business needs and challenges in healthcare.
This new program is open to existing Microsoft partners that support
Through integration with Cognitive Services APIs, Azure Search has long had the ability to extract text and structure from images and unstructured content. Until recently, this capability was used exclusively in full text search scenarios, exemplified in demos like the JFK files, which analyzes diverse content in JPEGs and makes it available for online search. The journey from visual, unstructured content to searchable, structured content is enabled by a feature called cognitive search. This capability in Azure Search is now extended with the addition of a knowledge store that saves enrichments for further exploration and analysis beyond search itself.
The knowledge store feature of Azure Search, available in preview, is a persistence layer in cognitive search: a physical expression of documents created through AI enrichment. Enriched documents are projected into tables or hierarchical JSON, which you can explore using any client app that can access Azure Storage. In Azure Search itself, you define the physical expression, or shape, of the projections in the knowledge store settings within your skillset.
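For illustration, the knowledge store settings inside a skillset take roughly the following shape, sketched here as a Python dictionary rather than raw JSON. The table names and source paths are hypothetical, and since the feature is in preview the exact schema may differ from this sketch.

```python
# Hedged sketch of the knowledgeStore section of a cognitive search
# skillset definition. Table names and source paths are hypothetical,
# and the preview schema may differ.
knowledge_store = {
    "storageConnectionString": "<azure-storage-connection-string>",
    "projections": [
        {
            # Tabular projections: enriched documents flattened into
            # relational-style tables in Azure Table storage
            "tables": [
                {"tableName": "Documents", "source": "/document"},
                {"tableName": "KeyPhrases", "source": "/document/keyPhrases/*"},
            ],
            # Object projections: enriched documents saved as
            # hierarchical JSON blobs
            "objects": [],
            "files": [],
        }
    ],
}
```

Each projection group writes a different physical expression of the same enriched documents, which is what lets downstream clients pick tables for analytics or JSON blobs for exploration.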
Customers are using a knowledge store (preview) in diverse ways, such as to validate the structure and accuracy of enrichments, generate training data for AI models,