Smart Manufacturing envisions a future where factory equipment can make autonomous decisions based on what’s happening on the factory floor. Businesses can more easily integrate all steps of the manufacturing process including design, manufacturing, supply chain and operation. This facilitates greater flexibility and reactivity when participating in competitive markets. Enabling this vision requires a combination of related technologies such as IoT, AI/machine learning, and Edge Computing. In this article, we will introduce Edge Computing and discuss its role in enabling Smart Manufacturing.
What is Edge Computing?
Put simply, Edge Computing means taking code that would normally run in the cloud and running it on, or close to, the local device, for example in a gateway device or on a PC sitting next to the device.
To understand Edge Computing it helps to think of an IoT solution as generally having three components:
- Things, such as IoT devices, which generate sensor data.
- Insights you extract from this data.
- Actions you perform based on these insights to deliver some sort of value.
With Edge Computing, you move the insights and actions components from the cloud to the device. In other words, you bring some of the code used to process the data and extract insights down from the cloud and run it on the device itself.
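The pattern above can be sketched in a few lines. This is a toy illustration only, with made-up sensor names and thresholds, not the API of any particular edge platform: the insight (a rolling average) and the action (a local decision) run on the device, and only a compact summary is sent to the cloud.

```python
from statistics import mean

# Hypothetical values for illustration; not tied to any real edge platform.
TEMP_LIMIT_C = 80.0  # act locally when the rolling average exceeds this

def extract_insight(readings):
    """Insight: a rolling average computed on the device, not in the cloud."""
    return mean(readings[-5:])  # last 5 samples

def decide_action(avg_temp):
    """Action: taken locally with low latency, even if the cloud is unreachable."""
    return "shut_down_motor" if avg_temp > TEMP_LIMIT_C else "continue"

def summarize_for_cloud(readings, avg_temp, action):
    """Only a compact summary is uplinked, instead of every raw sample."""
    return {"samples": len(readings),
            "avg_temp": round(avg_temp, 1),
            "action": action}

readings = [75.2, 78.9, 81.4, 83.0, 84.7]
avg = extract_insight(readings)
action = decide_action(avg)
print(summarize_for_cloud(readings, avg, action))
```

The key design point is that the raw sensor stream never has to leave the device: latency, bandwidth, and connectivity requirements all drop, which is exactly what a factory-floor deployment needs.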
The confluence of cloud, data, and AI is foundational to innovation and is driving unprecedented change. This week at Spark + AI Summit, I talked about how Microsoft enables organizations to take advantage of Azure to build advanced machine learning models and intelligent applications virtually anywhere.
As Satya mentioned during our Build conference last month, applications will increasingly require a ubiquitous computing fabric from the cloud to the edge. These applications also require new machine learning and AI capabilities that enable them to see, hear and predict. The driving force behind these capabilities is data. Data is vital to every app and experience we build today. Organizations are using their data to extract important insights to drive their businesses forward and engage their customers in new ways. Customers like Renault-Nissan are revolutionizing their customer experience with connected cars. Rockwell Automation, a leader in industrial automation, has built predictive maintenance capabilities on their equipment to save time and reduce cost associated with device failure. Liebherr, a leader in manufacturing, produces intelligent refrigerators that use object recognition to recommend grocery lists based on refrigerator contents. These are just a few examples of customers leveraging their data, wherever it exists, to turn it into actionable insights.
This week Mapbox announced it will integrate its Vision SDK with the Microsoft Azure IoT platform, enabling developers to build innovative applications and solutions for smart cities, the automotive industry, public safety, and more. This is an important moment in the evolution of map creation. The Mapbox Vision SDK provides artificial intelligence (AI) capabilities for identifying objects through semantic segmentation – a technique of machine learning using computer vision that classifies what things are through a camera lens. Semantic segmentation on the edge for maps means objects such as stop signs, crosswalks, speed limits signs, people, bicycles, and other moving objects can be identified at run time through a camera running AI under the covers. These classifications are largely referred to as HD (high definition) maps.
HD maps are more machine-friendly as an input to autonomous vehicles. Once the HD map objects are classified, and because other sensors like GPS and accelerometer are onboard, the location of these objects can be registered and placed onto a map, or in the advancement of “living maps,” registered into the map at run time. This is an important concept and where edge computing intersects with location to streamline the digitization of our world.
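To make the "classify, then register onto a map" flow concrete, here is a toy sketch in pure Python. It is not the Mapbox Vision SDK API: the per-pixel class mask, class ids, and the pixel-to-degrees conversion are all invented for illustration. A real pipeline would use camera calibration and sensor fusion rather than a flat scaling factor.

```python
# Toy illustration: a semantic-segmentation model has already labeled each
# pixel of a camera frame; we locate one labeled object and register it
# onto a map using the vehicle's GPS fix.

STOP_SIGN = 2  # class id assumed for this sketch

# 4x6 per-pixel class mask (0 = road, 1 = sidewalk, 2 = stop sign)
mask = [
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 2, 2],
    [0, 0, 0, 1, 2, 2],
    [0, 0, 0, 1, 1, 1],
]

def object_centroid(mask, class_id):
    """Average row/column of every pixel carrying the given class id."""
    cells = [(r, c) for r, row in enumerate(mask)
                    for c, v in enumerate(row) if v == class_id]
    rows = sum(r for r, _ in cells) / len(cells)
    cols = sum(c for _, c in cells) / len(cells)
    return rows, cols

def register_on_map(centroid, gps_fix, degrees_per_pixel=1e-5):
    """Crude pixel-to-geo mapping; real systems use camera calibration."""
    lat, lon = gps_fix
    row, col = centroid
    return (round(lat + row * degrees_per_pixel, 6),
            round(lon + col * degrees_per_pixel, 6))

centroid = object_centroid(mask, STOP_SIGN)            # (1.5, 4.5)
location = register_on_map(centroid, (47.6205, -122.3493))
print(location)
```

Running this at the edge, frame by frame, is what allows the "living map" to be updated at run time instead of waiting for imagery to round-trip through the cloud.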
Last week, the Microsoft Build conference brought developers lots of innovation and was action packed with in-depth sessions. During the event, my discussions in the halls ranged from containers to dev tools, IoT to Azure Cosmos DB, and of course, AI. The pace of innovation available to developers is amazing. And, in case there was simply too much for you to digest, I wanted to pull together some key highlights and top sessions to watch, starting with a great video playlist with highlights from the keynotes.
Empowering developers through the best tools
Build is for devs, and all innovation in our industry starts with code! So, let’s start with dev tools. Day one of Build marked the introduction of .NET Core 2.1 release candidate. .NET Core 2.1 improves on previous releases with performance gains and many new features. Check out all the details in the release blog and this great session from Build showing what you can use today:
.NET Overview & Roadmap: In this session, Scott Hanselman and Scott Hunter talked about all things .NET, including new .NET Core 2.1 features made available at Build.
Scott Hanselman and Scott Hunter sharing new .NET Core 2.1.
With AI being top
Microsoft will have a major presence at Spark + AI Summit 2018 in San Francisco, the premier event for the Apache Spark community. Rohan Kumar, Corporate Vice President of Azure Data, will deliver a keynote on how Azure Databricks combines the best of the Apache® Spark™ analytics platform and Microsoft Azure Data Services to help customers unleash the power of data and reimagine possibilities that will improve our world.
Azure Databricks, a fast, easy, and collaborative Apache Spark-based analytics platform optimized for Azure, was made generally available in March 2018. To learn more about the announcement, read Rohan Kumar’s blog about how Azure Databricks can help customers accelerate innovation and simplify the process of building Big Data & AI solutions. At Spark + AI Summit, we have a number of sessions showcasing the great work our customers and partners are doing and how Azure Databricks is helping them achieve productivity at scale.
Sign up for training on Spark!
On Monday, June 4, 2018, there are a number of full-day training courses on Apache Spark, ranging from beginner to advanced, that will enhance your skill set and even prepare you for certification on Spark.
Apache Spark essentials
This 1-day course is for
Creating an advanced conversational system is now a simple task with the powerful tools integrated into Microsoft’s Language Understanding Service (LUIS) and Bot Framework. LUIS brings together cutting-edge speech, machine translation, and text analytics on the most enterprise-ready platform for creation of conversational systems. In addition to these features, LUIS is currently GDPR, HIPAA, and ISO compliant, enabling it to deliver exceptional service across global markets.
Talk or text?
Bots and conversational AI systems are quickly becoming a ubiquitous technology enabling natural interactions with users. Speech remains one of the most widely used input forms and comes naturally when thinking of conversational systems. This requires integrating speech recognition with language understanding in conversational systems. Individually, speech recognition and language understanding are amongst the most difficult problems in cognitive computing. Introducing the context of language understanding improves the quality of speech recognition. Through intent-based speech priming, the context of an utterance is interpreted using the language model to cross-fertilize the performance of both speech recognition and language understanding. Intent-based speech recognition priming uses the utterances and entity tags in your LUIS models to improve accuracy and relevance while converting audio to text. Incorrectly recognized spoken phrases or
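The intuition behind speech priming can be shown with a minimal sketch. This is illustrative pure Python, not the actual LUIS or Speech SDK API: candidate transcripts from a recognizer carry acoustic scores, and phrases drawn from the language-understanding model (invented here) bias the choice toward domain-relevant hypotheses.

```python
# Illustrative sketch only; not the LUIS or Speech SDK API.
# A recognizer proposes candidate transcripts with acoustic scores;
# phrases from the language-understanding model "prime" the choice.

luis_phrases = {"book a flight", "flight to", "seattle", "window seat"}

def priming_score(transcript):
    """Count how many model phrases the candidate transcript contains."""
    text = transcript.lower()
    return sum(1 for phrase in luis_phrases if phrase in text)

def rerank(candidates):
    """Pick the candidate with the best acoustic score plus priming bonus."""
    return max(candidates, key=lambda c: c["acoustic"] + priming_score(c["text"]))

candidates = [
    {"text": "Book a fright to Seattle", "acoustic": 0.62},  # misrecognized
    {"text": "Book a flight to Seattle", "acoustic": 0.58},
]

best = rerank(candidates)
print(best["text"])
```

Even though the misrecognized hypothesis scores slightly higher acoustically, the priming bonus from the model's phrases flips the ranking, which is the effect intent-based priming aims for.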
Developers and media companies trust and rely on Azure Media Services to encode, protect, analyze, and deliver video at scale. This week, at the Build 2018 conference in Seattle, we are proud to announce a major new API version for Azure Media Services, along with new developer-focused features and updates to Video Indexer.
Media processing at scale: Public preview of the new Azure Media Services API (v3)
Starting at Build 2018, developers can begin working with the public preview of the new Azure Media Services API (v3). The new API provides a simplified development model, enables a better integration experience with key Azure services like Event Grid and Functions, includes two new media analysis capabilities, and provides a new set of SDKs for .NET, .NET Core, Java, Go, Python, and Node.js!
We have created a set of preliminary documentation to help developers get started quickly and learn more about the new Azure Media Services preview release announcements.
Get started with the v3 public preview:
- REST API, SDKs, and Swagger files
- Code samples used at the Build 2018 session
- Learn more about how the new Transform template makes it easier to submit encoding and analysis Jobs
- How to use the
Artificial Intelligence (AI) has emerged as one of the most powerful forces in the digital transformation. At Microsoft, we believe developers, data scientists and enterprises should have easy access to the power of AI so they can build systems that augment human ingenuity in unique and differentiated ways. Today, at Microsoft Build 2018, as we engage in conversations about digital transformation with over a million developers, customers and partners, I am pleased to share some of our latest and most exciting innovations in the Azure AI Platform.
The Azure AI Platform consists of three major sets of capabilities:
1. AI Services (Figure 1): These span pre-built AI capabilities such as Azure Cognitive Services and Cognitive Search (Azure Search + integrated Cognitive Services), Conversational AI with Azure Bot Service, and custom AI development with Azure Machine Learning (AML).
Figure 1: AI Services in Azure
Cognitive Services are cloud hosted
Today at the Microsoft Build developer conference, we are announcing a partnership with Qualcomm, one of the largest mobile and IoT chipset manufacturers in the world, to jointly create a vision AI developer kit. This will empower Qualcomm’s latest AI hardware accelerators to deliver real-time AI on devices without the need for constant connectivity to the cloud or expensive machines.
This vision AI developer kit brings all the key hardware and software required to develop camera-based IoT solutions using Azure IoT Edge and Azure Machine Learning (ML) – helping innovators deliver the next generation of AI-enabled robotics, industrial safety, retail, home and enterprise security cameras, smart home devices and more. This is a crucial step toward enabling developers to easily create, manage and monitor AI on the edge.
This partnership allows developers to start building AI offerings with prebuilt solutions — including customizable models — or create new AI models and deploy directly to the cloud or to the new hardware accelerated devices. They can do so using the same powerful IoT Edge platform they have been using to manage other IoT devices and edge deployments — use a single pane of glass to manage all their AI assets across the cloud and the edge.
This morning, at the Microsoft Build conference in Seattle, I talked about the key areas of new Azure innovation that enable the intelligent cloud and intelligent edge – spanning developer tools, DevOps, containers, serverless, Internet of Things (IoT) and artificial intelligence (AI).
Innovation starts with developers writing code. The effectiveness of your dev tools is at the heart of your ideas becoming reality. With this in mind, we continue to deliver new innovation and experiences with Visual Studio tools. Whether it is Visual Studio, VS Code or Visual Studio Team Services for DevOps, we are committed to providing the most productive developer experience end-to-end. Today, we announced a preview of Visual Studio IntelliCode, which brings AI to everyday development by providing intelligent suggestions that improve code quality and productivity. We also announced the preview of Live Share, which lets developers collaborate on their code and problem solve across Visual Studio and VS Code. Finally, building on our shared commitment to developers and open source, we also announced a fantastic partnership with GitHub where Visual Studio App Center will be natively available in GitHub via their marketplace. This means any GitHub developer building mobile apps for iOS, Android, Windows and macOS can use App Center directly from the GitHub Marketplace.