Matei Zaharia, Chief Technologist at Databricks & Assistant Professor of Computer Science at Stanford University, in conversation with Joseph Sirosh, Chief Technology Officer of Artificial Intelligence in Microsoft’s Worldwide Commercial Business
At Microsoft, we are privileged to work with individuals whose ideas are blazing a trail, transforming entire businesses through the power of the cloud, big data and artificial intelligence. Our new “Pioneers in AI” series features insights from such pathbreakers. Join us as we dive into these innovators’ ideas and the solutions they are bringing to market. See how your own organization and customers can benefit from their solutions and insights.
Our first guest in the series, Matei Zaharia, started the Apache Spark project during his PhD at the University of California, Berkeley, in 2009. His research was recognized through the 2014 ACM Doctoral Dissertation Award for the best PhD dissertation in Computer Science. He is a co-founder of Databricks, which offers a Unified Analytics Platform powered by Apache Spark. Databricks’ mission is to accelerate innovation by unifying data science, engineering and business. Microsoft has partnered with Databricks to bring you Azure Databricks, a fast, easy, and collaborative Apache Spark-based analytics platform optimized for Azure.
Episode 1: Data Center Scale Computing and Artificial Intelligence with Matei Zaharia, Inventor of Apache Spark
Matei Zaharia is a co-founder and Chief Technologist at Databricks, an Assistant Professor of Computer Science at Stanford and the inventor of Apache Spark. Microsoft has partnered with Databricks to bring you Azure Databricks, a Spark-based analytics platform optimized for Azure offering simple setup, streamlined workflows and ease of collaboration between data scientists, engineers and business analysts. In a conversation with Joseph Sirosh, CTO for AI at Microsoft, Matei shares his thoughts about Spark, machine learning and interesting AI applications he’s encountered lately.
There are over 1 million new amputees every year, i.e. one every 30 seconds – a truly shocking statistic.
The World Health Organization estimates that between 30 and 100 million people around the world are living with limb loss today. Unfortunately, only 5-15% of this population has access to prosthetic devices.
Although prostheses have existed since ancient times, their successful use has been limited for millennia by several factors, with cost chief among them. Sophisticated bionic arms are available today, but they cost tens of thousands of dollars and remain out of reach for most amputees. What’s more, getting these devices to interface satisfactorily with the human body has been a massive challenge, partly due to the difficulty of working with the human nervous system. Such devices generally need to be tailored to each individual’s nervous system, a process that often requires expensive surgery.
Is it possible for a new generation of human beings to finally help…
Re-posted from the Azure blog channel
In an earlier post, we explored how several top teams at this year’s Imagine Cup had Artificial Intelligence (AI) at the core of their winning solutions. From helping farmers identify and manage crop disease to helping the hearing-impaired, this year’s finalists tackled difficult problems that affect people from all walks of life.
In a new post, we take a closer look at the champion project of Imagine Cup 2018, smartARM.
See how the unexpected combination of a 3D-printed prosthetic arm, a camera embedded in its palm, cloud connectivity and easy access to state-of-the-art AI algorithms let a team of undergraduate students from Canada accomplish something rather remarkable.
We are just scratching the surface in terms of the types of medical and healthcare breakthroughs that may result from the application of AI.
As Joseph Sirosh, Corporate Vice President and CTO of AI at Microsoft rightly puts it, “Imagine a future where all assistive devices are infused with AI and designed to work with you. That could have tremendous positive impact on the quality of people’s lives, all over the world.”
This post is authored by Nile Wilson, Software Engineer Intern at Microsoft.
Imagine Cup 2018 winning teams: smartARM (first place, front and center),
iCry2Talk (second place, attired in pink), and Mediated Ear (third place, at the right).
Every year, Microsoft hosts the Imagine Cup, a global competition bringing together creative, bright, and motivated students to develop technologies that will shape how we live, work, and play. This year, tens of thousands of students from across the world registered for the competition, but only 49 teams were selected to compete in the World Finals. In addition to the first, second and third place winners, this year’s competition also awarded the top projects in Artificial Intelligence (AI), Big Data, and Mixed Reality.
Of the 49 finalists, team smartARM won the competition with their innovative, inexpensive, AI-enabled prosthetic hand. The team comprised Samin Khan from the University of Toronto and Hamayal Choudhry from the University of Ontario Institute of Technology. Although smartARM took home the top prize, all the finalist teams impressed the judges with their creativity and drive to have a positive impact on the world.
One other thing almost all the winning teams had in common…
This post is authored by Tara Shankar Jana, Senior Technical Product Marketing Manager at Microsoft.
The user interface design process involves lots of creativity and iteration. The process often starts with drawings on a whiteboard or a blank sheet of paper, with designers and engineers sharing ideas and trying their best to represent the underlying customer scenario or workflow. Once a candidate design is arrived at, it’s usually captured via a photograph and then translated manually into a working HTML wireframe that works in a web browser. Such translation takes time and effort and it often slows down the design process.
What if the design could instead be captured from a whiteboard and be instantly reflected in a browser? If we could do that, at the end of a design brainstorming session we would have a ready-made prototype that’s already been validated by the designer, developer and perhaps even the customer.
Introducing Sketch2Code – a web-based solution that uses AI to transform a picture of a hand-drawn user interface into working HTML code.
Let’s take a closer look at the process of transforming hand-drawn images into HTML using Sketch2Code:
The user first uploads an image using our…
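While the excerpt stops short of the full walkthrough, the final step of a pipeline like this – turning detected UI elements into HTML – can be sketched in a few lines. This is a toy illustration, not Sketch2Code’s actual implementation; the element types, coordinates, and templates below are all hypothetical:

```python
def to_html(elements):
    # Map each detected element type to an HTML snippet. The set of
    # types here is hypothetical; a real system would cover many more.
    templates = {
        "heading": "<h1>{text}</h1>",
        "textbox": '<input type="text" placeholder="{text}">',
        "button": "<button>{text}</button>",
    }
    # Lay elements out top-to-bottom, then left-to-right,
    # mirroring their positions on the whiteboard sketch.
    ordered = sorted(elements, key=lambda e: (e["y"], e["x"]))
    return "\n".join(
        templates[e["type"]].format(text=e["text"]) for e in ordered
    )

# Hypothetical output of the object-detection stage: each element has a
# type, the handwritten text recognized inside it, and its position.
detected = [
    {"type": "button",  "text": "Submit",  "x": 10, "y": 200},
    {"type": "heading", "text": "Sign Up", "x": 10, "y": 10},
    {"type": "textbox", "text": "Email",   "x": 10, "y": 100},
]

print(to_html(detected))
# <h1>Sign Up</h1>
# <input type="text" placeholder="Email">
# <button>Submit</button>
```

In the real service, the detection stage itself is where most of the AI work happens; generating markup from a structured list of elements is comparatively straightforward, as the sketch suggests.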
This post is authored by Tara Shankar Jana, Senior Technical Product Marketing Manager at Microsoft.
All of us have creative ideas – ideas that can improve our lives and the lives of thousands, perhaps even millions of others. But how often do we act on turning those ideas into a reality? Most of the time, we do not believe in our ideas strongly enough to pursue them. Other times we feel like we lack a platform to build out our idea or showcase it. Most good ideas don’t go beyond those initial creative thoughts in our head.
If you’re a professional working in the field of artificial intelligence (AI), or an aspiring AI developer or just someone who is passionate about AI and machine learning, Microsoft is excited to offer you an opportunity to transform your most creative ideas into reality. Join the Microsoft AI Idea Challenge Contest today for a chance to win exciting prizes and get your project featured in Microsoft’s AI.lab showcase. Check out the rules, terms and conditions of the contest and then dive right in!
The Microsoft AI Idea Challenge is seeking breakthrough AI solutions from developers, data scientists, professionals and…
This post is authored by Chenhui Hu, Data Scientist at Microsoft.
Deep learning has achieved great success in many areas recently. It has attained state-of-the-art performance in applications ranging from image classification and speech recognition to time series forecasting. The key success factors of deep learning are – big volumes of data, flexible models and ever-growing computing power.
With more parameters and more training data, the performance of deep learning models can improve dramatically. However, when models and training data get big, they may not fit in the memory of a single CPU or GPU machine, and model training can become slow. One approach to this challenge is to use large-scale clusters of machines to distribute the training of deep neural networks (DNNs). This technique enables a seamless integration of scalable data processing with deep learning. Another approach, using multiple GPUs on a single machine, works well for modest data volumes but can be inefficient for big data.
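To make the data-parallel idea above concrete, here is a minimal simulation in plain NumPy – not a real distributed framework. Each simulated worker computes the gradient of a linear-regression loss on its own data shard, the gradients are averaged (the role an all-reduce plays on a real cluster), and every replica of the parameters takes the same step:

```python
import numpy as np

def worker_gradient(w, X, y):
    # Each worker computes the gradient of mean squared error
    # on its own shard: grad = (2/n) * X^T (Xw - y)
    n = len(y)
    return (2.0 / n) * X.T @ (X @ w - y)

def distributed_sgd_step(w, shards, lr=0.1):
    # Data-parallel step: every worker holds a full copy of the
    # parameters, computes a local gradient on its shard, and the
    # gradients are averaged (an "all-reduce") before one shared update.
    grads = [worker_gradient(w, X, y) for X, y in shards]
    avg_grad = np.mean(grads, axis=0)
    return w - lr * avg_grad

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

# Split the data row-wise across 4 simulated workers
shards = [(X[i::4], y[i::4]) for i in range(4)]

w = np.zeros(3)
for _ in range(200):
    w = distributed_sgd_step(w, shards)

print(np.round(w, 2))  # converges toward the true weights [1, -2, 0.5]
```

Real frameworks distribute exactly this pattern across machines, with network communication replacing the in-process `np.mean` and a DNN’s backpropagated gradients replacing the closed-form regression gradient.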
Although the term “distributed deep learning” may sound scary if you’re hearing it for the first time, through this blog post, I show how you can quickly write scripts to…
This post is authored by Wilson Lee, Senior Software Engineer at Microsoft.
Imagine yourself at a conference, attending a session with hundreds of other enthusiastic developers. You put away your phone to pay attention to the speaker who is talking about the latest cutting-edge technologies. As you learn about the topic, you start to gather a list of questions you would like to get the answers to right away. But the timing never seems right to ask those questions. Maybe, it’s not Q&A time yet. Maybe, you are not thrilled to speak up in front of so many fellow attendees. Or maybe, even if you raised your hand or stood in line during the Q&A period, you were not picked. As a result, you did not have the full learning experience that you felt you deserved.
Even as digital transformation sweeps through every business and industry, we can’t help but ask – can AI help in the conference experience above? Can today’s manual interaction between speaker and attendees be infused with intelligence to create a more satisfying Q&A experience?
The core of Q&A is a conversation between the speaker and attendees, and – as conversational AI tools gain rapid popularity…
This post is co-authored by Erika Menezes, Software Engineer at Microsoft, and Chaitanya Kanitkar, Software Engineer at Twitter. This project was completed as part of the coursework for Stanford’s CS231n in Spring 2018.
Ever seen someone wearing an interesting outfit and wondered where you could buy it yourself?
You’re not alone – retailers the world over are trying to capitalize on exactly this. Each time a fashion blogger posts a picture on Instagram or another photo-sharing site, it’s a low-cost sales opportunity. As online shopping and photo-sharing become ever more widely used, user-generated content (UGC) has become pivotal in retailers’ marketing strategies for driving traffic and increasing sales. A key value proposition of UGC such as images and videos is its authenticity compared to professional content. However, this is also why UGC can be harder to work with: there is much less control over how the content looks or how it was generated.
Microsoft has been applying deep learning to e-commerce visual search and inventory management, both built on content-based image retrieval. These efforts demonstrate solutions for the in-shop clothes retrieval task, where the query image and the target catalog image are taken…
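At its core, content-based image retrieval reduces to nearest-neighbor search over embedding vectors. The sketch below uses random vectors as stand-ins for CNN features – a real system would embed each image with a deep network trained on clothing data – and ranks catalog items by cosine similarity to the query:

```python
import numpy as np

def normalize(v):
    # L2-normalize so that a dot product equals cosine similarity
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def retrieve(query_emb, catalog_embs, k=3):
    # Rank catalog items by cosine similarity to the query embedding
    sims = normalize(catalog_embs) @ normalize(query_emb)
    top = np.argsort(-sims)[:k]
    return top, sims[top]

rng = np.random.default_rng(1)
catalog = rng.normal(size=(1000, 128))            # stand-in CNN embeddings
query = catalog[42] + 0.1 * rng.normal(size=128)  # a noisy photo of item 42

idx, scores = retrieve(query, catalog)
print(idx[0])  # item 42 ranks first
```

The hard part in practice is not the search but the embedding: training the network so that a blurry street photo and a clean catalog shot of the same garment land close together in the embedding space, which is exactly what makes UGC-to-catalog retrieval more difficult than the in-shop task.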