This post is authored by Gopi Kumar, Principal Program Manager, and Paul Shealy, Senior Software Engineer at Microsoft.
With the rise of Artificial Intelligence, the need to rapidly train a large number of data scientists and AI developers has never been more urgent. Microsoft is always looking for efficient ways to educate employees and customers on AI and make them more productive with these new capabilities. Aside from the numerous technical conferences that we host and sponsor, we also offer the AI School and a range of tools such as the Data Science Virtual Machine, Visual Studio Tools for AI, Azure Machine Learning, Microsoft ML Server, and Batch AI to help developers and data scientists become more productive at building intelligent, AI-infused apps.
Pulling together deep learning workshops for a large number of students, however, can be a time-consuming, error-prone, and costly exercise. Furthermore, technical issues with environment setup and compatibility problems during the workshops impede learning and cause student dissatisfaction. These workshops typically have participants bring their own laptops and download and install new software. However, with the wide range of laptop platforms (Windows, Mac, Linux), numerous configurations, and version conflicts with existing software, workshops
As we ring in the new year, we’d like to kick things off in our usual fashion – with a quick recap of our most popular posts from the year just concluded. So here are our “Top 10” posts from 2017, sorted in increasing order of readership – enjoy!
Lung cancer – the leading cause of cancer mortality in both women and men in the US – suffers from a low rate of early diagnosis. The Data Science Bowl competition aimed to help by having participants use machine learning to determine whether CT scans of the lung contain cancerous lesions. Success in the competition required that data scientists get started quickly and iterate rapidly. In this post, we showed how to compute features of scanned images with a pre-trained Convolutional Neural Network (CNN), and use these features to classify scans as cancerous or not using a boosted tree – all within one hour.
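The two-stage approach described above – pre-trained CNN features fed into a boosted tree – can be sketched roughly as follows. This is a minimal illustration, not the competition code: the random features here merely stand in for the output of a pre-trained CNN's penultimate layer, and the toy sizes and hyperparameters are placeholder choices.

```python
# Sketch: classify precomputed CNN image features with a boosted tree.
# The feature matrix stands in for activations from a pre-trained CNN
# (e.g. its penultimate layer); here it is random placeholder data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_scans, n_features = 200, 64          # toy sizes; real CNN features are larger
X = rng.normal(size=(n_scans, n_features))
y = rng.integers(0, 2, size=n_scans)   # 1 = cancerous lesion, 0 = benign

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)
proba = clf.predict_proba(X_test)[:, 1]   # per-scan probability of cancer
print(proba.shape)
```

The appeal of this split is speed of iteration: the expensive CNN forward pass runs once to produce features, after which retraining the boosted tree on those features takes seconds.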
Traditionally, developers would build rules-based engines
Four recent Microsoft posts about AI developments, in case you missed them.
1. Getting Started with Microsoft AI – MSDN Article
This MSDN article, co-authored by Joseph Sirosh and Wee Hyong Tok, provides a nice summary of all the capabilities offered by the Microsoft AI platform and how you can get started today. From Cognitive Services that help you build intelligent apps, to customizing state-of-the-art computer vision deep learning models, to building deep learning models of your own with Azure Machine Learning, the Microsoft AI platform is open and flexible, and provides developers with tools best suited to their wide range of scenarios and skill levels. Click here to read the original article on MSDN.
2. Announcing ONNX 1.0 – An Open Ecosystem for AI
Microsoft firmly believes in bringing AI advances to all developers, on any platform, using any language, and with an open AI ecosystem that helps us ensure that the fruits of AI are broadly accessible. In December, we announced that Open Neural Network Exchange (ONNX), an open source model representation for interoperability and innovation in the AI ecosystem co-developed by Microsoft, is production-ready. The ONNX
We recently concluded the Fall 2018 edition of the Machine Learning, AI & Data Science (MLADS) conference, Microsoft’s largest internal gathering of employees focused specifically on these areas. This latest edition was the eighth in a popular series that we launched back in 2014. Over 3,500 employees tuned into the sold-out conference, both in person in Redmond and over livestream throughout the world, and thousands more will tune into MLADS session recordings over coming weeks and months.
As applications of AI and ML explode both within Microsoft and in our external products and services, our community interest groups in these areas have grown very rapidly. The MLADS conference itself is unique in that it is almost entirely driven by enthusiastic community volunteers – a band of employees unified in their passion for AI and ML, and their desire to network and learn from one another. The "call for content" that goes out for this conference series routinely receives several hundred submissions, and our volunteer team helps triage these submissions and curate the best ones for our event.
The fall 2018 conference featured over 95 talk sessions, 20 tutorials and 65 poster/demo sessions covering a gamut of
Re-posted from the Microsoft Azure blog.
Conversational AI, or making human and computer interactions more natural, has been a goal of computer scientists for a long time. In support of that longstanding quest, we are excited to announce the general availability of two key Microsoft Azure services that streamline the creation of interactive conversational bots, namely the Azure Bot Service and the Language Understanding Intelligent Service (LUIS).
The Azure Bot Service helps developers create conversational interfaces on multiple channels. LUIS helps developers create customized natural interactions on any platform for any type of application, including bots. With these two services now generally available on Azure, developers can easily build custom models that naturally interpret the intentions of users who converse with their bots.
We are also introducing new capabilities in each service. Azure Bot Service is now available in more regions, offers premium channels to communicate better with users, and provides advanced customization capabilities. LUIS now has an updated user interface, is available in more regions as well, and helps developers create substantially richer conversational experiences in their apps. More detailed information about the new features of Azure Bot Service and LUIS can be obtained here.
LUIS, in fact, is
This post is authored by Vani Mandava, Director of Data Science at Microsoft Research.
The AI revolution is poised to unleash unprecedented innovation and impact on our society. Several research and development groups across Microsoft have hit their stride in delivering world-changing impact through the power of AI. Working together, we are creating a comprehensive Microsoft AI platform and a set of AI services that will enable the next generation of intelligent applications that will augment human intelligence.
The AI buzz has been impossible to miss at the numerous conferences Microsoft has participated in during the past year. AI was pervasive at the massive HPC conference, SC17, traditionally a supercomputing conference swarmed by the scientific computing community. Attendees spent an entire day during the Cloud Computing for Science and Engineering tutorial learning how to build parallel and scalable scientific applications in the cloud using Jupyter notebooks through the Data Science Virtual Machine. Attendees also got an opportunity to test-drive the new Azure Machine Learning tools, which provide a powerhouse of free, open-source-based tools to build AI-enabled applications.
In this context, we are thrilled to announce an exciting new Cloud AI Challenge and would like to invite the
This post is authored by Erika Menezes, Software Engineer at Microsoft.
Using deep learning to learn feature representations from near-raw input has been shown to outperform traditional task-specific feature engineering in several domains, including object recognition, speech recognition and text classification. With the recent advancements in neural networks, deep learning has been gaining popularity in computational creativity tasks such as music generation. There has been great progress in this field via projects such as Magenta, an open-source project from the Google Brain team focused on creating machine learning projects for art and music, and Flow Machines, which has released an entire AI-generated pop album. For those of you who are curious about music generation, you can find additional resources here.
The goal of our work is to provide data scientists who are new to the field of music generation with guidance on how to create deep learning models for music generation. As a sample, here is music that was generated by training an LSTM model.
In this post, we show you how to build a deep learning model for simple music generation using the Azure Machine Learning (AML) Workbench for experimentation.
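Before any LSTM can be trained, the music has to be encoded as sequences the network can consume. As a minimal illustration of that preprocessing step – not the pipeline from the post, and with an illustrative melody and window length – a melody of MIDI pitch numbers can be turned into one-hot context windows paired with next-note targets:

```python
# Sketch: turn a melody (MIDI pitch numbers) into training windows for
# next-note prediction with an LSTM. Melody and window size are toy choices.
import numpy as np

melody = [60, 62, 64, 65, 67, 65, 64, 62, 60]   # a C-major fragment
vocab = sorted(set(melody))                     # distinct pitches in the data
idx = {p: i for i, p in enumerate(vocab)}

def one_hot(pitch):
    v = np.zeros(len(vocab))
    v[idx[pitch]] = 1.0
    return v

seq_len = 4   # notes of context the model sees per prediction step
X = np.array([[one_hot(p) for p in melody[i:i + seq_len]]
              for i in range(len(melody) - seq_len)])
y = np.array([idx[melody[i + seq_len]] for i in range(len(melody) - seq_len)])
print(X.shape, y.shape)   # (windows, seq_len, vocab_size) inputs; next-note targets
```

An LSTM trained on `(X, y)` pairs like these learns the distribution of the next note given the previous `seq_len` notes; generation then samples from that distribution repeatedly, feeding each sampled note back in as context.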
Here are the
Re-posted from the Microsoft Research blog.
The thirty-first annual conference on Neural Information Processing Systems (NIPS) starts on Monday next week, and is being held in Long Beach, CA, from December 4th through 9th, 2017.
The event, which is completely sold out, is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers.
Microsoft has always had a strong presence at NIPS and this year is no different. We have several employees taking part at the event, including as organizing committee members, workshop and symposium organizers, invited speakers and more. Our researchers and engineers have co-authored dozens of accepted papers, contributed to several posters and are also involved in key workshops that are a part of the event.
For complete details, including the list of accepted papers, posters and workshops from our team, as well as links to ML-related job opportunities at Microsoft, be sure to check out the original post here.
We look forward to meeting several of you at NIPS Long Beach next week!
ML Blog Team
This post is authored by Daisy Deng, Software Engineer, and Abhinav Mithal, Senior Engineering Manager, at Microsoft.
The focus on machine learning and artificial intelligence has soared over the past few years, even as fast, scalable and reliable ML and AI solutions are increasingly viewed as being vital to business success. H2O.ai has lately been gaining fame in the AI world for its fast in-memory ML algorithms and for easy consumption in production. H2O.ai is designed to provide a fast, scalable, and open source ML platform and it recently added support for deep learning as well. There are many ways to run H2O.ai on Azure. This post provides an overview of how to efficiently develop and operationalize H2O.ai ML models on Azure.
H2O.ai can be deployed in many ways, including on a single node, on a multi-node cluster, in a Hadoop cluster, and in an Apache Spark cluster. H2O.ai is written in Java, so it naturally supports Java APIs, and since Scala runs on the JVM, it supports a Scala API as well. It also has rich interfaces for Python and R: the h2o R and h2o Python packages give R and Python users access to H2O.ai algorithms and functionality.
This post is authored by Barnam Bora, Program Manager in the Cloud AI group at Microsoft.
Microsoft’s Data Science Virtual Machines (DSVM) and Deep Learning Virtual Machines (DLVM) are a family of popular VM images in Windows Server and Linux flavors that are published on the Azure Marketplace. They have a curated but broad set of pre-configured machine learning and data science tools including pre-loaded samples. DSVM and DLVM are configured and tested to work seamlessly with a plethora of services available on the Microsoft Azure cloud, and they enable a wide array of data analytics scenarios that are being used by many organizations across the globe.
We recently hosted a webinar covering the workflow of building ML- and AI-powered solutions in Azure using DSVM, DLVM and related services such as Azure Batch AI and Azure Machine Learning Model Management. The webinar video is available from the link below (requires registration with Microsoft), and more information about the webinar is in the sections that follow.
Scenarios Covered in the Webinar
Single GPU Node AI Model Training
DSVM and DLVM are great tools to develop, test and deploy AI models and solutions. Data scientists