Organizations around the world are gearing up for a future powered by artificial intelligence (AI). From supply chain systems to genomics, and from predictive maintenance to autonomous systems, AI is being applied to every aspect of this transformation. This raises a very important question: how are we making sure that AI systems and models show the right ethical behavior and deliver results that can be explained and backed with data?
This week at Spark + AI Summit, we talked about Microsoft’s commitment to the advancement of AI and machine learning driven by principles that put people first.
Understand, protect, and control your machine learning solution
Over the past several years, machine learning has moved out of research labs and into the mainstream and has grown from a niche discipline for data scientists with PhDs to one where all developers are empowered to participate. With power comes responsibility. As the audience for machine learning expands, practitioners are increasingly asked to build AI systems that are easy to explain and that comply with privacy regulations.
Large enterprise customers running business-critical workloads on Azure manage thousands of subscriptions and use automation for deployment and management of their Azure resources. Expert support for these customers is critical in achieving success and operational health of their business. Today, customers can keep running their Azure solutions smoothly with self-help resources, such as diagnosing and solving problems in the Azure portal, and by creating support tickets to work directly with technical support engineers.
We have heard feedback from our customers and partners that automating support procedures is key to helping them move faster in the cloud and focus on their core business. Integrating internal monitoring applications and websites with Azure support tickets has been one of their top asks. Customers expect to create, view, and manage support tickets without having to sign in to the Azure portal. This gives them the flexibility to associate the issues they are tracking with the support tickets they raise with Microsoft. The ability to programmatically raise and manage support tickets when an issue occurs is a critical step for them in Azure usability.
We’re happy to share that the Azure Support API is now generally available. With this API, customers can integrate the creation and management of support tickets directly into their
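As an illustrative sketch of what that programmatic integration could look like, the snippet below builds the URL and JSON body for the Support API's create-ticket call. The property names follow the general shape of the Microsoft.Support REST schema, but the serviceId and problemClassificationId values are placeholders, and the exact field set should be verified against the API reference:

```python
import json

def build_ticket_request(subscription_id, ticket_name, title, description,
                         severity="moderate"):
    """Build the URL and body for a PUT that creates a support ticket
    through the Microsoft.Support resource provider (shape is illustrative)."""
    url = (
        "https://management.azure.com/subscriptions/"
        f"{subscription_id}/providers/Microsoft.Support/supportTickets/"
        f"{ticket_name}?api-version=2020-04-01"
    )
    body = {
        "properties": {
            "title": title,
            "description": description,
            "severity": severity,  # e.g. minimal | moderate | critical
            # Placeholders: real values come from the Services and
            # ProblemClassifications list operations.
            "serviceId": "<service-arm-id>",
            "problemClassificationId": "<problem-classification-arm-id>",
        }
    }
    return url, json.dumps(body)
```

A monitoring system could build this request when an alert fires and send it with its preferred HTTP client, using a standard Azure Resource Manager bearer token for authentication.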
This blog was co-authored by MacKenzie Olson, Program Manager, Azure Container Instances.
Today we’re excited about the first release of the new Docker Desktop integration with Microsoft Azure. Last month Microsoft and Docker announced this collaboration, and today you can experience it for yourself.
The new edge release of Docker Desktop provides an integration between Docker and Microsoft Azure that enables you to use native Docker commands to run your applications as serverless containers with Azure Container Instances.
You can use the Docker CLI to quickly and easily sign into Azure, create a Container Instances context using an Azure subscription and resource group, then run your single-container applications on Container Instances using docker run. You can also deploy multi-container applications to Container Instances that are defined in a Docker Compose file using docker compose up.
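Assuming the edge release described above, the end-to-end flow looks roughly like this; the context name is arbitrary, and the exact flags and prompts may differ between Docker Desktop releases:

```shell
# Sign in to Azure from the Docker CLI
docker login azure

# Create an ACI context tied to a subscription and resource group, then select it
docker context create aci myacicontext
docker context use myacicontext

# Run a single-container application on Azure Container Instances
docker run -d -p 80:80 nginx

# Deploy a multi-container application defined in docker-compose.yml
docker compose up
```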
Code-to-Cloud with serverless containers
Azure Container Instances is a great solution for running a single Docker container or an application comprised of multiple containers defined with a Docker Compose file. With Container Instances, you can run your containers in the cloud without needing to set up any infrastructure and take advantage of features such as mounting Azure Storage and GitHub repositories as volumes. Because
We are excited to announce that Azure Load Balancer customers now have instant access to a packaged solution for health monitoring and configuration analysis. Built as part of Azure Monitor for Networks, customers now have topological maps for all their Load Balancer configurations and health dashboards for their Standard Load Balancers preconfigured with relevant metrics.
Through this, you have a window into the health and configuration of your networks, enabling rapid fault localization and informed design decisions. You can access this through the Insights blade of each Load Balancer resource and Azure Monitor for Networks, a central hub that provides access to health and connectivity monitoring for all your network resources.
Visualize functional dependencies
The functional dependency view will enable you to picture even the most complex load balancer setups. With visual feedback on Load Balancing rules, Inbound NAT rules, and backend pool resources, you can make updates while keeping a complete picture of your configuration in mind.
For Standard Load Balancers, your backend pool resources are color-coded with Health Probe status, empowering you to visualize the current availability of your network to serve traffic. Alongside this topology, you are presented with a time-based graph of health status,
With massive workforces now remote, the stress on IT admins and security professionals is compounded by increased pressure to keep everyone productive and connected while combatting evolving threats. Now more than ever, organizations need to reduce costs and keep up with compliance requirements, all while managing risk in this constantly evolving landscape.
Azure Security Center is a unified infrastructure security management system that strengthens the security posture of your data centers and provides advanced threat protection across your hybrid workloads, whether they run in Azure, in other clouds, or on-premises.
Last week Ann Johnson, Corporate Vice President, Cybersecurity Solutions Group, shared news of an upcoming Azure Security Center virtual event—Stay Ahead of Attacks with Azure Security Center on June 30, 2020, from 10:00 AM to 11:00 AM Pacific Time. It’s a great opportunity to learn threat protection strategies from the Microsoft security community and to hear how your peers are tackling tough and evolving security challenges.
At the event, you’ll learn how to strengthen your cloud security posture and achieve deep and broad threat protection across cloud workloads—in Azure, on-premises, and in hybrid cloud. We will also talk about how to combine Security Center with Azure Sentinel
When you’re the company that builds the cloud platforms used by millions of people, your own cloud content needs to be served up fast. Azure.com, a complex, cloud-based application that serves millions of people every day, is built entirely from Azure components and runs on Azure.
Microsoft culture has always been about using our own tools to run our business. Azure.com serves as an example of the convenient platform-as-a-service (PaaS) option that Azure provides for agile web development. We trust Azure to run Azure.com with 99.99-percent availability across a global network capable of a round-trip time (RTT) of less than 100 milliseconds per request.
In part two of our two-part series, we share our blueprint so you can learn from our experience building a website at planetary scale and move forward with your own website transformation.
This post will help you get a technical perspective on the infrastructure and resources that make up Azure.com. For details about our design principles, read Azure.com operates on Azure part 1: Design principles and best practices.
The architecture of a global footprint
With Azure.com, our goal is to run a world-class website in a cost-effective manner at planetary scale. To do this, we currently run more than
Azure puts powerful cloud computing tools into the hands of creative people around the world. So, when your website is the face of that brand, you better use what you build, and it better be good. As in, 99.99-percent composite SLA good.
That’s our job at Azure.com, the platform where Microsoft hopes to inspire people to invent the next great thing. Azure.com serves up content to millions of people every day. It reaches people in nearly every country and is localized in 27 languages. It does all this while running on the very tools it promotes.
In developing Azure.com, we practice what we preach. We follow the guiding principles that we advise our customers to adopt and the principles of sustainable software engineering (SSE). Even this blog post is hosted on the very infrastructure that it describes.
In part one of our two-part series, we will peek behind the Azure.com web page to show you how we think about running a major brand website on a global scale. We will share our design approach and best practices for security, resiliency, scalability, availability, environmental sustainability, and cost-effective operations—on a global scale.
Products, features, and demos supported on Azure.com
As a content
Today, we see a huge shift to remote work due to the global pandemic. Organizations around the world need to enable more of their employees to work remotely. We are working to address common infrastructure challenges businesses face when helping remote employees stay connected at scale.
A common operational challenge is to seamlessly connect remote users to on-premises resources. Even within Microsoft, we’ve seen our typical remote access of roughly 55,000 employees spike to as high as 128,000 employees while we work to protect our staff and communities during the global pandemic. Traditionally, you planned for increased user capacity, deployed additional on-premises connectivity resources, and had time to rearrange routing infrastructure to meet organizational transit connectivity and security requirements. Today’s dynamic environment demands rapid enablement of remote connectivity. Azure Virtual WAN supports multiple scenarios, providing large-scale connectivity and security in a few clicks.
Azure Virtual WAN provides network and security in a unified framework. Typically deployed with a hub and spoke topology, the Azure Virtual WAN architecture enables scenarios such as:
- Branch connectivity via connectivity automation provided by Virtual WAN VPN/SD-WAN partners.
- IPsec VPN connectivity.
- Remote User VPN (Point-to-Site) connectivity.
- Private (ExpressRoute) connectivity.
- Intra-cloud connectivity (transitive connectivity for
Securing any environment requires multiple lines of defense. Azure Container Registry recently announced the general availability of features like Azure Private Link, customer-managed keys, dedicated data-endpoints, and Azure Policy definitions. These features provide tools to secure Azure Container Registry as part of the container end-to-end workflow.
By default, when you store images and other artifacts in an Azure Container Registry, content is automatically encrypted at rest with Microsoft-managed keys.
Choosing Microsoft-managed keys means that Microsoft manages the key’s lifecycle. Many organizations have stricter compliance needs that require ownership and management of the key’s lifecycle and access policies. In such cases, customers can choose customer-managed keys that are created and maintained in a customer’s Azure Key Vault instance. Since the keys are stored in Key Vault, customers can also closely monitor access to these keys using the built-in diagnostics and audit logging capabilities in Key Vault. Customer-managed keys supplement the default encryption capability with an additional encryption layer using keys provided by customers. See details on how you can create a registry enabled for customer-managed keys.
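As a hedged sketch, enabling customer-managed keys at registry creation time looks roughly like the following Azure CLI sequence. Resource names are placeholders; a Premium-tier registry and a user-assigned identity with access to the Key Vault key are assumed, and the exact flags should be checked against the ACR documentation:

```shell
# Create a user-assigned managed identity for the registry
az identity create --resource-group myRG --name myACRIdentity

# Create the key in Key Vault that will encrypt registry content
az keyvault key create --vault-name myKeyVault --name myACRKey --protection software

# Create a Premium registry with customer-managed key encryption enabled
az acr create --resource-group myRG --name myRegistry --sku Premium \
  --identity <identity-resource-id> \
  --key-encryption-key <key-vault-key-id>
```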
Today we are announcing the general availability of the Rules Engine feature on both Azure Front Door and Azure Content Delivery Network (CDN). Rules Engine places the specific routing needs of your customers at the forefront of Azure’s global application delivery services, giving you more control in how you define and enforce what content gets served from where. Both services offer customers the ability to deliver content fast and securely using Azure’s best-in-class network. We have learned a lot from our customers during the preview and look forward to sharing the latest updates going into general availability.
How Rules Engine works
We recently talked about how we are building and evolving the architecture and design of Azure Front Door Rules Engine. The Rules Engine implementation for Content Delivery Network follows a similar design. However, rather than creating groups of rules in Rules Engine Configurations, all rules are created and applied to each Content Delivery Network endpoint. Content Delivery Network Rules Engine also supports a global rule, which acts as a default rule for each endpoint and always triggers its action.
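To make the rule model concrete, here is an illustrative sketch of a single delivery rule that matches plain-HTTP requests and redirects them to HTTPS. The structure loosely follows the shape of the CDN delivery-rule schema, but the property names here are approximations rather than the authoritative contract:

```json
{
  "name": "redirectToHttps",
  "order": 1,
  "conditions": [
    {
      "name": "RequestScheme",
      "parameters": { "operator": "Equal", "matchValues": [ "HTTP" ] }
    }
  ],
  "actions": [
    {
      "name": "UrlRedirect",
      "parameters": { "redirectType": "Found", "destinationProtocol": "Https" }
    }
  ]
}
```

A global rule would take the same shape but omit the match conditions, so its actions apply to every request hitting the endpoint.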
General availability capabilities
Azure Front Door
The most important feedback we heard during the Azure Front Door