We are pleased to share the capability to rewrite HTTP headers in Azure Application Gateway. With this, you can add, remove, or update HTTP request and response headers while the request and response packets move between the client and backend application. You can also add conditions to ensure that the headers you specify are rewritten only when the conditions are met. The capability also supports several server variables which help store additional information about the requests and responses, thereby enabling you to make powerful rewrite rules.
Figure 1: Application Gateway removing the port information from the X-Forwarded-For header in the request and modifying the Location header in the response.
Rewriting the headers helps you accomplish several important scenarios. Some of the common use cases are mentioned below.
Remove port information from the X-Forwarded-For header
Application Gateway inserts an X-Forwarded-For header into all requests before it forwards them to the backend. The format of this header is a comma-separated list of IP:Port entries. However, there may be scenarios where the backend application requires the header to contain only the IP addresses. One such scenario is when the backend application is a Content Management System (CMS), because most CMSs cannot parse the port portion of the header entries.
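The transformation such a rewrite rule performs can be sketched in a few lines of Python. This is purely an illustration of the header format, not Application Gateway's actual implementation; the function name and sample addresses are hypothetical:

```python
def strip_ports(x_forwarded_for: str) -> str:
    """Drop the :port suffix from each entry of a comma-separated
    X-Forwarded-For value such as '203.0.113.7:49321, 10.0.0.4:58100'.
    Assumes IPv4 IP:Port entries (bare IPv6 addresses would need
    different handling, since they contain colons themselves)."""
    entries = [e.strip() for e in x_forwarded_for.split(",")]
    # rsplit on the last ':' so only the trailing port is removed
    return ", ".join(e.rsplit(":", 1)[0] for e in entries)

print(strip_ports("203.0.113.7:49321, 10.0.0.4:58100"))
# → 203.0.113.7, 10.0.0.4
```

In Application Gateway itself, the same effect is achieved declaratively with a rewrite rule rather than code.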
Every internet-facing web application, whether it serves a large audience or a small set of users in a single region, is by default a global application. Whether you are running a large news website with millions of users across the globe, a B2B application for managing your sales channels, or the site for a local pastry shop, your users are distributed and roaming across multiple locations, or your application demands deployment into multiple locations for high availability or disaster recovery. As a global application, your distributed users and deployments require you to maximize performance for your end users and keep the application always on through failures and attacks.
Today I am excited to announce the general availability of Azure Front Door Service (AFD), which we launched in preview last year: a scalable and secure entry point for fast delivery of your global applications. AFD is your one-stop solution for your global website or application and provides:
Application and API acceleration with anycast, using Microsoft's massive private global network to connect directly to your Azure-deployed backends, means your app runs with lower latency and higher throughput for your end users. Global HTTP load balancing enables routing each request to the fastest available backend, with near-instant failover when a backend becomes unhealthy.
Azure Front Door, ExpressRoute Direct and Global Reach now generally available
Today I'm excited to announce the availability of innovative and industry-leading Azure services that will help NAB attendees realize their vision of delivering for their audiences: Azure Front Door Service (AFD), ExpressRoute Direct, and Global Reach, as well as some cool new additions to both AFD and our Content Delivery Network (CDN).
This coming week, Microsoft will be at NAB Show 2019 in Las Vegas, bringing together an industry centered on the ability to deliver richer content experiences to audiences around the world. The media and entertainment industry will gather for an in-depth view of the present and future of media technology and innovation, showcasing new cloud services that optimize and scale rich content experiences.
Bringing the media industry to the cloud has a tremendous impact on the entire content workflow. From production and post to delivery and IT operations, cloud services enable companies to scale their ability to innovate, create, and bring more content to market. This transformation, however, starts somewhere else: with the most critical piece, the users and consumers of these services.
You have a great web application, and users from all over the world love it. Well, so do malicious attackers. Cyber-attacks grow each year in frequency and sophistication, and being unprotected against them exposes you to the risks of service interruptions, data loss, and tarnished reputation.
We have heard from many of you that security is a top priority when moving web applications onto the cloud. Today, we are very excited to announce the public preview of the Web Application Firewall (WAF) for the Azure Front Door service. By combining our global application and content delivery network with a natively integrated WAF engine, we now offer a highly available platform that helps you deliver your web applications to the world, both secure and fast!
WAF with Front Door service leverages the scale of, and the deep security investments we have made at, the Azure edge, and it is designed to protect you from multiple attack vectors such as injection-type attacks and volumetric DoS attacks. It inspects each incoming request at Azure's network edge, stops unwanted traffic before it reaches your backend servers, and offers protection at scale without sacrificing performance. With WAF for Front Door, you have the option to fine-tune the protection rules to your application's specific needs.
Azure Virtual Network (VNet) is the fundamental building block for any customer network. VNet lets you create your own private space in Azure, or, as I call it, your own network bubble. VNets are crucial to your cloud network as they offer isolation, segmentation, and other key benefits. Read more about VNet's key benefits in our documentation, "What is Azure Virtual Network?"
With VNets, you can connect your network in multiple ways. You can connect to on-premises networks using Point-to-Site (P2S) or Site-to-Site (S2S) VPN gateways, or ExpressRoute gateways. You can also connect to other VNets directly using VNet peering.
A customer's network can be expanded by peering virtual networks to one another. Traffic sent over VNet peering is completely private and stays on the Microsoft backbone; no extra hops or public internet are involved. Customers typically leverage VNet peering in a hub-and-spoke topology, where the hub hosts shared services and gateways and the spokes contain business units or applications.
Today I'd like to revisit a unique and powerful capability we've supported since day one with VNet peering. Gateway transit enables you to use a peered VNet's gateway for connecting to on-premises networks instead of creating a new gateway for connectivity.
Today we are excited to launch two new key capabilities to Azure Firewall.
Threat intelligence-based filtering
Service tags filtering
Azure Firewall is a cloud-native firewall-as-a-service offering that enables customers to centrally govern all their traffic flows using a DevOps approach. The service supports both application-level filtering rules (such as *.github.com) and network-level filtering rules. It is highly available and scales automatically as your traffic grows.
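To give a rough feel for how an application-level rule such as *.github.com matches a destination FQDN, here is a small Python sketch. It is illustrative only and is not Azure Firewall's matching engine; the function name is hypothetical:

```python
import fnmatch

def fqdn_matches(rule: str, fqdn: str) -> bool:
    """Case-insensitive wildcard match of a destination FQDN
    against a rule pattern like '*.github.com'."""
    return fnmatch.fnmatch(fqdn.lower(), rule.lower())

print(fqdn_matches("*.github.com", "api.github.com"))  # True
print(fqdn_matches("*.github.com", "github.com"))      # False: no subdomain
```

Note that with this naive pattern the bare apex domain does not match the wildcard; whether a given firewall product treats the apex as covered by *.domain is a product-specific semantic.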
Threat intelligence based filtering (preview)
Microsoft has a rich signal of both internal threat intelligence data and third-party sourced data. Our vast team of data scientists and cybersecurity experts is constantly mining this data to create a high-confidence list of known malicious IP addresses and domains. Azure Firewall can now be configured to alert on and deny traffic to and from known malicious IP addresses and domains in near real time. The IP addresses and domains are sourced from the Microsoft Threat Intelligence feed. The Microsoft Intelligent Security Graph powers Microsoft Threat Intelligence and provides security in multiple Microsoft products and services, including Azure Security Center and Azure Sentinel.
Threat intelligence-based filtering is enabled by default in alert mode for all Azure Firewall deployments, providing logging of all matching indicators. Customers can adjust this behavior, for example switching to alert-and-deny mode to block matching traffic, or turning the feature off.
A network virtual appliance (NVA) is a virtual appliance primarily focused on network functions virtualization. A typical network virtual appliance provides various layer 4 through layer 7 functions such as firewall, WAN optimization, application delivery control, routing, load balancing, IDS/IPS, proxying, SD-WAN edge, and more. While the public cloud may provide some of these functions natively, it is quite common to see customers deploying network virtual appliances from independent software vendors (ISVs). These capabilities in the public cloud enable hybrid solutions and are generally available through the Azure Marketplace.
What exactly is the network virtual appliance in the cloud?
A network virtual appliance is often a full Linux virtual machine (VM) image consisting of a Linux kernel plus user-level applications and services. When a VM is created, it first boots the Linux kernel to initialize the system and then starts up any application or management services needed to make the network virtual appliance functional. The cloud provider is responsible for the compute resources, while the ISV provides the image that represents the software stack of the virtual appliance.
Similar to a standard Linux distribution, the Linux kernel is integral to the NVA's image and is provided by the ISV, often derived from a standard distribution kernel.
Over the past few years, SONiC (Software for Open Networking in the Cloud), our open switch OS, has been in the fast lane. A diverse group of community partners have actively engaged with us to contribute to and support the evolution of the software.
SONiC is considered a live organism, always evolving. Microsoft and the community are developing, refining, and making SONiC freely available to anyone running global-scale or cloud-type networks, or to anyone with a healthy interest in advanced networking.
Being in control of the network fabric, and particularly having a hardware-agnostic approach across larger heterogeneous networks, is critical. SONiC was created to provide the foundational attributes we ourselves needed when we set out to build our global network, which powers both Azure and our other cloud services.
Recently, SONiC has received several enhancements and updates, along with additions to the ecosystem contributing to SONiC’s success.
Let’s take a look at what is new.
Global support now available
We are excited to see SONiC and its sibling SAI (Switch Abstraction Interface) being adopted by many global network innovators. Recently, both Dell EMC and Mellanox announced that SONiC will be offered as a switch OS option for customers using their respective hardware.
We are excited to announce the general availability of private endpoints for HDInsight clusters deployed in a virtual network. This feature enables enterprises to better isolate access to their HDInsight clusters from the public internet and enhance their security at the networking layer.
Previously, when customers deployed an HDInsight cluster in a virtual network, there was only one public endpoint available, in the form of https://<CLUSTERNAME>.azurehdinsight.net. This endpoint resolves to a public IP address for accessing the cluster. Customers who wanted to restrict incoming traffic had to use network security group (NSG) rules. Specifically, they had to white-list the IPs of the HDInsight management traffic as well as those of the end users who needed to access the cluster. Those end users might already have been located inside the virtual network, but they still had to be white-listed to reach the public endpoint, and it was hard to identify and white-list their dynamic IPs, which would often change.
With the introduction of private endpoints, customers can now use NSG rules to separate access from the public internet from access by end users within the virtual network's trusted boundary. The virtual network can also be extended to on-premises networks, so that on-premises users can reach the cluster through the private endpoint as well.