Category Archives: Networking

09 Dec

Networking enables the new world of Edge and 5G Computing

At the recent Microsoft Ignite 2019 conference, we introduced two new and related perspectives on the future and roadmap of edge computing.

Before getting further into the details of Network Edge Compute (NEC) and Multi-access Edge Compute (MEC), let’s take a look at the key scenarios which are emerging in line with 5G network deployments. For a decade, we have been working with customers to move their workloads from their on-premises locations to Azure to take advantage of the massive economies of scale of the public cloud. We get this scale with the ongoing build-out of new Azure regions and the constant increase of capacity in our existing regions, reducing the overall costs of running data centers.

For most workloads, running in the cloud is the best choice. Our ability to innovate and run Azure as efficiently as possible allows customers to focus on their business instead of managing physical hardware and associated space, power, cooling, and physical security. Now, with the advent of 5G mobile technology promising larger bandwidth and better reliability, we see significant requirements for low latency offerings to enable scenarios such as smart-buildings, factories, and agriculture. The “smart” prefix highlights that there is a compute-intensive workload,


02 Dec

Application Gateway Ingress Controller for Azure Kubernetes Service

https://azure.microsoft.com/blog/application-gateway-ingress-controller-for-azure-kubernetes-service/


05 Nov

Enabling and securing ubiquitous compute from intelligent cloud to intelligent edge

https://azure.microsoft.com/blog/enabling-and-securing-ubiquitous-compute-from-intelligent-cloud-to-intelligent-edge/

Enterprises are embracing the cloud to run their mission-critical workloads. The number of connected devices on and off-premises, and the data they generate, continue to increase, requiring new enterprise network edge architectures. We call this the intelligent edge – compute


28 Oct

Customize networking for DR drills: Azure Site Recovery

One of the most important features of a disaster recovery tool is failover readiness. Administrators ensure this by watching out for health signals from the product. Some also choose to set up their own monitoring solutions to track readiness. End-to-end testing is conducted using disaster recovery (DR) drills every three to six months. Azure Site Recovery offers this capability for replicated items, and customers rely heavily on test failovers or planned failovers to ensure that their applications work as expected. With Azure Site Recovery, customers are encouraged to use a non-production network for test failover so that IP addresses and networking components remain available in the target production network in case of an actual disaster. Even with a non-production network, the drill should be an exact replica of the actual failover.

Until now, it has been close to being a replica. The networking configurations for test failover did not entirely match the failover settings. The choice of subnet, network security group, internal load balancer, and public IP address per network interface controller (NIC) could not be made. This meant that customers had to follow a particular alphabetical naming convention for subnets in the test failover network to ensure the replicated items are
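To picture what that per-NIC choice looks like, here is a purely illustrative sketch of the settings involved; the class and names below are hypothetical and are not the Site Recovery API.

```python
# Purely illustrative data shape, not the Azure Site Recovery API: the per-NIC
# test failover settings described above (subnet, NSG, internal load balancer,
# public IP) chosen explicitly rather than inferred from subnet naming.
# All names and IDs below are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestFailoverNicSettings:
    nic_name: str
    target_subnet: str
    network_security_group_id: Optional[str] = None
    internal_load_balancer_id: Optional[str] = None
    public_ip_address_id: Optional[str] = None

drill_settings = [
    TestFailoverNicSettings(
        nic_name="web-vm-nic0",
        target_subnet="test-frontend",
        network_security_group_id="/subscriptions/<sub-id>/.../nsg-test-frontend",
        public_ip_address_id=None,  # keep the drill isolated from the internet
    ),
    TestFailoverNicSettings(
        nic_name="sql-vm-nic0",
        target_subnet="test-data",
        internal_load_balancer_id="/subscriptions/<sub-id>/.../ilb-test-sql",
    ),
]
```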


17 Sep

Announcing Azure Private Link

Customers love the scale of Azure, which gives them the ability to expand across the globe while staying highly available. With the rapidly growing adoption of Azure, customers' need to access data and services privately and securely from their networks is growing exponentially. To help with this, we're announcing the preview of Azure Private Link.

Azure Private Link is a secure and scalable way for Azure customers to consume Azure Services like Azure Storage or SQL, Microsoft Partner Services or their own services privately from their Azure Virtual Network (VNet). The technology is based on a provider and consumer model where the provider and the consumer are both hosted in Azure. A connection is established using a consent-based call flow and once established, all data that flows between the service provider and service consumer is isolated from the internet and stays on the Microsoft network. There is no need for gateways, network address translation (NAT) devices, or public IP addresses to communicate with the service.
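One practical way to see this in action is to confirm that a Private Link-enabled endpoint resolves to an address inside your VNet rather than a public IP. The quick check below uses only the Python standard library; the FQDN and address space are placeholders.

```python
# Sanity check: verify that a service endpoint resolves to a private IP inside
# the VNet's address space, i.e. traffic will stay on the Microsoft network
# rather than traverse the public internet.
# The FQDN and VNet CIDR below are hypothetical placeholders.
import socket
import ipaddress

VNET_ADDRESS_SPACE = ipaddress.ip_network("10.1.0.0/16")   # example VNet range
SERVICE_FQDN = "mystorageaccount.privatelink.blob.core.windows.net"  # placeholder

def resolves_privately(fqdn: str, vnet: ipaddress.IPv4Network) -> bool:
    """Return True if the FQDN resolves to an address inside the VNet."""
    resolved = ipaddress.ip_address(socket.gethostbyname(fqdn))
    print(f"{fqdn} -> {resolved}")
    return resolved in vnet

if __name__ == "__main__":
    if resolves_privately(SERVICE_FQDN, VNET_ADDRESS_SPACE):
        print("Endpoint resolves to a private IP: traffic stays on the private link.")
    else:
        print("Endpoint resolves publicly: the private endpoint or DNS zone may not be configured.")
```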

Azure Private Link brings Azure services inside the customer’s private VNet. The service resources can be accessed using the private IP address just like any other resource in the VNet. This significantly simplifies the


08 Sep

Satellite connectivity expands reach of Azure ExpressRoute across the globe

Staying connected to access and ingest data in today’s highly distributed application environments is paramount for any enterprise. Many businesses need to operate in and across highly unpredictable and challenging conditions. For example, energy, farming, mining, and shipping often need to operate in remote, rural, or other isolated locations with poor network connectivity.

With the cloud now the de facto and primary target for the bulk of application and infrastructure migrations, access from remote and rural locations becomes even more important. The path to realizing the value of the cloud starts with a hybrid environment that can access cloud resources over dedicated and private connectivity.

Network performance for these hybrid scenarios from rural and remote sites becomes increasingly critical. Globally connected organizations, the explosive number of connected devices and the data they push to the cloud, emerging areas such as autonomous driving, and traditional remote locations such as cruise ships are all directly affected by connectivity performance. Other examples requiring highly available, fast, and predictable network service include managing supply chain systems from remote farms or transferring data to optimize equipment maintenance in aerospace.
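To put that in perspective, a rough, back-of-the-envelope calculation shows why the orbit a satellite link uses matters so much for these scenarios; the altitudes below are nominal values of my own choosing, and real-world latency adds ground-segment and processing overhead on top of the physics.

```python
# Back-of-the-envelope propagation delay for satellite links: the best-case
# latency is bounded by distance to the satellite and the speed of light.
# Altitudes are nominal values, not figures from the article.
SPEED_OF_LIGHT_KM_S = 299_792

def one_way_hop_ms(altitude_km: float) -> float:
    """Ground -> satellite -> ground, straight up and down (best case)."""
    return 2 * altitude_km / SPEED_OF_LIGHT_KM_S * 1000

for name, altitude_km in [("GEO (~35,786 km)", 35_786), ("LEO (~550 km)", 550)]:
    one_way = one_way_hop_ms(altitude_km)
    print(f"{name}: ~{one_way:.0f} ms one-way hop, ~{2 * one_way:.0f} ms round trip (minimum)")
```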

Today, I want to share the progress we have made to help customers address and solve these issues. Satellite


27 Aug

Latency is the new currency of the Cloud: Announcing 31 new Azure edge sites

Providing users fast and reliable access to their cloud services, apps, and content is pivotal to a business’ success.

The latency when accessing cloud-based services can be an inhibitor to cloud adoption or migration. In most cases, this is caused by commercial internet connections that aren't tailored to today's global cloud needs. Through the deployment and operation of globally and strategically placed edge sites, Microsoft dramatically accelerates the performance and experience of accessing apps, content, or services such as Azure and Office 365 on the Microsoft global network.

Edges optimize network performance through local access points to and from the vast Microsoft global network, in many cases providing 10x acceleration when accessing and consuming cloud-based content and services from Microsoft.
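If you want to see this effect for yourself, a simple probe of connection setup time is a reasonable first approximation. The sketch below uses plain TCP connect time as a rough proxy for network round trip; the hostnames are just examples.

```python
# Minimal latency probe: median TCP connect time as a rough proxy for network
# round trip, useful for comparing endpoints reached via a nearby edge site
# versus a more distant path. Hostnames are examples only.
import socket
import time

def tcp_connect_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Return the median TCP connect time to host:port in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        timings.append((time.perf_counter() - start) * 1000)
    timings.sort()
    return timings[len(timings) // 2]

for host in ["www.microsoft.com", "azure.microsoft.com"]:
    print(f"{host}: ~{tcp_connect_ms(host):.1f} ms TCP connect")
```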

What is the network edge?

Solely providing faster network access isn't enough; applications need intelligent services to expedite and simplify how a global audience accesses and experiences their offerings. Edge sites provide application development teams increased visibility and higher availability to access services that improve how they deliver global applications.

Edge sites benefit infrastructure and development teams in multiple key areas, including improved optimization for application delivery through Azure Front Door (AFD). Microsoft recently announced AFD, which allows


26 Aug

Azure Load Balancer becomes more efficient

Azure introduced an advanced, more efficient Load Balancer platform in late 2017. This platform adds a whole new set of capabilities for customer workloads using the new Standard Load Balancer. One of the key additions the new Load Balancer platform brings is simplified, more predictable, and more efficient outbound connectivity management.

While already integrated with Standard Load Balancer, we are now bringing this advantage to the rest of Azure deployments. In this blog, we will explain what it is and how it makes life better for all our customers. An important change that we want to focus on is the outbound connectivity behavior before and after platform integration, as this is a very important design point for our customers.

Load Balancer and Source NAT

Azure deployments use one or more of three scenarios for outbound connectivity, depending on the customer's deployment model and the resources utilized and configured. Azure uses Source Network Address Translation (SNAT) to enable these scenarios. When multiple private IP addresses or roles share the same public IP (a public IP address assigned to the Load Balancer, used for outbound rules, or a public IP address automatically assigned to standalone virtual machines), Azure uses port masquerading SNAT (PAT) to translate
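For readers new to port masquerading, the toy sketch below shows the idea: many private source (IP, port) pairs are mapped onto one public IP, each borrowing a distinct public port, and new flows fail once the port pool is exhausted. The addresses and port range are illustrative only and do not reflect Azure's actual allocation policy.

```python
# Toy illustration of port masquerading SNAT (PAT): many private (IP, port)
# flows share one public IP, each assigned a distinct public port.
# Addresses and the tiny port pool are illustrative, not Azure's behavior.
PUBLIC_IP = "203.0.113.10"                 # example public IP on the load balancer
ephemeral_ports = iter(range(1024, 1032))  # deliberately tiny pool for the demo

snat_table: dict[tuple[str, int], tuple[str, int]] = {}

def translate(private_ip: str, private_port: int) -> tuple[str, int]:
    """Map a private source (IP, port) to a (public IP, public port) pair."""
    key = (private_ip, private_port)
    if key not in snat_table:
        try:
            snat_table[key] = (PUBLIC_IP, next(ephemeral_ports))
        except StopIteration:
            raise RuntimeError("SNAT port exhaustion: no free public ports left")
    return snat_table[key]

for flow in [("10.0.0.4", 50001), ("10.0.0.5", 50001), ("10.0.0.4", 50002)]:
    print(f"{flow} -> {translate(*flow)}")
```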


08 Aug

Building Resilient ExpressRoute Connectivity for Business Continuity and Disaster Recovery

As more and more organizations adopt Azure for their business-critical workloads, the connectivity between organizations' on-premises networks and Microsoft becomes crucial. ExpressRoute provides private connectivity between on-premises networks and Microsoft. By default, an ExpressRoute circuit provides redundant network connections to the Microsoft backbone network and is designed for carrier-grade high availability. However, the high availability of a network connection is only as good as the robustness of the weakest link in its end-to-end path. Therefore, it is imperative that the customer and service provider segments of the ExpressRoute connectivity are also architected for high availability.
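A quick, back-of-the-envelope calculation makes the "weakest link" point concrete: end-to-end availability is roughly the product of the availabilities of the segments in series, and a redundant link only lifts the segment it protects. The figures below are assumed for illustration and are not ExpressRoute SLAs.

```python
# Illustrative availability arithmetic (assumed figures, not ExpressRoute SLAs):
# segments in series multiply, so the weakest link dominates; redundant,
# independent links within a segment raise that segment's availability.
def serial(*availabilities: float) -> float:
    """All segments must be up (first mile, provider, Microsoft edge, ...)."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def parallel(a: float, n: int = 2) -> float:
    """n redundant, independent links: the segment is down only if all fail."""
    return 1 - (1 - a) ** n

single_first_mile = serial(0.99, 0.999, 0.9995)            # weakest link dominates
redundant_first_mile = serial(parallel(0.99), 0.999, 0.9995)
print(f"single first mile:    {single_first_mile:.4%}")
print(f"redundant first mile: {redundant_first_mile:.4%}")
```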

Designing for high availability with ExpressRoute addresses these design considerations and describes how to architect robust end-to-end ExpressRoute connectivity between a customer's on-premises network and the Microsoft network core. The document covers how to maximize the high availability of ExpressRoute connectivity in general, as well as the components specific to private peering and to Microsoft peering.

Private Peering High Availability

Building each component of the ExpressRoute connectivity for high availability is key, from the first mile between on-premises and the peering location, to connecting multiple circuits to the same virtual network (VNet), to the virtual network gateway within the VNet.

To improve the availability of ExpressRoute virtual


29 Jul

Choosing between Azure VNet Peering and VNet Gateways

As customers adopt Azure and the cloud, they need fast, private, and secure connectivity across regions and Azure Virtual Networks (VNets). Based on the type of workload, customer needs vary. For example, if you want to ensure data replication across geographies you need a high bandwidth, low latency connection. Azure offers connectivity options for VNet that cater to varying customer needs, and you can connect VNets via VNet peering or VPN gateways.

It is not surprising that VNet is the fundamental building block for any customer network. VNet lets you create your own private space in Azure, or as I call it, your own network bubble. VNets are crucial to your cloud network as they offer isolation, segmentation, and other key benefits. Read more about VNet's key benefits in our documentation, "What is Azure Virtual Network?"

VNet peering

VNet peering enables you to seamlessly connect Azure virtual networks. Once peered, the VNets appear as one for connectivity purposes. The traffic between virtual machines in the peered virtual networks is routed through the Microsoft backbone infrastructure, much like traffic is routed between virtual machines in the same VNet, through private IP addresses only. No public internet is involved. You can peer
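As a rough idea of what setting this up programmatically can look like, here is a minimal sketch assuming the azure-identity and azure-mgmt-network Python packages; the subscription, resource group, and VNet names are placeholders, and a matching peering must also be created from the remote VNet.

```python
# Minimal sketch of creating one direction of a VNet peering, assuming the
# azure-identity and azure-mgmt-network packages are installed and the caller
# has permissions on both VNets. All names and IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

remote_vnet_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/rg-spoke"
    "/providers/Microsoft.Network/virtualNetworks/vnet-spoke"
)

poller = client.virtual_network_peerings.begin_create_or_update(
    "rg-hub",        # resource group of the local VNet
    "vnet-hub",      # local VNet name
    "hub-to-spoke",  # peering name
    {
        "remote_virtual_network": {"id": remote_vnet_id},
        "allow_virtual_network_access": True,  # private-IP connectivity across the peering
        "allow_forwarded_traffic": False,
        "use_remote_gateways": False,
    },
)
# Stays "Initiated" until the reverse peering (spoke-to-hub) is also created.
print(poller.result().peering_state)
```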
