Connectivity has gone through a fundamental shift as more workloads and services have moved to the cloud. Traditional enterprise wide area networks (WANs) have been fixed in nature, without the ability to dynamically scale to meet modern customer demands. For customers increasingly applying a cloud-first approach to their application and networking strategy, hybrid cloud enables applications and services to be deployed cross-premises as a fully connected and seamless architecture. Connectivity across premises is moving to a more cloud-first model, with services offered by global hyper-scale networks.
Microsoft global network
Microsoft operates one of the largest networks on the globe, spanning over 130,000 miles of terrestrial and subsea fiber cable systems across 6 continents. Besides Azure, the global network powers all our cloud services, including Bing, Office 365, and Xbox. The network carries more than 30 billion packets per second at any one time and is accessible for peering, private connectivity, and application content delivery through our more than 160 global network PoPs. Microsoft continuously adds new network PoPs to optimize the experience for customers accessing Microsoft services.
The global network is built and operated using intelligent software-defined traffic engineering technologies that allow Microsoft
Burst encoding in the cloud with Azure and Media Excel HERO platform
Content creation has never been as in demand as it is today. Both professional and user-generated content have increased exponentially over the past years. This puts a lot of stress on media encoding and transcoding platforms. Add the upcoming 4K and even 8K to the mix, and you need a platform that can scale with these variables. Azure cloud compute offers a flexible way to grow with your needs. Microsoft offers various tools and products to fully support on-premises, hybrid, or native cloud workloads. Azure Stack offers support for hybrid scenarios for your computing needs, and Azure Arc helps you manage hybrid setups.
Finding a solution
Generally, 4K/UHD live encoding is done on dedicated hardware encoder units, which cannot be hosted in a public cloud like Azure. With such dedicated hardware units hosted on-premises needing to push 4K into the Azure data center, the immediate problem we face is the need for a high-bandwidth network connection between the on-premises encoder unit and the Azure data center. In general, it’s a best practice to ingest into multiple regions, increasing the load on the network connection between the
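The bandwidth pressure described above can be made concrete with a back-of-the-envelope sizing calculation. The bitrate, feed count, and headroom figures below are illustrative assumptions for a 4K contribution feed, not Media Excel HERO or Azure specifications:

```python
# Rough sizing sketch for contribution bandwidth from an on-premises
# 4K encoder into Azure. All numbers are illustrative assumptions.

UHD_CONTRIBUTION_MBPS = 80   # assumed contribution bitrate for one 4K live feed
REDUNDANT_FEEDS = 2          # assumed redundant ingest feeds per region
REGIONS = 2                  # ingesting into multiple regions, per the best practice above

def required_uplink_mbps(feed_mbps, feeds_per_region, regions, headroom=1.25):
    """Total uplink needed, with headroom for retransmits and protocol overhead."""
    return feed_mbps * feeds_per_region * regions * headroom

total = required_uplink_mbps(UHD_CONTRIBUTION_MBPS, REDUNDANT_FEEDS, REGIONS)
print(f"Estimated uplink: {total:.0f} Mbps")  # 80 * 2 * 2 * 1.25 = 400 Mbps
```

Even with conservative assumptions, multi-region ingest quickly multiplies the uplink requirement, which is why a dedicated high-bandwidth connection matters.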
At the recent Microsoft Ignite 2019 conference, we introduced two new and related perspectives on the future and roadmap of edge computing.
Before getting further into the details of Network Edge Compute (NEC) and Multi-access Edge Compute (MEC), let’s take a look at the key scenarios which are emerging in line with 5G network deployments. For a decade, we have been working with customers to move their workloads from their on-premises locations to Azure to take advantage of the massive economies of scale of the public cloud. We get this scale with the ongoing build-out of new Azure regions and the constant increase of capacity in our existing regions, reducing the overall costs of running data centers.
For most workloads, running in the cloud is the best choice. Our ability to innovate and run Azure as efficiently as possible allows customers to focus on their business instead of managing physical hardware and associated space, power, cooling, and physical security. Now, with the advent of 5G mobile technology promising larger bandwidth and better reliability, we see significant requirements for low latency offerings to enable scenarios such as smart-buildings, factories, and agriculture. The “smart” prefix highlights that there is a compute-intensive workload,
https://azure.microsoft.com/blog/enabling-and-securing-ubiquitous-compute-from-intelligent-cloud-to-intelligent-edge/
Enterprises are embracing the cloud to run their mission-critical workloads. The number of connected devices on and off-premises, and the data they generate, continue to increase, requiring new enterprise network edge architectures. We call this the intelligent edge – compute
One of the most important features of a disaster recovery tool is failover readiness. Administrators ensure this by watching for health signals from the product. Some also choose to set up their own monitoring solutions to track readiness. End-to-end testing is conducted using disaster recovery (DR) drills every three to six months. Azure Site Recovery offers this capability for replicated items, and customers rely heavily on test failovers or planned failovers to ensure that their applications work as expected. With Azure Site Recovery, customers are encouraged to use a non-production network for test failover so that IP addresses and networking components remain available in the target production network in case of an actual disaster. Even with a non-production network, the drill should be an exact replica of the actual failover.
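The three-to-six-month drill cadence mentioned above lends itself to a simple readiness check that administrators running their own monitoring might implement. This is a minimal sketch under that assumption; the function and threshold names are illustrative, not part of Azure Site Recovery:

```python
# Minimal sketch of a failover-readiness check run alongside product
# health signals. The 180-day cutoff mirrors the upper bound of the
# three-to-six-month DR drill guidance; names are illustrative.

from datetime import date, timedelta

MAX_DRILL_AGE = timedelta(days=180)  # upper bound of the 3-6 month window

def drill_overdue(last_test_failover: date, today: date) -> bool:
    """True if a replicated item has gone too long without a DR drill."""
    return today - last_test_failover > MAX_DRILL_AGE

print(drill_overdue(date(2019, 1, 10), date(2019, 9, 1)))  # well past 180 days -> True
```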
Until now, it has only been close to an exact replica. The networking configurations for test failover did not entirely match the failover settings. The choice of subnet, network security group, internal load balancer, and public IP address per network interface card (NIC) could not be made. This meant that customers had to follow a particular alphabetical naming convention for subnets in the test failover network to ensure the replicated items are
Customers love the scale of Azure, which gives them the ability to expand across the globe while remaining highly available. With the rapidly growing adoption of Azure, the need to access data and services privately and securely from customers’ networks has grown exponentially. To help with this, we’re announcing the preview of Azure Private Link.
Azure Private Link is a secure and scalable way for Azure customers to consume Azure Services like Azure Storage or SQL, Microsoft Partner Services or their own services privately from their Azure Virtual Network (VNet). The technology is based on a provider and consumer model where the provider and the consumer are both hosted in Azure. A connection is established using a consent-based call flow and once established, all data that flows between the service provider and service consumer is isolated from the internet and stays on the Microsoft network. There is no need for gateways, network address translation (NAT) devices, or public IP addresses to communicate with the service.
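The consent-based call flow described above can be sketched as a small state machine: the consumer requests a connection, and the provider approves or rejects it before any traffic flows. The class and state names below are illustrative, not the actual Azure resource model:

```python
# Conceptual sketch of Private Link's consent-based connection flow.
# A consumer requests a private endpoint to a provider's service; the
# provider must consent before the connection is usable. Names are
# illustrative, not Azure API types.

class PrivateEndpointConnection:
    def __init__(self, consumer_vnet: str, service_id: str):
        self.consumer_vnet = consumer_vnet
        self.service_id = service_id
        self.status = "Pending"   # created by the consumer, awaiting provider consent

    def approve(self):
        self.status = "Approved"  # traffic now stays on the Microsoft network

    def reject(self):
        self.status = "Rejected"  # consumer cannot reach the service

conn = PrivateEndpointConnection("consumer-vnet", "example-sql-service")
conn.approve()
print(conn.status)  # Approved
```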
Azure Private Link brings Azure services inside the customer’s private VNet. The service resources can be accessed using the private IP address just like any other resource in the VNet. This significantly simplifies the
Staying connected to access and ingest data in today’s highly distributed application environments is paramount for any enterprise. Many businesses need to operate in and across highly unpredictable and challenging conditions. For example, energy, farming, mining, and shipping often need to operate in remote, rural, or other isolated locations with poor network connectivity.
With the cloud now the de facto and primary target for the bulk of application and infrastructure migrations, access from remote and rural locations becomes even more important. The path to realizing the value of the cloud starts with a hybrid environment that accesses resources over dedicated and private connectivity.
Network performance for these hybrid scenarios from rural and remote sites is increasingly critical. Globally connected organizations, the explosive number of connected devices and data in the cloud, emerging areas such as autonomous driving, and traditional remote locations such as cruise ships are all directly affected by connectivity performance. Other examples requiring highly available, fast, and predictable network service include managing supply chain systems from remote farms or transferring data to optimize equipment maintenance in aerospace.
Today, I want to share the progress we have made to help customers address and solve these issues. Satellite
Providing users fast and reliable access to their cloud services, apps, and content is pivotal to a business’ success.
The latency when accessing cloud-based services can be an inhibitor of cloud adoption or migration. In most cases, this is caused by commercial internet connections that aren’t tailored to today’s global cloud needs. Through the deployment and operation of globally and strategically placed edge sites, Microsoft dramatically accelerates the performance and experience of accessing apps, content, or services such as Azure and Office 365 on the Microsoft global network.
Edges optimize network performance through local access points to and from the vast Microsoft global network, in many cases providing 10x the acceleration to access and consume cloud-based content and services from Microsoft.
What is the network edge?
Providing faster network access alone isn’t enough; applications need intelligent services to expedite and simplify how a global audience accesses and experiences their offerings. Edge sites provide application development teams increased visibility and higher availability for the services that improve how they deliver global applications.
Edge sites benefit infrastructure and development teams in multiple key areas, including improved optimization for application delivery through Azure Front Door (AFD). Microsoft recently announced AFD, which allows
Azure introduced an advanced, more efficient Load Balancer platform in late 2017. This platform adds a whole new set of abilities for customer workloads using the new Standard Load Balancer. One of the key additions the new Load Balancer platform brings is simplified, more predictable, and more efficient outbound connectivity management.
While already integrated with Standard Load Balancer, we are now bringing this advantage to the rest of Azure deployments. In this blog, we will explain what it is and how it makes life better for all our customers. An important change that we want to focus on is the outbound connectivity behavior before and after platform integration, as this is a very important design point for our customers.
Load Balancer and Source NAT
Azure deployments use one or more of three scenarios for outbound connectivity, depending on the customer’s deployment model and the resources utilized and configured. Azure uses Source Network Address Translation (SNAT) to enable these scenarios. When multiple private IP addresses or roles share the same public IP (a public IP address assigned to the Load Balancer, used for outbound rules, or an automatically assigned public IP address for standalone virtual machines), Azure uses port masquerading SNAT (PAT) to translate
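The port-masquerading idea can be illustrated with a toy translator: many private source addresses share one public IP, distinguished by the ephemeral port assigned to each flow. The port range and allocation logic below are deliberately simplified and are not the actual Azure Load Balancer SNAT algorithm:

```python
# Toy illustration of port-masquerading SNAT (PAT): many private
# (ip, port) pairs share one public IP, each mapped to a distinct
# ephemeral port. Sequential allocation is a simplification; Azure's
# real port allocation scheme differs.

class PortMasqueradingSnat:
    def __init__(self, public_ip: str, port_range=range(1024, 65536)):
        self.public_ip = public_ip
        self._ports = iter(port_range)  # pool of ephemeral SNAT ports
        self.table = {}  # (private_ip, private_port) -> (public_ip, snat_port)

    def translate(self, private_ip: str, private_port: int):
        """Return the public (ip, port) for a private flow, allocating on first use."""
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = (self.public_ip, next(self._ports))
        return self.table[key]

nat = PortMasqueradingSnat("52.0.0.1")
print(nat.translate("10.0.0.4", 5000))  # ('52.0.0.1', 1024)
print(nat.translate("10.0.0.5", 5000))  # same public IP, different SNAT port
```

Note that an existing flow keeps its mapping: translating the same private (ip, port) again returns the same public port, which is what makes return traffic routable.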