We’re committed to making Azure work great with the open source tools you know and love, and if you’re using Chef products or open source projects, there’s never been a better time to try Azure. We’ve had a rich history of partnership and collaboration with Chef to deliver automation tools that help you with cloud adoption. Today, at ChefConf, the Chef and Azure teams are excited to announce the inclusion of Chef InSpec directly in Azure Cloud Shell, as well as the new Chef Developer Hub in Azure Docs.
InSpec in Azure Cloud Shell
In addition to other open source tools like Ansible and Terraform that are already available, today we are announcing the availability of Chef InSpec, pre-installed and ready to use for every Azure user in the Azure Cloud Shell. This makes bringing your InSpec tests to Azure simple; in fact, it’s the easiest way to try out InSpec, since no installation or configuration is required.
Figure 1: InSpec Exec within Azure Cloud Shell
Chef Developer Hub for Azure
We are launching the new Chef Developer Hub so Azure customers can more easily implement their solutions using Chef open source software. Whether you’re using Chef, InSpec, or
Azure Traffic Manager, Azure’s DNS-based load balancing solution, is used by customers for a wide variety of use cases, including routing a global user base to the Azure endpoints that give them the fastest, lowest-latency experience, providing seamless auto-failover for mission-critical workloads, and migrating from on-premises to the cloud. One key use case is making software deployments smoother, with minimal impact on users, by implementing a Blue-Green deployment process using Traffic Manager’s weighted round-robin routing method. This blog will show how to implement Blue-Green deployment using Traffic Manager, but before we dive deep, let us discuss what we mean by Blue-Green deployment.
Blue-Green deployment is a software rollout method that can reduce the impact of interruptions caused by issues in the new version being deployed. This is achieved by exposing the new version of the software to a limited set of users and expanding that user base gradually until everyone is using the new version. If at any point the new version causes issues, for example a broken authentication workflow in the new version of a web application, all users can be instantly* redirected
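The weight-based traffic split at the heart of this approach can be illustrated with a short simulation. This is only a sketch of the weighted round-robin idea, not Traffic Manager itself (which applies weights in its DNS responses); the endpoint names and weight values here are made up for illustration.

```python
import random

def pick_endpoint(weights, rng=random.Random(0)):
    """Choose an endpoint at random, proportionally to its weight."""
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names], k=1)[0]

# Start by sending roughly 10 percent of traffic to the new (green) version.
weights = {"blue": 900, "green": 100}

sample = [pick_endpoint(weights) for _ in range(10_000)]
green_share = sample.count("green") / len(sample)
print(f"green share of traffic: {green_share:.1%}")  # roughly 10%

# If green looks healthy, keep raising its weight until it takes all traffic;
# if a problem appears, set green's weight back to 0 to roll back instantly.
weights = {"blue": 0, "green": 1000}
```

Raising or lowering the green endpoint’s weight is a single configuration change, which is what makes the rollback path so fast.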
We heard from our customers that they wanted an even simpler onboarding experience to Azure Security Center. Today, we are excited to announce the general availability of Azure Security Center’s Cross Subscription Workspace Selection. This capability allows you to collect and monitor data in one location from virtual machines that run in different workspaces and subscriptions, and to run queries across them.
When first onboarding to Azure Security Center, you’ll need to start in the Data Collection tab to provision a monitoring agent onto your Azure virtual machines. The agent will allow you to monitor the security state of your hybrid cloud resources. As virtual machines are spun up and down, and workload owners across your organization create new workloads and resources, you need to make sure these are protected at the time they are created.
By default, we put the data that the monitoring agent collects in a Log Analytics workspace, but we give you the flexibility to use another workspace if you are using it already for other management functions.
Today, when you select a workspace for your data to reside in, you’ll see all the workspaces available across all your subscriptions. Cross Subscription Workspace Selection allows you to collect data from
We are seeing more developers building and running their applications in the public cloud. In fact, companies are using multiple public clouds to run their applications. Our customers tell us that they choose to build applications in Azure because it’s easy to get started and that they have peace of mind knowing the services that their applications rely on will be available, reliable, and secure. Today, we are going to discuss how Azure Security Center’s Just-in-Time VM Access can help you secure virtual machines that are running your applications and code.
Successful attacks on your virtual machines can create serious challenges for development. If a server is compromised, your source code could potentially be exposed, along with the proprietary algorithms or internal knowledge about the application. The pace of development can slow down because your team is focused on recovering from the attack instead of writing and reviewing code. Most importantly, an attack can affect your customers’ ability to access your applications, impacting your brand and your business. Just-in-Time VM Access can help you reduce your exposure to attacks by limiting the amount of time management ports are open on the virtual machines running your code.
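The core idea, a management port that is closed by default and only opens for an approved, time-boxed request, can be sketched in a few lines. This toy model is ours, not Security Center’s actual API; the class and field names are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class JitPolicy:
    """Toy model of a just-in-time rule: a port is closed unless an
    approved request opened it, and approvals expire automatically."""
    port: int
    max_duration: timedelta
    open_until: Optional[datetime] = None

    def request_access(self, now, duration):
        # Deny requests longer than the policy allows.
        if duration > self.max_duration:
            raise ValueError("requested duration exceeds policy maximum")
        self.open_until = now + duration

    def is_open(self, now):
        return self.open_until is not None and now < self.open_until

now = datetime(2018, 5, 1, tzinfo=timezone.utc)
ssh = JitPolicy(port=22, max_duration=timedelta(hours=3))

print(ssh.is_open(now))                          # False: closed by default
ssh.request_access(now, timedelta(hours=1))
print(ssh.is_open(now + timedelta(minutes=30)))  # True: inside the window
print(ssh.is_open(now + timedelta(hours=2)))     # False: window expired
```

The point of the pattern is that the default state is "closed": exposure exists only for the duration of an explicitly approved window.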
Just-in-Time VM Access
Last September, I had the privilege to publicly announce our Azure confidential computing efforts, where Microsoft Azure became the first cloud platform to enable new data security capabilities that protect customer data while in use. The Azure team has been working with Microsoft Research, Intel, Windows, and our Developer Tools group to bring Trusted Execution Environments (TEEs) such as Intel SGX and Virtualization Based Security (VBS, previously known as Virtual Secure Mode) to the cloud. TEEs protect data being processed from access outside the TEE. We’re ready to share more details about our confidential cloud vision and the work we’ve done since the announcement.
Many companies are moving their mission-critical workloads and data to the cloud, and the security benefits that public clouds provide are in many cases accelerating that adoption. In their 2017 CloudView study, International Data Corporation (IDC) found that ‘improving security’ was one of the top drivers for companies to move to the cloud. However, security concerns remain a commonly cited blocker for moving extremely sensitive IP and data scenarios to the cloud. Cloud Security Alliance (CSA) recently published the latest version of its Treacherous 12 Threats to Cloud Computing report. Not surprisingly,
Monitoring the health and performance of your Azure Kubernetes Service (AKS) cluster is important to ensure that your applications are up and running as expected. If you run applications on other Azure infrastructure, such as Virtual Machines, you have come to rely on Azure Monitor to provide near real-time, granular monitoring data. We are happy to announce that you can now rely on Azure Monitor to also track the health and performance of your AKS cluster. Let’s look at the new container health monitoring capability in Azure Monitor.
You can enable container monitoring from the Azure portal when you create an AKS cluster. You may notice the prompt for a Log Analytics workspace, and the reason for this will become clear throughout this post. For now, just know that you are providing a central location to store your container logs.
Now that you have gone through the wizard and set up an AKS cluster with container health, let’s go through an example to see how you would use it. Start by clicking on Health in AKS. Let’s say you believe there is a possible resource bottleneck somewhere in your Kubernetes cluster. Since you aren’t sure exactly what and where the issue
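The kind of check this view enables, scanning per-pod resource usage for outliers, can be sketched as follows. The pod names and usage numbers are hypothetical; in practice the container health blade and Log Analytics queries surface this data for you.

```python
# Hypothetical snapshot of per-pod CPU usage, expressed as a fraction of
# each pod's CPU limit (the container health view shows similar data).
pod_cpu = {
    "web-frontend-1": 0.42,
    "web-frontend-2": 0.95,
    "cart-service-1": 0.67,
    "cart-service-2": 0.98,
}

def find_bottlenecks(usage, threshold=0.9):
    """Return pods whose CPU usage meets or exceeds the threshold, worst first."""
    hot = {pod: u for pod, u in usage.items() if u >= threshold}
    return sorted(hot, key=hot.get, reverse=True)

print(find_bottlenecks(pod_cpu))  # ['cart-service-2', 'web-frontend-2']
```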
On February 8, 2017, we launched Managed Disks, Snapshots, and Images, which made it easy to provision and manage disks at scale on Azure. We’re now taking the next step and are excited to announce the Shared Image Gallery, which offers an easy but powerful set of tools to share VM images on Azure.
The Shared Image Gallery greatly simplifies image sharing at scale. It’s designed to make it easy for you to share your applications with others in your organization, within or across regions, enabling you to expedite regional expansion or DevOps processes, simplify your cross-region HA/DR setup, and more. The Shared Image Gallery is now available in the West Central US region of Azure and will soon expand to all Azure regions.
We will start sending invitations to join the limited public preview on May 21, 2018. If you’re interested in joining the limited public preview, please submit this form to express your interest.
How do I use Shared Image Gallery?
The Shared Image Gallery lets you choose which images you want to share, which regions you want to make them available in, and whom you want to share them with. You can create multiple galleries so that you can
This blog was co-authored by Anitha Adusumilli, Principal Program Manager, Azure Networking and Sumeet Mittal, Program Manager, Azure Networking.
Azure Cosmos DB is Microsoft’s globally distributed, multi-model database service for mission-critical applications. Azure Cosmos DB provides turnkey global distribution, elastic scaling of throughput and storage worldwide, single-digit millisecond latencies at the 99th percentile, five well-defined consistency models, and guaranteed high availability, all backed by industry-leading comprehensive SLAs. Azure Cosmos DB automatically indexes all your data without requiring you to deal with schema or index management. It is a multi-model service and supports document, key-value, graph, and column-family data models.
Improved security capabilities
We are excited to announce the general availability of Virtual Network Service Endpoints for Azure Cosmos DB. Azure Cosmos DB uses Virtual Network Service Endpoints to create network rules that allow traffic only from selected virtual networks and subnets. This feature is now available in all regions of the Azure public cloud.
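Conceptually, a service endpoint rule is an allow-list of subnets; the sketch below models that check with Python’s standard ipaddress module. The subnets shown are made up, and the real enforcement happens inside Azure’s network fabric, not in your application code.

```python
import ipaddress

# Hypothetical network rules: only these subnets may reach the database account.
allowed_subnets = [
    ipaddress.ip_network("10.0.1.0/24"),  # e.g. an app-tier subnet
    ipaddress.ip_network("10.0.2.0/24"),  # e.g. a data-tier subnet
]

def is_allowed(source_ip):
    """True if the source address falls inside an allowed subnet."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in allowed_subnets)

print(is_allowed("10.0.1.25"))    # True: inside the app-tier subnet
print(is_allowed("203.0.113.9"))  # False: outside every allowed subnet
```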
Customers can combine existing authorization mechanisms like Firewall Access Control Lists (ACLs) with the new network boundaries to provide enhanced security for their data. Azure Cosmos DB is the first Azure service to offer cross-region access control, where customers can restrict access to globally distributed
In December 2017, we announced the general availability of the Azure M-series virtual machines (VMs). These VMs run on the most powerful cloud hardware available across all public cloud providers, delivering configurations of up to 128 vCPUs and 4 TB of RAM in a single VM. Over the past few months, we have seen customers adopt M-series VMs for high-end database workloads based on SQL Server, Oracle, and other DBMS systems, and even move entire SAP landscapes into Azure.
Microsoft, as a customer of SAP, led the early adoption path by completing our own migration of Microsoft’s SAP landscape into Azure, including our 14 TB SAP ERP system, which runs Microsoft’s most critical business processes, with its SQL Server database hosted on an M-series M128s VM.
To accommodate even more demanding workloads, Azure has invested in accelerating database system performance with optimizations for critical write I/O, available exclusively on Azure M-series VMs. Azure Write Accelerator, a capability we recently released for M-series VMs, has been proven to accelerate critical transaction-log writes that require sub-millisecond latency.
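A simple way to see the effect of log-write latency is to benchmark synchronous appends on a disk before and after enabling Write Accelerator. The sketch below times fsync-backed appends, the access pattern of a database transaction log; the file path, record size, and write count are arbitrary choices for illustration.

```python
import os
import statistics
import tempfile
import time

def measure_log_write_latency(path, writes=200, record=b"x" * 4096):
    """Append fixed-size records with fsync and return per-write latencies in ms."""
    latencies = []
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
    try:
        for _ in range(writes):
            start = time.perf_counter()
            os.write(fd, record)
            os.fsync(fd)  # force the write to stable storage, like a DB log flush
            latencies.append((time.perf_counter() - start) * 1000.0)
    finally:
        os.close(fd)
    return latencies

with tempfile.NamedTemporaryFile() as f:
    lat = measure_log_write_latency(f.name)
    print(f"p50={statistics.median(lat):.3f} ms  max={max(lat):.3f} ms")
```

Running the same benchmark against a data disk with and without Write Accelerator enabled makes the latency difference directly visible.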
We have been working with SAP over the last few months to leverage and certify Azure M-series VMs for their SAP HANA
We are thrilled to announce the public preview of low-priority virtual machines (VMs) on VM scale sets. Low-priority VMs allow users to run their workloads at a fraction of the regular price, enabling significant cost savings. This offering has been available through our Azure Batch service since May 2017, and because we have seen great customer success we are expanding it to VM scale sets. This is a great option for resilient, fault-tolerant applications, as these VMs are allocated from our unutilized capacity and can, therefore, be evicted. Low-priority VMs are available through VM scale sets at up to an 80 percent discount.
What are low-priority VMs?
Low-priority VMs enable you to take advantage of our unutilized capacity. The amount of available unutilized capacity can vary based on size, region, time of day, and more. When deploying low-priority VMs in VM scale sets, Azure will allocate the VMs if there is capacity available, but there are no SLA guarantees. At any point in time when Azure needs the capacity back, we will evict low-priority VMs. Therefore, the low-priority offering is great for flexible workloads, like large processing jobs, dev/test environments, demos, and proofs of concept.
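Because a low-priority VM can be evicted at any time, workloads should checkpoint their progress so a replacement VM can pick up where the evicted one left off. One minimal pattern, with hypothetical job names and a JSON checkpoint file, looks like this:

```python
import json
import os
import tempfile

def process(items, checkpoint_path):
    """Process items one by one, checkpointing progress so an evicted
    low-priority VM can resume where it left off after reallocation."""
    done = []
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            done = json.load(f)  # recover progress from before the eviction
    for item in items:
        if item in done:
            continue  # already finished in an earlier run; skip it
        # ... the actual work on `item` would happen here ...
        done.append(item)
        with open(checkpoint_path, "w") as f:
            json.dump(done, f)  # persist progress after every item
    return done

with tempfile.TemporaryDirectory() as d:
    ckpt = os.path.join(d, "progress.json")
    jobs = ["shard-1", "shard-2", "shard-3"]
    # Simulate an eviction after two items, then a fresh VM resuming the work.
    print(process(jobs[:2], ckpt))
    print(process(jobs, ckpt))
```

The second call skips the first two shards and only processes the third, which is exactly the behavior you want when a replacement VM restarts the job.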
Provisioning low-priority VMs
Low-priority VMs can