Last month, we started sharing the DevOps journey at Microsoft through the stories of several teams and how they approach DevOps adoption. As the next story in this series, we want to share the transition one team made from a classic operations role to a Site Reliability Engineering (SRE) role: the story of the Xbox Reliability Engineering and Operations (xREO) team.
This transition was not easy and came out of necessity when Microsoft decided to bring Xbox games to gamers wherever they are through cloud game streaming (project xCloud). To deliver cutting-edge technology with a top-notch customer experience, the team had to redefine the way it worked: improving collaboration with the development team, investing in automation, and getting involved in the early stages of the application lifecycle. In this blog, we’ll review some of the key learnings the team collected along the way. To explore the full story of the team, see the journey of the xREO team.
Consistent gameplay requirements and the need to collaborate
A consistent experience is crucial to a successful game streaming session. A game streamed from the cloud has to feel to the gamer as if it were running on a nearby console.
Today, more and more organizations are focused on delivering new digital solutions to customers and finding that the need for increased agility, improved processes, and collaboration between development and operations teams is becoming business-critical. For over a decade, DevOps has been the answer to these challenges. Understanding the need for DevOps is one thing, but the actual adoption of DevOps in the real world is a whole other challenge. How can an organization with multiple teams and projects, with deeply rooted existing processes, and with considerable legacy software change its ways and embrace DevOps?
At Microsoft, we know something about these challenges. As a company that has been building software for decades, Microsoft consists of thousands of engineers around the world who deliver many different products. From Office to Azure to Xbox, we too found we needed to adapt to a new way of delivering software. The new era of the cloud unlocks tremendous potential for innovation to meet our customers’ growing demand for richer and better experiences, while our competition is not slowing down. The need to accelerate innovation and to transform how we work is real and urgent.
The road to transformation is not easy and we believe that
Access to Diagnostic Logs is essential for any healthcare service where being compliant with regulatory requirements (like HIPAA) is a must. The feature in Azure API for FHIR that makes this happen is Diagnostic settings in the Azure Portal UI. For details on how Azure Diagnostic Logs work, please refer to the Azure Diagnostic Log documentation.
At this time, the service emits the following fields in the Audit Log:
TimeGenerated (DateTime): Date and time of the event.
CorrelationId (String)
RequestUri (String): The request URI.
FhirResourceType (String): The resource type the operation was executed for.
StatusCode (Int): The HTTP status code (e.g., 200).
ResultType (String): The available values currently are ‘Started’, ‘Succeeded’, or ‘Failed’.
OperationDurationMs (Int): The milliseconds it took to complete the request.
LogCategory (String): The log category. We are currently emitting ‘AuditLogs’ for the value.
CallerIPAddress (String): The caller’s IP address.
CallerIdentityIssuer (String): Issuer.
CallerIdentityObjectId (String): Object_Id.
CallerIdentity (Dynamic): A generic property bag containing identity information.
Location (String): The location of the server that processed the request (e.g., South Central US).
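Once Diagnostic settings route these logs to a Log Analytics workspace, you can query them programmatically. The sketch below is one possible approach, assuming the azure-monitor-query and azure-identity Python packages and that the records land in the AzureDiagnostics table under the 'AuditLogs' category; table and column names may differ depending on your configuration.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Authenticate with Azure AD and create a Log Analytics query client.
client = LogsQueryClient(DefaultAzureCredential())

# Pull the most recent audit log entries from the last 24 hours.
# Table and category names are assumptions; adjust them to your workspace.
query = """
AzureDiagnostics
| where Category == "AuditLogs"
| order by TimeGenerated desc
| take 20
"""

response = client.query_workspace(
    workspace_id="<your-log-analytics-workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```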
Welcome back to another release of the unified Azure Data client libraries. For the most part, the API surface areas of the SDKs have been stabilized based on your feedback. Thank you to everyone who has been submitting issues on GitHub and keep the feedback coming.
Please grab the October preview libraries and try them out—throw demanding performance scenarios at them, integrate them with other services, try to debug an issue, or generally build your scenario and let us know what you find.
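By way of example, here is a minimal sketch that exercises two of the preview Python libraries, azure-identity and azure-keyvault-secrets (the vault URL and secret name below are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential picks up environment, managed identity, or CLI credentials.
credential = DefaultAzureCredential()

# Replace the placeholder with your own vault URL.
client = SecretClient(
    vault_url="https://<your-vault-name>.vault.azure.net",
    credential=credential,
)

# Set a secret and read it back to exercise the round trip.
client.set_secret("sdk-preview-demo", "hello from the October preview")
secret = client.get_secret("sdk-preview-demo")
print(secret.name, secret.value)
```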
Our goal is to release these libraries before the end of the year but we are driven by quality and feedback and your participation is key.
As we did for the last three releases, we have created four pages that unify all the key information you need to get started and give feedback. You can find them here:
For those of you who want to dive deep into the content, the release notes linked above and the changelogs they point to give more details on what has changed. Here we are calling out a few high-level items.
APIs locking down
The surface area for Azure Key Vault and Storage
With each passing year, more and more developers are building cloud-native applications. As developers build more complex applications they are looking to innovators like Microsoft Azure and HashiCorp to reduce the complexity of building and operating these applications. HashiCorp and Azure have worked together on a myriad of innovations. Examples of this innovation include tools that connect cloud-native applications to legacy infrastructure and tools that secure and automate the continuous deployment of customer applications and infrastructure. Azure is deeply committed to being the best platform for open source software developers like HashiCorp to deliver their tools to their customers in an easy-to-use, integrated way. Azure innovations like the managed applications platform that powers HashiCorp’s Consul Service on Azure are great examples of this commitment to collaboration and a vibrant open source startup ecosystem. We’re also committed to the development of open standards that help these ecosystems move forward and we’re thrilled to have been able to collaborate with HashiCorp on both the CNAB (Cloud Native Application Bundle) and SMI (Service Mesh Interface) specifications.
Last year at HashiConf 2018, I had the opportunity to share how we had started to integrate Terraform and Packer into the Azure platform. I’m incredibly
Today we are announcing a preview of a new feature of Azure Policy. The guest configuration capability, which audits settings inside Linux and Windows virtual machines (VMs), is now ready for customers to author and publish custom content.
The guest configuration platform has been generally available for built-in content provided by Microsoft. Customers are using this platform to audit common scenarios such as who has access to their servers, what applications are installed, if certificates are up to date, and whether servers can connect to network locations.
Starting today, customers can use new tooling published to the PowerShell Gallery to author, test, and publish their own content packages both from their developer workstation and from CI/CD platforms such as Azure DevOps.
For example, if you are running an application on an Azure virtual machine that was developed by your organization, you can audit the configuration of that application in Azure and be notified when one of the VMs in your fleet is not compliant.
This is also an important milestone for compliance teams who need to audit configuration baselines. There is already a built-in policy to audit Windows machines using Microsoft’s recommended security configuration baseline. Custom content expands the
Python support for Azure Functions is now generally available and ready to host your production workloads across data science and machine learning, automated resource management, and more. You can now develop Python 3.6 apps to run on the cross-platform, open-source Functions runtime. Read more: https://azure.microsoft.com/blog/announcing-the-general-availability-of-python-support-in-azure-functions/
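As a quick illustration, a minimal HTTP-triggered function in Python might look like the sketch below (the greeting logic is ours, and the accompanying function.json binding configuration is omitted):

```python
import logging

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    """Respond to an HTTP request with a simple greeting."""
    logging.info("Python HTTP trigger function processed a request.")

    # Read an optional 'name' query parameter, defaulting to 'world'.
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!", status_code=200)
```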
On Thursday, August 8, 2019, GitHub announced the preview of GitHub Actions with support for Continuous Integration and Continuous Delivery (CI/CD). Actions makes it possible to create simple, yet powerful pipelines and automate software compilation and delivery. Today, we are announcing the preview of Azure Actions for GitHub.
With these new Actions, developers can quickly build, test, and deploy code from GitHub repositories to the cloud with Azure.
You can find our first set of Actions grouped into four repositories on GitHub, each one containing documentation and examples to help you use GitHub for CI/CD and deploy your apps to Azure.
azure/actions (login): Authenticate with an Azure subscription.
azure/appservice-actions: Deploy apps to Azure App Service, using the Web Apps and Web Apps for Containers features.
azure/container-actions: Connect to container registries, including Docker Hub and Azure Container Registry, as well as build and push container images.
azure/k8s-actions: Connect and deploy to a Kubernetes cluster, including Azure Kubernetes Service (AKS).

Connect to Azure
The login action (azure/actions) allows you to securely connect to an Azure subscription.
MATCH_RECOGNIZE in Azure Stream Analytics significantly reduces the complexity and cost associated with building, modifying, and maintaining queries that match sequences of events for alerts or further data computation.
What is Azure Stream Analytics?
Azure Stream Analytics is a fully managed, serverless PaaS offering on Azure that enables customers to analyze and process fast-moving streams of data and deliver real-time insights for mission-critical scenarios. Developers can use a simple SQL language, extensible with custom code, to author and deploy powerful analytics processing logic that can scale up and scale out to deliver insights with millisecond latencies.
Traditional way to incorporate pattern matching in stream processing
Many customers use Azure Stream Analytics to continuously monitor massive amounts of data, detecting sequences of events and deriving alerts or aggregating data from those events. This, in essence, is pattern matching.
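As a rough illustration of the concept (this is plain Python, not Stream Analytics syntax), the sketch below scans an event stream for a simple sequence: a temperature spike followed by a shutdown on the same device. The event shape and threshold are invented for the example.

```python
from typing import Iterable, Optional

# Hypothetical event shape: (device_id, event_type, value)
Event = tuple[str, str, float]


def find_spike_then_shutdown(events: Iterable[Event],
                             threshold: float = 100.0) -> Optional[tuple[Event, Event]]:
    """Return the first (spike, shutdown) pair seen for the same device, if any."""
    last_spike: dict[str, Event] = {}
    for event in events:
        device, kind, value = event
        if kind == "temperature" and value > threshold:
            # Remember the most recent spike per device.
            last_spike[device] = event
        elif kind == "shutdown" and device in last_spike:
            # A shutdown following a spike on the same device completes the pattern.
            return last_spike[device], event
    return None


stream = [
    ("dev-1", "temperature", 72.0),
    ("dev-1", "temperature", 118.5),
    ("dev-1", "shutdown", 0.0),
]
print(find_spike_then_shutdown(stream))
```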
For pattern matching, customers traditionally relied on multiple joins, each one detecting a single specific event. These joins are combined to find a sequence of events, compute results, or create alerts. Developing pattern matching queries this way is a complex, error-prone process that is difficult to maintain and debug. Also, there are limitations when trying to express more complex
If you’re experiencing problems with your applications, a great place to start investigating solutions is through your Azure Service Health dashboard. In this blog post, we’ll explore the differences between the Azure status page and Azure Service Health. We’ll also show you how to get started with Service Health alerts so you can stay better informed about service issues and take action to improve your workloads’ availability.
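If you already know you want alerting, one programmatic option is an activity log alert scoped to Service Health events. The sketch below assumes the azure-mgmt-monitor and azure-identity Python packages; the subscription, resource group, and action group values are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<subscription-id>"  # placeholder
client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# Create (or update) an activity log alert that fires on Service Health events
# and notifies an existing action group (email, SMS, webhook, etc.).
client.activity_log_alerts.create_or_update(
    "<resource-group>",       # placeholder resource group
    "service-health-alert",   # alert rule name
    {
        "location": "Global",
        "enabled": True,
        "scopes": [f"/subscriptions/{subscription_id}"],
        "condition": {
            "all_of": [
                {"field": "category", "equals": "ServiceHealth"},
            ]
        },
        "actions": {
            "action_groups": [
                {"action_group_id": "<action-group-resource-id>"}  # placeholder
            ]
        },
        "description": "Notify when Azure Service Health posts an event for this subscription.",
    },
)
```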
How and when to use the Azure status page
The Azure status page works best for tracking major outages, especially if you’re unable to log into the Azure portal or access Azure Service Health. Many Azure users visit the status page regularly. It predates Azure Service Health and has a friendly format that shows the status of all Azure services and regions at a glance.
The Azure status page, however, doesn’t show all information about the health of your Azure services and regions. The status page isn’t personalized, so you need to know exactly which services and regions you’re using and locate them in the grid. The status page also doesn’t include information about non-outage events that could affect your availability. For example, planned maintenance events and health advisories (think service retirements