“When I first kicked off this Advancing Reliability blog series in my post last July, I highlighted several initiatives underway to keep improving platform availability, as part of our commitment to provide a trusted set of cloud services. One area I mentioned was fault injection, through which we’re increasingly validating that systems will perform as designed in the face of failures. Today I’ve asked our Principal Program Manager in this space, Chris Ashton, to shed some light on these broader ‘chaos engineering’ concepts, and to outline Azure examples of how we’re already applying these, together with stress testing and synthetic workloads, to improve application and service resilience.” – Mark Russinovich, CTO, Azure
Developing large-scale, distributed applications has never been easier, but there is a catch. Yes, infrastructure is provided in minutes thanks to your public cloud, there are many language options to choose from, swaths of open source code available to leverage, and abundant components and services in the marketplace to build upon. Yes, there are good reference guides that help give a leg up on your solution architecture and design, such as the Azure Well-Architected Framework and other resources in the Azure Architecture Center. But while application development
The pandemic continues to test principles, models, and strategies that organizations once thought to be bedrock truths of business. The COVID-19 crisis has challenged everything, from leadership principles, financial models, operations, and sales processes, to technology decisions and platform strategies. Organizations have been forced to adapt quickly to maintain efficient operations in these difficult times. Technology has remained the common driver throughout this period of worldwide adaptation to change.
The cloud has surged to the center of recent digital transformation efforts, enabling organizations to quickly create new solutions securely and reliably, meet new business challenges, and drive transformation with continuous technological innovation. In meeting the challenges posed by the global pandemic, the cloud is driving digital transformation faster than ever, with more organizations adopting cloud technologies.
Microsoft stands with our partners. We’re committed to your efforts, enabling customers to use the cloud successfully and harnessing the wave of innovation for organizations across the globe during this challenging time.
At Microsoft Inspire, we continue to invest in our customers’ success on Azure, focusing on these four priorities:
Generating confidence in their cloud journey, providing technical guidance and skills development resources. Focusing on processes and operations on their terms, at their pace through
Enterprises and teams are adopting DevOps technologies combined with people and processes to deliver high-quality code, with faster release cycles and continuous delivery of value, to achieve higher levels of satisfaction for their own customers.
However, it can often become difficult to craft CI/CD pipelines by editing multiple YAML files to stitch your code into cloud automation workflows. Teams end up spending considerable time and effort setting up and switching between discrete tools during their day-to-day development cycles.
In November, GitHub Actions for Azure became generally available, letting developers automate deployment of app code to Azure directly from their GitHub repositories. Building on this, at Microsoft Build 2020 we announced that GitHub Actions for Azure are now integrated into Visual Studio Code, Azure CLI, and the Azure Portal, simplifying the experience of deploying to Azure from your preferred entry points. Download the new Visual Studio Code extension or install the Azure Command-Line Interface (CLI) extension for GitHub Actions.
GitHub Actions for Azure can now deploy any enterprise application
GitHub Actions gives you the flexibility to build an automated software development lifecycle workflow. To help development teams easily create workflows to build, test, package, release, and deploy to Azure, more than 30 GitHub
Whether you’re a new student, thriving startup, or the largest enterprise, you have financial constraints and you need to know what you’re spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Azure Cost Management + Billing comes in.
We’re always looking for ways to learn more about your challenges and how Azure Cost Management + Billing can help you better understand where you’re accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:
Azure Spot Virtual Machines now generally available
Monitoring your reservation and Marketplace purchases with budgets
Automate cost savings with Azure Resource Graph
Azure Cost Management covered by FedRAMP High
Tell us about your reporting goals
New ways to save money with Azure
New videos and learning opportunities
Documentation updates
Let’s dig into the details.
Azure Spot Virtual Machines now generally available
We all want to save money. We often look at our largest workloads for savings opportunities, but make sure you don’t stop there. You may
At Microsoft Ignite 2019, we announced general availability of the new SAP HANA Large Instances powered by 2nd Generation Intel Xeon Scalable processors, formerly code-named Cascade Lake, supporting Intel® Optane™ persistent memory (PMem).
Microsoft’s largest SAP customers are continuing to consolidate their business functions and grow their footprint. S/4 HANA workloads demand increasingly larger nodes as they scale up. Scenarios for high availability/disaster recovery (HA/DR) and multi-tier data needs add to the complexity of operations.
In partnership with Intel and SAP, we have worked to develop the new HANA Large Instances with Intel Optane PMem offering higher memory density and in-memory data persistence capabilities. Coupled with 2nd Generation Intel Xeon Scalable processors, these instances provide higher performance and higher memory to processor ratio.
For SAP HANA solutions, these new offerings help lower total cost of ownership (TCO), simplify the complex architectures for HA/DR and multi-tier data, and offer 22 times faster reload times. The new HANA Large Instances extend the broad array of existing Large Instance offerings with purpose-built capabilities critical for running SAP HANA workloads.
The new S224 HANA Large Instances support 3 TB to 9 TB of memory with four socket
Organizations and teams that adopt DevOps methodologies are consistently seeing improvements in their ability to deliver high-quality code, with faster release cycles, and ultimately achieve higher levels of satisfaction for their own customers, whether internal or external. Continuous Integration and Continuous Delivery (CI/CD) is one of the pillars of DevOps, consisting of automatically building, testing, and deploying applications, but setting up a full CI/CD pipeline can be a complex task.
Today, we’re sharing the launch of the Deploy to Azure extension for Visual Studio Code. This new extension allows developers working in Visual Studio Code to seamlessly create, build, and deploy their apps in a continuous manner to the cloud, without leaving the editor.
Deploy to Azure extension
The Deploy to Azure extension works with both GitHub Actions and Azure Pipelines. It helps developers by auto-generating a CI/CD pipeline definition that takes care of building and deploying your app to the cloud with Azure. You can use the Deploy to Azure extension to deploy application code present on your local system, or in Azure Repos or GitHub. We plan to expand the scope to other Git repositories in the future.
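To give a sense of what such an auto-generated pipeline definition looks like, here is a minimal GitHub Actions workflow for deploying to an Azure Web App. This is an illustrative sketch only: the workflow the extension generates for your project may differ, and the app name and secret name below are placeholders.

```yaml
# Illustrative example of a GitHub Actions workflow deploying to Azure.
# 'my-web-app' and the AZURE_WEBAPP_PUBLISH_PROFILE secret are placeholders.
name: Build and deploy to Azure Web App
on:
  push:
    branches: [ master ]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      # Check out the repository so the job can access the app code
      - uses: actions/checkout@v1
      # Deploy to an Azure Web App using the publish profile stored as a secret
      - uses: azure/webapps-deploy@v1
        with:
          app-name: my-web-app
          publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
```

The publish profile can be downloaded from the Web App in the Azure Portal and stored as a repository secret, keeping credentials out of the workflow file itself.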
You can use this extension to set up CI/CD pipeline for
Burst encoding in the cloud with Azure and Media Excel HERO platform.
Content creation has never been as in demand as it is today. Both professional and user-generated content have increased exponentially over the past years. This puts a lot of stress on media encoding and transcoding platforms. Add the upcoming 4K and even 8K to the mix and you need a platform that can scale with these variables. Azure cloud compute offers a flexible way to grow with your needs. Microsoft offers various tools and products to fully support on-premises, hybrid, or native cloud workloads. Azure Stack offers support for hybrid scenarios for your computing needs, and Azure Arc helps you manage hybrid setups.
Finding a solution
Generally, 4K/UHD live encoding is done on dedicated hardware encoder units, which cannot be hosted in a public cloud like Azure. With such dedicated hardware units hosted on-premises needing to push 4K into an Azure datacenter, the immediate problem we face is the need for a high-bandwidth network connection between the on-premises encoder unit and the Azure datacenter. In general, it’s a best practice to ingest into multiple regions, increasing the load on the network connection between the
Last month, we started sharing the DevOps journey at Microsoft through the stories of several teams at Microsoft and how they approach DevOps adoption. As the next story in this series, we want to share the transition one team made from a classic operations role to a Site Reliability Engineering (SRE) role: the story of the Xbox Reliability Engineering and Operations (xREO) team.
This transition was not easy and came out of necessity when Microsoft decided to bring Xbox games to gamers wherever they are through cloud game streaming (project xCloud). In order to deliver cutting-edge technology with a top-notch customer experience, the team had to redefine the way it worked: improving collaboration with the development team, investing in automation, and getting involved in the early stages of the application lifecycle. In this blog, we’ll review some of the key learnings the team collected along the way. To explore the full story of the team, see the journey of the xREO team.
Consistent gameplay requirements and the need to collaborate
A consistent experience is crucial to a successful game streaming session. For gamers, a game streamed from the cloud has to feel like it is running on a nearby
Today, more and more organizations are focused on delivering new digital solutions to customers and finding that the need for increased agility, improved processes, and collaboration between development and operation teams is becoming business-critical. For over a decade, DevOps has been the answer to these challenges. Understanding the need for DevOps is one thing, but the actual adoption of DevOps in the real world is a whole other challenge. How can an organization with multiple teams and projects, with deeply rooted existing processes, and with considerable legacy software change its ways and embrace DevOps?
At Microsoft, we know something about these challenges. As a company that has been building software for decades, Microsoft consists of thousands of engineers around the world who deliver many different products. From Office to Azure to Xbox, we also found we needed to adapt to a new way of delivering software. The new era of the cloud unlocks tremendous potential for innovation to meet our customers’ growing demand for richer and better experiences, while our competition is not slowing down. The need to accelerate innovation and to transform how we work is real and urgent.
The road to transformation is not easy and we believe that
Access to Diagnostic Logs is essential for any healthcare service where being compliant with regulatory requirements (like HIPAA) is a must. The feature in Azure API for FHIR that makes this happen is Diagnostic settings in the Azure Portal UI. For details on how Azure Diagnostic Logs work, please refer to the Azure Diagnostic Log documentation.
At this time, the service emits the following fields in the audit log:
TimeGenerated (DateTime) - Date and time of the event.
CorrelationId (String)
RequestUri (String) - The request URI.
FhirResourceType (String) - The resource type the operation was executed for.
StatusCode (Int) - The HTTP status code (e.g., 200).
ResultType (String) - The available values currently are ‘Started’, ‘Succeeded’, or ‘Failed’.
OperationDurationMs (Int) - The milliseconds it took to complete the request.
LogCategory (String) - The log category. We are currently emitting ‘AuditLogs’ for the value.
CallerIPAddress (String) - The caller’s IP address.
CallerIdentityIssuer (String) - Issuer
CallerIdentityObjectId (String) - Object_Id
CallerIdentity (Dynamic) - A generic property bag containing identity information.
Location (String) - The location of the server that processed the request (e.g., South Central US).

How do
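Once these audit log entries are flowing to a destination such as a storage account, they can be post-processed programmatically. The sketch below is not an official SDK; it simply models a subset of the fields listed above and filters for failed requests, to illustrate how the `ResultType` and `StatusCode` fields can be used together in compliance reporting.

```python
# Minimal sketch (not an official SDK): model a subset of the audit log
# fields listed above and filter for failed requests.
from dataclasses import dataclass
from typing import List


@dataclass
class FhirAuditLogEntry:
    time_generated: str         # TimeGenerated (DateTime)
    correlation_id: str         # CorrelationId
    request_uri: str            # RequestUri
    fhir_resource_type: str     # FhirResourceType
    status_code: int            # StatusCode, e.g. 200
    result_type: str            # ResultType: 'Started', 'Succeeded', or 'Failed'
    operation_duration_ms: int  # OperationDurationMs
    caller_ip_address: str      # CallerIPAddress


def failed_requests(entries: List[FhirAuditLogEntry]) -> List[FhirAuditLogEntry]:
    """Entries that either reported ResultType 'Failed' or an HTTP error status."""
    return [e for e in entries if e.result_type == "Failed" or e.status_code >= 400]


entries = [
    FhirAuditLogEntry("2020-06-01T12:00:00Z", "abc", "/Patient/1",
                      "Patient", 200, "Succeeded", 42, "10.0.0.1"),
    FhirAuditLogEntry("2020-06-01T12:00:05Z", "def", "/Patient/2",
                      "Patient", 403, "Failed", 13, "10.0.0.2"),
]
print([e.correlation_id for e in failed_requests(entries)])  # ['def']
```

A query like this would more typically be expressed against the log destination itself (for example, a Log Analytics workspace), but the field semantics are the same.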