In October 2018 we announced the public preview of Azure Monitor for Virtual Machines (VMs). At that time, we included support for monitoring your virtual machine scale sets from the at-scale view under Azure Monitor.
Today we are announcing the public preview of monitoring your Windows and Linux VM scale sets from within the scale set resource blade. This update includes several enhancements:
- In-blade monitoring for your scale set, with "Top N", aggregate, and list views across the entire scale set.
- A drill-down experience to identify issues on a particular scale set instance.
- An updated map UI that displays the entire dependency diagram across your scale set, while supporting drill-down maps for a single instance.
- UI-based enablement of monitoring from the scale set resource blade.
- Updated examples for enabling monitoring using Azure Resource Manager templates.
- Use of policy to enable monitoring for your scale set.

Performance
The performance views are powered by Log Analytics queries, offering "Top N", aggregate, and list views to quickly find outliers or issues in your scale set based on guest-level metrics for CPU, available memory, bytes sent and received, and logical disk space used.
These views will help you quickly determine if a
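Conceptually, a "Top N" view is a simple ranking over per-instance metric samples. The following Python sketch illustrates the idea; the instance names and numbers are invented, and the actual views are computed by Log Analytics queries rather than in-process code:

```python
# Illustrative "Top N" aggregation over guest-level metric samples,
# similar in spirit to what the Log Analytics-backed views compute.
# All sample data below is invented for demonstration.

def top_n(samples, metric, n=3, reverse=True):
    """Return the n instances with the highest (or lowest) value of a metric."""
    ranked = sorted(samples, key=lambda s: s[metric], reverse=reverse)
    return [(s["instance"], s[metric]) for s in ranked[:n]]

samples = [
    {"instance": "vmss_0", "cpu_pct": 91.0, "avail_mem_mb": 512},
    {"instance": "vmss_1", "cpu_pct": 23.5, "avail_mem_mb": 4096},
    {"instance": "vmss_2", "cpu_pct": 67.2, "avail_mem_mb": 1024},
    {"instance": "vmss_3", "cpu_pct": 12.1, "avail_mem_mb": 8192},
]

# Top 2 CPU consumers — candidates for drill-down.
print(top_n(samples, "cpu_pct", n=2))
# Instances with the least available memory.
print(top_n(samples, "avail_mem_mb", n=2, reverse=False))
```

The same ranking idea applies to any of the guest metrics mentioned above; only the metric key changes.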
Without the right tools and approach, cloud optimization can be a time-consuming and difficult process. There is an ever-growing list of best practices to follow, and it's constantly in flux as your cloud workloads evolve. Add the challenges and emergencies you face on a day-to-day basis, and it's easy to understand why it's hard to be proactive about ensuring your cloud resources are running optimally.
Azure offers many ways to help ensure that you’re running your workloads optimally and getting the most out of your investment.
Three kinds of optimization: organizational, architectural, and tactical
One way to think about these is the altitude of advice and optimization offered: organizational, architectural, or tactical.
At the tactical or resource level, you have Azure Advisor, a free Azure service that helps you optimize your Azure resources for high availability, security, performance, and cost. Advisor scans your resource usage and configuration and provides over 100 personalized recommendations. Each recommendation includes inline actions to make remediating your cloud resource optimizations fast and easy.
At the other end of the spectrum is Azure Architecture Center, a collection of free guides created by Azure experts to help you understand organizational and architectural best practices and
Whether you're a new student, a thriving startup, or the largest enterprise, you have financial constraints, and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Azure Cost Management comes in.
We’re always looking for ways to learn more about your challenges and how Cost Management can help you better understand how and where you’re accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:
- Expanded general availability (GA): Pay-as-you-go and Azure Government
- New preview: Manage AWS and Azure costs together in the Azure portal
- New getting started videos
- Monitor costs based on your pay-as-you-go billing period
- More comprehensive scheduled exports
- Extended date picker
- Share links to customized views
- Documentation updates
Let’s dig into the details…
Expanded general availability (GA): Pay-as-you-go and Azure Government
Azure Cost Management is now generally available for the following account types:
- Enterprise Agreements (EA)
- Microsoft Customer Agreements (MCA)
- Pay-as-you-go (PAYG) and dev/test subscriptions
We're excited to announce the public preview of Azure App Configuration, a new service aimed at simplifying the management of application configuration and feature flighting for developers and IT. App Configuration provides a centralized place in Microsoft Azure for users to store all their application settings and feature flags (a.k.a. feature toggles), control access to them, and deliver the configuration data where it is needed.
Eliminate hard-to-troubleshoot errors across distributed applications
Companies across industries are transforming into digital organizations in order to better serve their customers, foster tighter relationships, and respond to competition faster. We have witnessed rapid growth in the number of applications our customers run. Modern applications, particularly those running in the cloud, are typically made up of multiple components and are distributed in nature. Spreading configuration data across these components often leads to hard-to-troubleshoot errors in production. When a company has a large portfolio of applications, these problems multiply quickly.
With App Configuration, you can keep your application settings together so that:
- You have a single, consolidated view of all configuration data.
- You can easily make changes to settings, compare values, and perform rollbacks.
- You have numerous options to deliver these settings to your application, including injecting
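To make the ideas of a single consolidated store, feature flags, and rollback concrete, here is a minimal in-memory sketch in Python. The class and key names are invented for illustration; this is not the App Configuration SDK or its API:

```python
# Minimal in-memory sketch of a centralized settings store with feature
# flags and rollback. Illustrative only — a real application would use
# the Azure App Configuration service rather than this toy class.

class ConfigStore:
    def __init__(self):
        self._settings = {}
        self._history = []  # snapshots taken before each change, for rollback

    def set(self, key, value):
        self._history.append(dict(self._settings))
        self._settings[key] = value

    def get(self, key, default=None):
        return self._settings.get(key, default)

    def is_enabled(self, flag):
        # Feature flags are just settings under a conventional prefix.
        return bool(self.get(f"feature/{flag}", False))

    def rollback(self):
        # Restore the most recent snapshot, if any.
        if self._history:
            self._settings = self._history.pop()

store = ConfigStore()
store.set("app/greeting", "hello")
store.set("feature/beta-ui", True)
print(store.is_enabled("beta-ui"))  # the flag is on
store.rollback()
print(store.is_enabled("beta-ui"))  # rolled back to off
```

Because every component reads from the same store, toggling a flag or rolling back a bad value takes effect everywhere at once, which is the core problem a centralized configuration service solves.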
Innovation at scale is a common challenge facing large organizations. A key contributor to the challenge is the complexity in coordinating the sheer number of apps and environments.
Integration tools, such as Azure Logic Apps, give you the flexibility to scale and innovate as fast as you want, on-premises or in the cloud. This is a key capability to have in place when migrating to the cloud, or even if you're cloud native. Integration has often been treated as something to do after the fact; in the modern enterprise, however, application integration has to be done in conjunction with application development and innovation.
An integration service environment is the ideal solution for organizations that are concerned about noisy-neighbor issues or data isolation, or that need more flexibility and configurability than the core Logic Apps service offers.
Building upon the existing set of capabilities, we are releasing a number of new, exciting changes that make integration service environments even better, such as:
- Faster deployments: provisioning time has been cut in half
- Higher throughput limits for an individual Logic App and connectors
- An individual Logic App can now run for up to a year (365 days)
DevOps is the union of people, processes, and products to enable the continuous delivery of value to end users. DevOps for machine learning brings that lifecycle management discipline to machine learning: teams can manage, monitor, and version models while simplifying workflows and the collaboration process.

Effectively managing the machine learning lifecycle is critical to DevOps success. And the first piece of machine learning lifecycle management is building your machine learning pipeline(s).
What is a Machine Learning Pipeline?
DevOps for Machine Learning includes data preparation, experimentation, model training, model management, deployment, and monitoring while also enhancing governance, repeatability, and collaboration throughout the model development process. Pipelines allow for the modularization of phases into discrete steps and provide a mechanism for automating, sharing, and reproducing models and ML assets. They create and manage workflows that stitch together machine learning phases. Essentially, pipelines allow you to optimize your workflow with simplicity, speed, portability, and reusability.
There are four steps involved in deploying machine learning that data scientists, engineers and IT experts collaborate on:
- Data Ingestion and Preparation
- Model Training and Retraining
- Model Evaluation
- Deployment
Together, these steps make up the Machine Learning pipeline. Below is
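The four steps can be sketched as discrete, composable functions, which is exactly the modularization a pipeline provides. The toy data and "model" below are invented for illustration; a real pipeline would use an ML framework and a pipeline orchestrator:

```python
# Generic sketch of a four-step ML pipeline: ingest/prepare, train,
# evaluate, deploy. Each phase is a discrete, reusable step that could
# be automated, shared, and rerun independently.

def ingest_and_prepare():
    # Toy dataset of (feature, label) pairs; split 75/25 into train/test.
    data = [(1, 2), (2, 4), (3, 6), (4, 8)]
    split = len(data) * 3 // 4
    return data[:split], data[split:]

def train(train_data):
    # Toy "model": the mean label/feature ratio.
    ratios = [y / x for x, y in train_data]
    return sum(ratios) / len(ratios)

def evaluate(model, test_data):
    # Mean absolute error of the model's predictions on held-out data.
    errors = [abs(model * x - y) for x, y in test_data]
    return sum(errors) / len(errors)

def deploy(model, error, max_error):
    # Gate deployment on the evaluation result.
    return model if error <= max_error else None

def run_pipeline():
    train_data, test_data = ingest_and_prepare()
    model = train(train_data)
    error = evaluate(model, test_data)
    return deploy(model, error, max_error=0.1)

print(run_pipeline())
```

Because each step has a clear input and output, any step can be swapped, cached, or rerun (for example, retraining on new data) without touching the others — the reusability and repeatability the text describes.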