Category Archives: Monitoring

January 15

Azure Monitor logs in Grafana – now in public preview

We’re happy to introduce the new Grafana integration with Microsoft Azure Monitor logs. This integration is achieved through the new Log Analytics plugin, now available as part of the Azure Monitor data source.

The new plugin continues our promise to make Azure's monitoring data available and easy to consume. Last year, in v1 of this data source, we exposed Azure Monitor metric data in Grafana. While you can natively consume all logs in Azure Monitor Log Analytics, our customers also asked us to make logs available in Grafana. We heard this request and partnered with Grafana to help you make greater use of OSS tools on Azure.

The new plugin allows you to display any data available in Log Analytics, such as logs related to virtual machine performance and security, Azure Active Directory logs (recently integrated with Log Analytics), and many other log types, including custom logs.

How can I use it?

The new plugin requires Grafana version 5.3 or newer. After the initial data source configuration, you can easily embed Azure Monitor logs in your dashboards and panels: simply select the Azure Log Analytics service and your workspace, then provide a query. You can reuse any existing queries
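For example, a panel charting average processor time per computer could use a query like the following. This is only a sketch, assuming your workspace collects the standard Windows performance counters into the Perf table:

```kusto
// Average CPU per computer in 5-minute buckets, suitable for a time series panel
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| summarize AvgCpu = avg(CounterValue) by bin(TimeGenerated, 5m), Computer
```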


January 10

Best practices for alerting on metrics with Azure Database for MariaDB monitoring

On December 4, 2018, Microsoft announced the general availability of Azure Database for MariaDB. This blog shares some guidance and best practices for alerting on the most commonly monitored metrics for MariaDB.

Whether you are a developer, a database analyst, a site reliability engineer, or a DevOps professional at your company, monitoring databases is an important part of maintaining the reliability, availability, and performance of your MariaDB server. There are various metrics available for you in Azure Database for MariaDB to get insights on the behavior of the server. You can also set alerts on these metrics using the Azure portal or Azure CLI.

With modern applications evolving from a traditional on-premises approach to becoming more hybrid or cloud native, there is also a need to adopt some best practices for a successful monitoring strategy on a hybrid/public cloud. Here are some example best practices on how you can use monitoring data on your MariaDB server and areas you can consider improving based on these various metrics.

Active connections

Sample threshold (percentage or value): 80 percent of total connection limit for greater than or equal to 30 minutes, checked every five minutes.
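The evaluation logic of such an alert rule can be sketched in Python as follows. The connection limit of 600 is a placeholder, not a real limit; actual limits depend on your pricing tier:

```python
# Sketch of the alert rule above: every 5 minutes, look back over a
# 30-minute window and fire when every sample in the window is at or
# above 80 percent of the connection limit.
CONNECTION_LIMIT = 600              # placeholder; varies by pricing tier
THRESHOLD = 0.80 * CONNECTION_LIMIT # 80 percent of the limit
WINDOW_SAMPLES = 6                  # 30 minutes / 5-minute check interval

def should_alert(samples):
    """samples: active-connection counts, one per 5 minutes, newest last."""
    window = samples[-WINDOW_SAMPLES:]
    # Fire only if we have a full window and every sample breaches the threshold.
    return len(window) == WINDOW_SAMPLES and all(s >= THRESHOLD for s in window)

print(should_alert([100, 490, 490, 500, 520, 530, 540]))  # True: last 6 samples >= 480
print(should_alert([500, 500, 100, 500, 500, 500, 500]))  # False: one dip below 480
```

The "for greater than or equal to 30 minutes" clause is what the full-window check models: a single spike does not fire the alert, only sustained pressure does.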

Things to check


December 20

Best practices for queries used in log alerts rules

Queries can start with a table name or with operators such as "search" or "union *". These operators are useful during data exploration and for searching terms over the entire data model, but they are not efficient for productization in alerts. Log alert rule queries in Log Analytics and Application Insights should always start with a specific table (or tables); this defines a clear scope for query execution, which improves both query performance and the relevance of the results. You can learn more by visiting our documentation, "Query best practices."

Note that using cross-resource queries in log alert rules is not considered inefficient even though the "union" operator is used. The "union" in cross-resource queries is scoped to specific resources and tables, as shown in this example, while the query scope for "union *" is the entire data model.

union app('Contoso-app1').requests, app('Contoso-app2').requests, workspace('Contoso-workspace1').Perf

After data exploration and query authoring, you may want to create a log alert using that query. These examples show how you can modify queries and avoid “search” and “union *” commands.

Example 1

You want to create log alert on the following query.

search ObjectName == 'Memory' and (CounterName == '% Committed Bytes In Use'
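A table-scoped version of that query starts from the table that actually holds the counters instead of searching the entire data model. As a sketch, assuming the standard Perf schema:

```kusto
// Scoped to the Perf table instead of an unscoped "search"
Perf
| where ObjectName == 'Memory' and CounterName == '% Committed Bytes In Use'
| summarize AggregatedValue = avg(CounterValue) by bin(TimeGenerated, 5m), Computer
```

The summarize step is illustrative; an alert rule would aggregate over whatever time grain and dimensions the alert condition needs.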


December 10

Deploying Apache Airflow in Azure to build and run data pipelines

Apache Airflow is an open source platform used to author, schedule, and monitor workflows. Airflow overcomes some of the limitations of the cron utility by providing an extensible framework that includes operators, a programmable interface to author jobs, a scalable distributed architecture, and rich tracking and monitoring capabilities. Since its addition to the Apache Software Foundation in 2015, Airflow has seen great adoption by the community for designing and orchestrating ETL pipelines and ML workflows. In Airflow, a workflow is defined as a Directed Acyclic Graph (DAG), which ensures that tasks are executed in order while managing the dependencies between them.

A simplified version of the Airflow architecture is shown below. It consists of a web server that provides the UI, a relational metadata store (a MySQL or PostgreSQL database), a persistent volume that stores the DAG files, a scheduler, and worker processes.

The above architecture can be implemented to run in four execution modes, including:

Sequential Executor – This mode is useful for dev/test or demo purposes. It serializes operations and allows only a single task to execute at a time.

Local Executor – This mode supports parallelization and is suitable for small to medium-sized workloads. It doesn't support
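The executor mode is selected in Airflow's airflow.cfg configuration file. As a sketch, running with the Local Executor backed by a PostgreSQL metadata store might look like this (the connection string is illustrative, not a real deployment):

```ini
[core]
# Choose the execution mode; LocalExecutor enables parallel task runs
# on a single machine and requires a real database backend (not SQLite).
executor = LocalExecutor
sql_alchemy_conn = postgresql+psycopg2://airflow:airflow@localhost:5432/airflow
```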


November 29

Time series analysis in Azure Data Explorer

Azure Data Explorer (ADX) is a lightning fast service optimized for data exploration. It supplies users with instant visibility into very large raw datasets in near real-time to analyze performance, identify trends and anomalies, and diagnose problems.

ADX performs ongoing collection of telemetry data from cloud services or IoT devices. This data can then be analyzed for various insights, such as monitoring service health, physical production processes, and usage trends. The analysis can be performed on sets of time series for selected metrics to find deviations in the pattern of the metrics relative to their typical baseline patterns.

ADX contains native support for the creation, manipulation, and analysis of time series. It enables you to create and analyze thousands of time series in seconds, powering near real-time monitoring solutions and workflows. In this blog post, we describe the basics of time series analysis in Azure Data Explorer.

Time series capabilities

The first step for time series analysis is to partition and transform the original telemetry table to a set of time series using the make-series operator. Using various functions, ADX then offers the following capabilities for time series analysis:

Filtering – Used for noise reduction,
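For illustration, the make-series partitioning step followed by a noise-reduction filter might look like the following. The table, column, and metric names here are placeholders, not a real schema:

```kusto
// Partition raw telemetry into hourly time series per computer,
// then smooth each series with a 5-tap moving-average FIR filter
demo_telemetry
| make-series avg_cpu = avg(CpuPercent) default = 0
    on Timestamp from ago(7d) to now() step 1h by Computer
| extend smoothed_cpu = series_fir(avg_cpu, repeat(1, 5), true, true)
```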


November 5

Best practices for alerting on metrics with Azure Database for PostgreSQL monitoring

Whether you are a developer, database administrator, site reliability engineer, or a DevOps professional, monitoring databases is an important part of maintaining the reliability, availability, and performance of your PostgreSQL server. There are various metrics available for you in Microsoft Azure Database for PostgreSQL to get insights on the behavior of the server. You can also set alerts on these metrics using the Azure portal or Azure CLI.

With modern applications evolving from a traditional on-premises approach to becoming more hybrid or cloud-native, there is also a need to adopt some best practices for a successful monitoring strategy on a hybrid and public cloud. Here are some example best practices for using monitoring data on your PostgreSQL server, and areas you can consider improving based on these various metrics.

Active connections

Sample threshold (percentage or value): 80 percent of total connection limit for greater than or equal to 30 minutes, checked every five minutes.

Things to check: If you notice that active connections are at 80 percent of the total limit for the past half hour, verify if this is expected based on the workload. If you think the load is expected, active connections limit can be increased by


November 5

Best practices for alerting on metrics with Azure Database for MySQL monitoring

Whether you are a developer, database administrator, site reliability engineer, or a DevOps professional, monitoring databases is an important part of maintaining the reliability, availability, and performance of your MySQL server. There are various metrics available for you in Microsoft Azure Database for MySQL to get insights on the behavior of the server. You can also set alerts on these metrics using the Azure portal or Azure CLI.

With modern applications evolving from a traditional on-premises approach to becoming more hybrid or cloud-native, there is also a need to adopt some best practices for a successful monitoring strategy on a hybrid and public cloud. Here are some example best practices on how you can use monitoring data on your MySQL server, and areas you can consider improving based on these various metrics.

Active connections

Sample threshold (percentage or value): 80 percent of total connection limit for greater than or equal to 30 minutes, checked every five minutes.

Things to check: If you notice that active connections are at 80 percent of the total limit for the past half hour, verify if this is expected based on the workload. If you think the load is expected, active connections limit can


October 22

Seven best practices for Continuous Monitoring with Azure Monitor

Whether you are a developer, site reliability engineer, IT Ops specialist, program manager, or a DevOps practitioner, monitoring is something you definitely care about! With modern applications evolving from an on-premises world to becoming more hybrid or microservices-based, there is also a need to evolve skill sets and adopt some best practices for a successful monitoring strategy on a hybrid/public cloud.

Azure Monitor is Microsoft's unified monitoring solution that provides full-stack observability across applications and infrastructure. Depending on the hat you are wearing at the moment, you can start with end-to-end visibility across the health of your resources, drill down to the most probable root cause of a problem (even to actual lines of code), fix the issue in your app or infrastructure, and re-deploy in a matter of minutes. If you have a robust monitoring pipeline set up, you should be able to find and fix issues well before they start impacting your customers.

Continuous Monitoring

Many of you already know how Continuous Integration and Continuous Deployment (CI/CD) as a DevOps concept can help you deliver software faster and more reliably to provide continuous value to your users. Continuous Monitoring (CM) is a new follow-up concept where you can


September 25

Azure Monitor alerting just got better!

In March 2018, we announced the next generation of alerts in Azure. Since then, we have received overwhelming feedback from you appreciating the new capabilities and providing asks for the next set of enhancements. Today, I am happy to announce exciting new developments in Azure Monitor alerts.

One Alerts experience

The unified alerts experience in Azure Monitor just got better! We are introducing the unified experience in all major services in Azure, complemented with the One Metrics and One Logs experiences, to provide you quick access to these capabilities.

As part of the alerts experience, we're introducing new ways to visualize and manage alerts: a bird's-eye view of all alerts across subscriptions by severity, a drill-down view into all the alerts, and a detailed view to examine each alert. This is complemented by Smart Groups (preview), which automatically consolidate multiple alerts into a single group using advanced AI techniques. Using these capabilities, you can troubleshoot issues in your environment quickly.

Expanded coverage

We are expanding alerting coverage to include more Azure services including web apps, functions and slots, custom metrics, and standard and webtest for Azure Monitor Application Insights.

The alerts experience now also


September 25

A new way to send custom metrics to Azure Monitor

In today's world of modern applications, metrics play a key role in helping you understand how your apps, services, and the infrastructure they run on are performing. They can help you detect, investigate, and diagnose issues when they crop up. To provide this level of visibility, Azure has made resource-level platform metrics available via Azure Monitor. However, many of you need to collect more metrics and unlock deeper insights about the resources and applications you are running in your hybrid environment.

To accomplish this, you have already been able to send custom metrics from your apps via the Application Insights SDKs. Today, we are happy to announce the public preview of custom metrics in Azure Monitor, enabling you to submit metrics, collected anywhere, for any Azure resource. These metrics can be additional performance indicators about your resources, like memory utilization, or business-related metrics emitted by your application, like page load times. As part of the unified metric experience in Azure Monitor, you can now:

Send custom metrics with up to 10 dimensions

Categorize and segregate metrics using namespaces

Leverage a unified set of metrics and alerts experiences via Azure Monitor

Plot custom metrics alongside your resources' platform metrics in
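As a sketch, a custom metric is submitted as a JSON document POSTed to the regional Azure Monitor ingestion endpoint for a resource. The payload below only illustrates the general shape: the metric name, namespace, and dimensions are placeholders, and Azure AD authentication and the HTTP call itself are omitted:

```python
import json
from datetime import datetime, timezone

# Illustrative custom-metric payload: one pre-aggregated data point with
# two dimensions (well under the 10-dimension limit). Names are placeholders.
payload = {
    "time": datetime.now(timezone.utc).isoformat(),
    "data": {
        "baseData": {
            "metric": "QueueDepth",
            "namespace": "contoso/transactions",
            "dimNames": ["Region", "Priority"],
            "series": [
                {
                    "dimValues": ["westus", "high"],
                    # Pre-aggregated values for this time bucket
                    "min": 3, "max": 20, "sum": 28, "count": 3,
                }
            ],
        }
    },
}

body = json.dumps(payload)
# This body would be POSTed to the regional endpoint, roughly:
#   https://<region>.monitoring.azure.com/<resourceId>/metrics
# with an Azure AD bearer token in the Authorization header.
```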
