This blog continues our series of posts sharing best practices for improving performance and scale when using the Azure Database for PostgreSQL service. In this post, we focus on the benefits of connection pooling and share our recommendations for improving the connection resiliency, performance, and scalability of applications running on Azure Database for PostgreSQL. If you have not read the previous performance best practice blogs in the series, we highly recommend reading the following posts to learn and adopt the recommended best practices for the Azure Database for PostgreSQL service:
- Performance best practices for using Azure Database for PostgreSQL
- Performance updates and tuning best practices for using Azure Database for PostgreSQL
- Performance troubleshooting best practices using Azure Database for PostgreSQL features
In PostgreSQL, establishing a connection is an expensive operation: each new connection requires the server to fork an OS process and allocate memory for the connection. As a result, transactional applications that frequently open and close connections at the end of each transaction can experience higher connection latency, resulting in lower database throughput (transactions per second) and higher overall application latency.
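A connection pool avoids paying that fork-and-allocate cost on every transaction by keeping connections open and reusing them. The sketch below is a minimal, illustrative pool built on Python's standard library; in practice you would use a pooler such as PgBouncer or a driver-level pool (for example, psycopg2's built-in pool), and the `connect_fn` parameter here stands in for a real, expensive database connect call.

```python
import queue

class SimpleConnectionPool:
    """Minimal illustrative connection pool (a sketch, not production code).

    connect_fn stands in for an expensive connect call such as
    psycopg2.connect(...); the pool opens at most max_size connections
    and hands them out for reuse instead of closing them.
    """

    def __init__(self, connect_fn, max_size=5):
        self._connect_fn = connect_fn
        self._pool = queue.Queue(maxsize=max_size)
        self._created = 0
        self._max_size = max_size

    def acquire(self):
        try:
            # Reuse an idle connection if one is available.
            return self._pool.get_nowait()
        except queue.Empty:
            if self._created < self._max_size:
                # Pay the expensive connection cost only once per slot.
                self._created += 1
                return self._connect_fn()
            # Pool exhausted: block until another caller releases one.
            return self._pool.get()

    def release(self, conn):
        # Return the connection for reuse rather than closing it.
        self._pool.put(conn)
```

With a pool like this, a transaction acquires a connection, runs its statements, and releases the connection back to the pool, so the server-side process fork and memory allocation happen only once per pooled connection rather than once per transaction.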
With Azure DevOps Projects, we want to make it easy for you to set up a fully functional DevOps pipeline tailored to the development language and application platform you want to use.
We have been making continuous enhancements to Azure DevOps Projects, and in the latest deployment, now available to all customers, we have added support for Azure Cosmos DB and Azure Functions as target destinations for your application. This builds on the existing Azure App Service, Azure SQL Database, and Azure Kubernetes Service (AKS) support.
The support of Azure Cosmos DB in Azure DevOps Projects means that you can now create a skeleton two-tier Node.js application backed by Azure Cosmos DB in just a few clicks. Azure DevOps Projects creates all the scaffolding for your pipeline to give you everything you need to develop, deploy, and monitor your application, including:
- A Git code repository hosted in Azure Repos with a skeleton Node.js application
- A CI/CD pipeline in Azure Pipelines for deploying the database tier to Azure Cosmos DB and the web tier on Azure Web Apps for Containers, Azure Kubernetes Service, or as a Windows Web App in Azure
- Provisioning all the Azure resources in
As Julia White mentioned in her blog today, we’re pleased to announce the general availability of Azure Data Lake Storage Gen2 and Azure Data Explorer. We also announced the preview of Azure Data Factory Mapping Data Flow. With these updates, Azure continues to be the best cloud for analytics with unmatched price-performance and security. In this blog post we’ll take a closer look at the technical capabilities of these new features.
Azure Data Lake Storage – The no-compromise Data Lake
Azure Data Lake Storage (ADLS) combines the scalability, cost effectiveness, security model, and rich capabilities of Azure Blob Storage with a high-performance file system that is built for analytics and is compatible with the Hadoop Distributed File System. Customers no longer have to trade off cost effectiveness against performance when choosing a cloud data lake.
One of our key priorities was to ensure that ADLS is compatible with the Apache ecosystem. We accomplished this by developing the Azure Blob File System (ABFS) driver. The ABFS driver is officially part of Apache Hadoop and Spark and is incorporated in many commercial distributions. The ABFS driver defines a URI scheme that allows files and folders to be distinctly addressed in the
Expanding reserved instances discounts to classic virtual machines, Azure Cloud Services, and Dev/Test subscriptions
Today, we are excited to announce two new Azure Reserved VM Instances (RI) features to provide our customers with additional savings and purchase controls.
Since launch, we have continued to add multiple features such as instance size flexibility, RIs for US Government regions, purchase recommendations, and RIs in the Cloud Solution Provider (CSP) channel. We have also extended the capability to provide reservation discounts on SQL Databases and Cosmos DB.
Features that we are launching today:
1. Classic VMs and Cloud Services users can now benefit from the RI discounts
RIs with the instance size flexibility option enabled will now apply the discount to both classic VMs and cloud services. For cloud services, the reservation discount applies only to the compute cost. When the reservation discount is applied to cloud services, the usage charges will be split into a compute charge (Linux meter) and a cloud services management charge (cloud services management meter). Learn how the reservation discount applies to Cloud Services.
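As a rough, illustrative model of that meter split (the rates below are made up for the example and are not Azure pricing): the RI discount offsets only the compute meter, while the management meter continues to be billed at the pay-as-you-go rate.

```python
def cloud_service_hourly_cost(compute_rate, management_rate, ri_covered):
    """Illustrative split of a cloud services charge into two meters.

    compute_rate    - hypothetical hourly compute (Linux meter) rate
    management_rate - hypothetical hourly cloud services management meter rate
    ri_covered      - whether a reserved instance covers the compute usage
    """
    # The reservation discount zeroes out only the compute meter.
    compute_charge = 0.0 if ri_covered else compute_rate
    # The management meter is billed regardless of RI coverage.
    return compute_charge + management_rate

# Example with made-up rates: $0.10/hr compute, $0.02/hr management.
print(round(cloud_service_hourly_cost(0.10, 0.02, ri_covered=False), 2))  # 0.12
print(round(cloud_service_hourly_cost(0.10, 0.02, ri_covered=True), 2))   # 0.02
```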
Newly purchased RIs or existing RIs can now be applied to your
Data is essential to your business. The ability to unlock business insights more efficiently can be a key competitive advantage for the enterprise. As data grows in volume, variety, and velocity, organizations need to bring together a continuously increasing set of diverse datasets across silos in order to perform advanced analytics and uncover business opportunities. The first challenge in building such big data analytics solutions is how to connect to and extract data from a broad variety of data stores. Azure Data Factory (ADF) is a fully managed data integration service for analytic workloads in Azure that empowers you to copy data from more than 80 data sources with a simple drag-and-drop experience. With its flexible control flow, rich monitoring, and CI/CD capabilities, you can also operationalize and manage ETL/ELT flows to meet your SLAs.
Today, we are excited to announce the release of a set of new ADF connectors which enable more scenarios and possibilities for your analytic workloads. For example, you can now:
- Ingest data from Google Cloud Storage into Azure Data Lake Gen2, and process it using Azure Databricks jointly with data coming from other sources.
- Bring data from any S3-compatible data storage that you may consume from third party
Our goal on the Azure Stream Analytics team is to empower developers and make it incredibly easy to leverage the power of Azure to analyze big data in real time. We achieve this by continuously listening to feedback from our customers and shipping features that are delightful to use and help tackle complex analytics scenarios. We are excited to share the public preview of Azure SQL Database as a reference data input for Stream Analytics, the most requested feature on UserVoice!
Typical scenarios for reference data
Reference data is a static or slow-changing dataset that you can correlate with real-time data streams to augment them. Stream Analytics uses versioning of reference data so that streaming events are augmented with the reference data that was valid at the time each event was generated.
An example scenario is storing currency exchange rates in Azure SQL Database, regularly updated to reflect market trends, and converting a stream of billing events in different currencies to a standard currency.
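The versioning behavior described above can be sketched in plain Python (this is an illustration of the concept, not Stream Analytics code; the exchange-rate table, timestamps, and function names are made up): each event is joined against the reference-data version that was valid when the event was generated, not the newest one.

```python
import bisect

# Hypothetical versioned reference data: (valid_from_timestamp, usd_per_eur).
eur_usd_versions = [
    (0,   1.10),   # rate valid from t=0
    (100, 1.15),   # rate updated at t=100
    (200, 1.12),   # rate updated at t=200
]

def rate_at(versions, event_time):
    """Return the reference value that was valid when the event occurred."""
    times = [t for t, _ in versions]
    # Index of the newest version whose valid_from <= event_time.
    i = bisect.bisect_right(times, event_time) - 1
    if i < 0:
        raise ValueError("no reference data version valid at this time")
    return versions[i][1]

def to_usd(amount_eur, event_time):
    # Convert using the rate that was in effect at event time.
    return amount_eur * rate_at(eur_usd_versions, event_time)

# An event generated at t=150 uses the 1.15 rate, even if newer rates exist.
print(round(to_usd(100, event_time=150), 2))  # 115.0
```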
In IoT scenarios, you could have millions of IoT devices emitting a stream of events with critical values like temperature and pressure being
Today, Microsoft is announcing new intelligent cloud and intelligent edge capabilities for U.S. government customers. These new capabilities will help government customers address "the tactical edge": harsh environments and other situations where users depend on information systems and connectivity and have critical data availability, integrity, and transparency needs.
As U.S. government agencies support missions around the world, in remote locations, and beyond the reach of standard infrastructure, new technology is required for mission success. Microsoft offers a comprehensive portfolio designed to bring data analysis and insight to the tactical edge. Azure Stack and our Data Box family of products help government agencies with remote operations access the information they need to make decisions at the edge, along with access to the full range of cloud data analytics as connectivity allows.
Just last year we announced the integration of Azure Stack with Azure Government cloud, which unlocks a wide range of hybrid cloud use cases for government customers. By connecting the tactical edge to Azure Government, the mission-critical cloud for U.S. government customers and their partners, federal, state, and local government agencies can now operate with full regulatory compliance and the most up-to-date edge capabilities.
To that end, today,
Since our preview announcement, hundreds of customers have been enabling Query Store to provide insight on query performance. We’re excited to share the general availability of Query Store worldwide for Azure SQL Data Warehouse.
Query Store automatically captures a history of queries, plans, and runtime statistics and retains them for your review when monitoring your data warehouse. Query Store separates data by time windows so you can see database usage patterns and understand when plan changes happen.
Top three reasons to use Query Store right now
1. Find the full text of any query: Using the sys.query_store_query and sys.query_store_query_text catalog views, you can see the full text of queries executed against your data warehouse over the last 7 days.
SELECT q.query_id,
       t.query_sql_text
FROM sys.query_store_query q
JOIN sys.query_store_query_text t
    ON q.query_text_id = t.query_text_id;
2. Find your top executing queries: Query Store tracks all query executions for your review. On a busy data warehouse, you may have thousands or millions of queries executed daily. Using the Query Store catalog views, you can get the top executing queries for further analysis:
SELECT TOP 10
       q.query_id [query_id],
       t.query_sql_text [command],
       SUM(rs.count_executions) [execution_count]
FROM sys.query_store_query q
JOIN sys.query_store_query_text t
    ON q.query_text_id = t.query_text_id
JOIN sys.query_store_plan p
    ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats rs
    ON p.plan_id = rs.plan_id
GROUP BY q.query_id, t.query_sql_text
ORDER BY SUM(rs.count_executions) DESC;
This blog is co-authored by Parikshit Savjani, Senior Program Manager and Rachel Agyemang, Program Manager, Microsoft Azure.
We are excited to announce Public Preview availability of Read Replicas for Azure Database for PostgreSQL.
Azure Database for PostgreSQL now supports continuous, asynchronous replication of data from one Azure Database for PostgreSQL server (the “master”) to up to five Azure Database for PostgreSQL servers (the “read replicas”) in the same region. This allows read-heavy workloads to scale out horizontally and be balanced across replica servers according to users’ preferences. Replica servers are read-only except for writes replicated from data changes on the master. Stopping replication to a replica server causes it to become a standalone server that accepts reads and writes.
Key features associated with this functionality are:
- Turn-key addition and deletion of replicas.
- Support for up to five read replicas in the same region.
- The ability to stop replication to any replica to make it a standalone, read-write server.
- The ability to monitor replication performance using two metrics, Replica Lag and Max Lag Across Replicas.
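A common way to use read replicas is to send all writes to the master and spread reads across the replicas. A minimal round-robin router might look like the sketch below; the server names are hypothetical, and a real implementation would use a PostgreSQL driver and account for connection failures and replica lag.

```python
import itertools

class ReadWriteRouter:
    """Illustrative read/write routing across a master and read replicas."""

    def __init__(self, master, replicas):
        self._master = master
        # Cycle through replicas so reads are spread round-robin.
        self._replicas = itertools.cycle(replicas)

    def host_for(self, is_write):
        # Writes must go to the master; replicas are read-only.
        return self._master if is_write else next(self._replicas)

# Hypothetical server names for an Azure Database for PostgreSQL deployment.
router = ReadWriteRouter(
    master="mypg.postgres.database.azure.com",
    replicas=[
        "mypg-replica1.postgres.database.azure.com",
        "mypg-replica2.postgres.database.azure.com",
    ],
)

print(router.host_for(is_write=True))   # the master server
print(router.host_for(is_write=False))  # first replica
print(router.host_for(is_write=False))  # second replica
```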
For more information and instructions on how to create and manage read replicas, see the following articles:
Exploring data is like solving a puzzle: you create queries and get instant satisfaction when you discover insights, just as when you fit pieces into a puzzle. Now imagine you have to repeat the same analysis multiple times, use libraries from an open-source community, share your steps and output with others, and save your work as an artifact. Notebooks give you one place to write your queries, add documentation, and save your output in a reusable format.
Jupyter Notebook allows you to create and share documents that contain live code, equations, visualizations, and explanatory text. Uses include data cleaning and transformation, numerical simulation, statistical modeling, and machine learning.
We are excited to announce KQL magic commands, which extend the functionality of the Python kernel in Jupyter Notebook. KQL magic allows you to write KQL queries natively and query data from Microsoft Azure Data Explorer. You can easily interchange between Python and KQL, and visualize data using the rich Plot.ly library integrated with the KQL render commands. KQL magic supports Azure Data Explorer, Application Insights, and Log Analytics as data sources to run queries against.
Use a single magic “%kql” to run a single line query, or use cell magic “%%kql” to