Many organizations seek to do more with their data than pump out dashboards and reports. Applying advanced analytical approaches such as machine learning is an essential arena of knowledge for any data professional. While database administrators (DBAs) don’t necessarily have to become data scientists, they should have a deep understanding of the machine learning technologies at their disposal and how to use them in collaboration with other domain experts.
For those of us who work with SQL Server, there are many cool new capabilities to get familiar with in SQL Server 2019. At the heart of it all is a solution called Big Data Clusters, allowing you to create scalable clusters of SQL Server, Apache Spark, and HDFS containers running on Kubernetes.
That means flexibility in how you access your big data and relational data side by side. Through the cluster, you can query data from external sources. You can also store big data in HDFS managed by SQL Server. At the end of the day, this makes more of your data available, faster and more easily, for machine learning, artificial intelligence, and other advanced analytical tasks.
SQL Server 2019 also provides expanded machine learning capabilities built in. It adds commonly requested features related to
Organizations that embraced the option to run Microsoft SQL Server 2017 on Linux have been looking forward to the release of SQL Server 2019. Regardless of which operating system (OS) you choose, it’s the same SQL Server database code, and the Linux release now includes even more of the features and services found in the Windows release. This introductory blog post about running Microsoft SQL Server 2019 on Linux provides the basic information database professionals need to know before upgrading or migrating SQL Server onto Linux.
Supported Linux platforms
Microsoft SQL Server 2019 is tested and supported to run on several Linux distribution platforms:
- Red Hat Enterprise Linux (RHEL)
- SUSE Linux Enterprise Server (SLES)
- Ubuntu
Along with the above versions of Linux distributions, SQL Server 2019 is supported in a container scenario using a Docker image. Running a SQL Server database inside a Docker engine with Linux offers more flexibility, faster recovery, and quicker deployments, including deployments into the Azure cloud. For those becoming familiar with Linux, Docker for Windows or Mac gives you the option to run a Docker engine on your workstation with SQL Server 2019 on Linux.
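As a minimal sketch of that container scenario, pulling and starting the official SQL Server 2019 Linux image looks roughly like this (the SA password below is a placeholder you must replace with a strong one of your own):

```shell
# Pull the SQL Server 2019 Linux image and start a container listening on 1433.
# ACCEPT_EULA and SA_PASSWORD are required environment variables.
docker pull mcr.microsoft.com/mssql/server:2019-latest
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=YourStrong!Passw0rd" \
   -p 1433:1433 --name sql2019 -d mcr.microsoft.com/mssql/server:2019-latest
docker ps --filter name=sql2019            # confirm the container is running
```

From there, you can connect with any SQL Server client on port 1433, just as you would with a conventional installation.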
Along with Docker technology comes orchestration: you can deploy and manage SQL Server containers on Linux using Red Hat OpenShift.
The days when a database administrator (DBA) could specialize solely in a single database technology are rapidly ending. Today, we’re much more likely than ever before to be asked to bring together many types of data from diverse sources. Although specialization still has its place, having the knowledge and tools at our disposal to cross those boundaries makes us much more useful.
That’s one reason to get excited about the continued expansion of the PolyBase technology introduced in SQL Server 2016, which has become much more powerful in the release of SQL Server 2019.
Before PolyBase, when trying to use external data sources from SQL Server, you either had to transfer data from one source to another or query both sources and then write custom logic to join and integrate the data at the client level. PolyBase simplifies the process of reading from external data sources. It does so by enabling your SQL Server instance to process Transact-SQL (T-SQL) queries that access both external data and relational data inside the instance.
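To illustrate, once an external table has been defined, a single T-SQL query can join it with a local table; the database, table, and column names below are hypothetical, and the query is submitted through sqlcmd on Linux:

```shell
# Sketch: join an external (PolyBase) table with a local relational table.
# All object names and credentials here are placeholders for illustration.
sqlcmd -S localhost -U sa -P "$SA_PASSWORD" -d SalesDB -Q "
SELECT c.CustomerName, SUM(o.Amount) AS Total
FROM dbo.Customers AS c          -- local relational table
JOIN dbo.OrdersExternal AS o     -- external table defined via PolyBase
  ON o.CustomerId = c.CustomerId
GROUP BY c.CustomerName;"
```

The point is that the join happens inside the SQL Server instance, with no client-side integration code.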
Initially, PolyBase targeted Apache Hadoop and Azure Blob Storage. The ability to target big data inside Hadoop nodes expanded the ability to do modern analytics seamlessly from a SQL Server platform. No
With SQL Server 2017, Microsoft entered the world of multi-OS platform support for SQL Server. For many technical professionals, the ability to run SQL Server on the same open source operating system as the rest of the application stack is not just a goal, but a dream that Microsoft made come true. With the release of SQL Server 2019, the inclusion of Linux now includes new features, support, and capabilities.
As a long-time Linux database administrator (DBA), in this post I’ll share my top five focus areas for Microsoft data professionals to become knowledgeable about as they embark on the brave new world of Linux.
1. Embrace the command line
Yes, there is a graphical user interface (GUI) for Linux, but the command line rules in Linux. I can’t stress enough how important it is to learn how to navigate directories (cd), change permissions (chmod), and list contents (ls). Your best friend will be the --help (or -h) flag, which prints usage information for whatever command you’re attempting.
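A few of those essentials in action; a throwaway directory under /tmp keeps the experiment safe:

```shell
# Practice the command-line basics in a scratch directory.
mkdir -p /tmp/linux-dba-demo && cd /tmp/linux-dba-demo
touch backup.sh                  # create an empty script file
chmod u+x backup.sh              # change permissions: owner may execute
ls -l backup.sh                  # list contents; note the -rwx bits for the owner
cd /tmp                          # navigate back up the directory tree
```

Muscle memory with these few commands goes a long way before you ever touch a SQL Server installation on Linux.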
It will also be essential to know how to install and update your server and applications (apt-get, yum, and zypper), as it may be your responsibility not only to perform this task for the
With the release of SQL Server 2019 on Linux, Microsoft introduced persistent memory (PMEM) support on Linux. This is an exciting development, as previous versions of SQL Server on Linux didn’t support PMEM. Let’s look at how to configure the PMEM for SQL Server on Linux.
SQL Server 2016 introduced support for non-volatile DIMMs and an optimization called Tail of the Log Caching on NVDIMM. These leveraged Windows Server direct access to a persistent memory device in DAX mode to reduce the number of operations needed to harden a log buffer to persistent storage.
SQL Server 2019 extends the support for PMEM devices to Linux, providing full enlightenment of data and transaction logs placed on PMEM. Enlightenment is a way to access the storage device using efficient user-space memcpy() operations. Rather than going through the file system and storage stack, SQL Server leverages DAX support on Linux to place data directly into the device. This helps to reduce latency.
Enable enlightenment of database files
The first step to enabling enlightenment of database files in SQL Server on Linux is to configure the devices. In Linux, use the ndctl utility to configure the PMEM device and create a namespace.
ndctl create-namespace -f -e namespace0.0 --mode=fsdax --map=mem
You can verify
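A rough verification-and-mount flow might look like the following; the device name /dev/pmem0 and the mount point are assumptions for this sketch (check the ndctl output on your own hardware), and these steps require root:

```shell
# Verify the namespace, then expose the device to SQL Server via a
# DAX-mounted file system. Device and mount paths are placeholders.
ndctl list -N                              # list namespaces; confirm mode is fsdax
mkfs.xfs -f /dev/pmem0                     # create an XFS file system on the device
mkdir -p /var/opt/mssql/dax
mount -o dax /dev/pmem0 /var/opt/mssql/dax # mount with direct-access semantics
mount | grep dax                           # confirm the dax option took effect
```

Database and transaction log files placed under that mount point become candidates for the enlightened access path described above.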
SQL Server 2019 Big Data Clusters is a scale-out, data virtualization platform built on top of the Kubernetes container platform. This ensures a predictable, fast, and elastically scalable deployment, regardless of where it’s deployed. In this blog post, we’ll explain how to deploy SQL Server 2019 Big Data Clusters to Kubernetes.
First, the tools
Deploying Big Data Clusters to Kubernetes requires a specific set of client tools. Before you get started, please install the following:
- azdata: Deploys and manages Big Data Clusters.
- kubectl: Creates and manages the underlying Kubernetes cluster.
- Azure Data Studio: Graphical interface for using Big Data Clusters.
- SQL Server 2019 extension: Azure Data Studio extension that enables the Big Data Clusters features.

Choose your Kubernetes
Big Data Clusters is deployed as a series of interrelated containers that are managed in Kubernetes. You have several options for hosting Kubernetes, depending on your use case, including:
- Azure Kubernetes Service (AKS): You can use the Azure portal to deploy Azure Kubernetes Service, which gives you a managed Kubernetes cluster in Azure; all you manage and maintain are the agent nodes. You don’t even have to provision your own hardware.
- Multiple Linux machines: Kubernetes can also be deployed to multiple Linux machines, physical or virtual. This is a great option
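With the tools installed and a Kubernetes cluster reachable, a minimal deployment can be sketched with azdata. The aks-dev-test profile and the mssql-cluster namespace are common defaults rather than requirements, and the password below is a placeholder:

```shell
# Sketch: deploy Big Data Clusters using a built-in configuration profile.
# AZDATA_USERNAME / AZDATA_PASSWORD seed the controller admin account.
export AZDATA_USERNAME=admin
export AZDATA_PASSWORD='Choose-A-Strong-Passw0rd'     # placeholder

azdata bdc config list                                # see available profiles
azdata bdc create --config-profile aks-dev-test --accept-eula yes

kubectl get pods -n mssql-cluster                     # watch the pods come up
azdata bdc endpoint list -o table                     # endpoints once deployed
```

The create step can take a while, since it pulls and starts every container in the cluster.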
Last September at Microsoft Ignite 2018, Rohan Kumar, Corporate Vice President of Azure Data, announced the general availability of Azure Data Studio. This month, a little over a year later, Azure Data Studio took PASS Summit 2019 by storm. From keynote demos to community sessions to customer conversations at the Microsoft booth, we experienced overwhelming support for the immense growth of Azure Data Studio since its general availability announcement.
The conference kicked off with an exciting keynote by Rohan, who announced several general availability and preview milestones for Azure Data Studio and SQL Server, including the general availability of SQL Server 2019, the preview of Azure SQL Database Edge, and the preview of Azure Arc. Rohan’s keynote was enriched by demos from team members, with several using Azure Data Studio to demonstrate the power of the newly announced capabilities of SQL Server. My favorite example is the demo by Asad Khan, Partner Director of Program Management for SQL Server and Azure SQL, of a connected factory powered by SQL. In his demo, he highlighted Notebooks in Azure Data Studio while showing how devices collecting data at the edge can stream data into a SQL Server 2019 Big Data Cluster. Then, he visualized the
DevOps, the cloud, and new database technologies mean our jobs as database administrators (DBAs) are changing at an ever-faster pace. If you’re fascinated by data and all the things you can do with it, it’s a thrilling time to be in the business. Here are five of the skills we see as essential parts of the modern DBA’s toolkit.
- Expertise with multiple technologies: The one-size-fits-all approach to databases is fading. Just as application developers are moving toward a microservices model that focuses on the right tool for the job, organizations are choosing databases according to specific workload needs. The more you know about Hadoop, NoSQL, graph, and other technologies, the better positioned you will be to make a positive contribution to the conversation.
- Collaboration: Speaking of conversations, DBAs will increasingly need to become contributing members of application teams rather than siloed specialists off in their own corners. DevOps tends to break down the barriers between IT functions. Understanding how applications work, and, even better, how they deliver business value, puts you in a position to be a creative problem solver and all-around data expert.
- Data science skills: Machine learning and AI are among the fastest-growing uses of data in the enterprise today.
In its most recent releases, SQL Server went beyond relational data, adding support for graph data and for R and Python machine learning, while becoming available on Linux and in containers in addition to Windows. At the same time, organizations are challenged by the amount of data stored in different formats and silos, and by the expertise required to extract value from that data. Through enhancements in data virtualization and platform management, Microsoft SQL Server 2019 Big Data Clusters provides an innovative and integrated solution to overcome these difficulties. It incorporates Apache Spark and HDFS in addition to SQL Server, on a platform built entirely from containerized applications, designed to derive new intelligent insights from data.
Modernize your data estate with a scalable data virtualization and analytics platform
Data integration strategies based on extract, transform, and load (ETL) result in data duplication and transformations that diminish data quality, raise maintenance costs, and introduce security risks. SQL Server 2019 takes a new approach to data integration called data virtualization, which queries disparate and diverse data sources without moving the data. Out-of-the-box connectors for data sources like Oracle, Teradata, or MongoDB help you keep the data in place and secure, with less maintenance and storage cost.
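As a sketch of what such a connector looks like in practice, here is one way to define an Oracle source and query it in place; the host name, credentials, passwords, and table layout are all hypothetical, and the T-SQL is submitted through sqlcmd:

```shell
# Sketch: data virtualization against Oracle with the built-in connector.
# Every name, secret, and column below is a placeholder for illustration.
sqlcmd -S localhost -U sa -P "$SA_PASSWORD" -d SalesDB -Q "
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Strong-Passw0rd!';  -- once per DB
CREATE DATABASE SCOPED CREDENTIAL OracleCred
    WITH IDENTITY = 'oracle_user', SECRET = 'oracle_password';
CREATE EXTERNAL DATA SOURCE OracleSales
    WITH (LOCATION = 'oracle://oraclehost:1521', CREDENTIAL = OracleCred);
CREATE EXTERNAL TABLE dbo.OrdersFromOracle
    (OrderId INT, CustomerId INT, Amount DECIMAL(10,2))
    WITH (LOCATION = '[XE].[SALES].[ORDERS]', DATA_SOURCE = OracleSales);
SELECT TOP 10 * FROM dbo.OrdersFromOracle;   -- queried in place, no ETL copy
"
```

The rows never leave Oracle until query time, which is exactly the point of the virtualization approach.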
Source: https://cloudblogs.microsoft.com/sqlserver/2019/11/07/new-in-azure-synapse-analytics-cicd-for-sql-analytics-using-sql-server-data-tools/

At Microsoft Ignite 2019, we announced Azure Synapse Analytics, a major evolution of Azure SQL Data Warehouse. The same industry-leading data warehouse now provides a whole new level of performance, scale, and analytics capabilities. One of these capabilities is SQL Analytics, which provides