SQL Server 2019 - What’s New?


During the 2018 Ignite conference, Microsoft released the public preview of SQL Server 2019. Several enhancements have been made in this release to help transform the Microsoft data platform and, more importantly, to improve the user experience.

Most of the improvements are linked to the relational database engine, especially since the BI stack development is no longer directly connected to the database engine release. This is more or less the same thing that happened with SQL Server 2017. In fact, there’s only one major feature being introduced, among a host of minor enhancements.

Database Performance Enhancements

Over the years, Microsoft has made a tradition of tweaking each release to improve performance. Some of these updates benefit all users, while others address only edge cases.

One of the notable optimizations was introduced in SQL Server 2017. When a query executes, the engine grants it a certain amount of memory for operations such as sorting and joining before the results are delivered to the user. The engine sizes this memory grant for each query based on the statistics it maintains about the data.

For example, a query that joins 5 billion rows will certainly need a larger memory grant than one that joins only 50 rows. There are instances, however, where the statistics are inaccurate, and the resulting grant is too small or too large, which leads to performance or concurrency problems.

In SQL Server 2017, Microsoft addressed this problem: on repeated executions of a query, the engine automatically adjusts the memory granted to it based on the runtime statistics of the previous execution. While this was a good idea, there was one limitation – it only worked for queries running in batch execution mode, meaning queries that use a columnstore index.

The catch with columnstore index queries is that they are ideal for analytical workloads rather than transaction processing. Fast forward to SQL Server 2019, and Microsoft has made these dynamic memory grants available for all queries.
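This behavior comes along with the new database compatibility level, and it can be switched off per database if the feedback loop ever causes plan instability. A minimal sketch (run against a SQL Server 2019 instance; the configuration names are the documented ones):

```sql
-- Memory grant feedback for row-mode queries ships with
-- compatibility level 150 (SQL Server 2019).
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 150;

-- If the adjusting grants cause plan instability, the feature can
-- be disabled for the current database without changing the
-- compatibility level.
ALTER DATABASE SCOPED CONFIGURATION
    SET ROW_MODE_MEMORY_GRANT_FEEDBACK = OFF;
```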

Staying with batch execution mode: this feature, which processes data in chunks of roughly 1,000 rows at a time and speeds up aggregate functions like standard deviation, sums, and averages, was previously available only for columnstore indexes.

In SQL Server 2019, Microsoft has introduced batch mode on rowstore. Limited testing results for the early releases have been impressive, especially for aggregation queries.
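Like memory grant feedback, batch mode on rowstore is tied to compatibility level 150 and can be toggled for testing. A sketch, with a hypothetical table name:

```sql
-- Enabled automatically under compatibility level 150; this switch
-- turns it off (or back on) for the current database.
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ON_ROWSTORE = OFF;

-- Alternatively, a per-query hint forces the old row-mode behavior
-- for comparison (dbo.Sales is a placeholder table):
SELECT COUNT(*) FROM dbo.Sales
OPTION (USE HINT('DISALLOW_BATCH_MODE'));
```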

One common data warehouse performance concern involves counting the distinct values of a single column. In a database, generating a distinct count is typically very expensive, especially over a very large table. You can see the cost replicated in BI operations, which need to report things like the number of products each customer buys, or the sales records for each product. In SQL Server 2019, Microsoft has added a new function, APPROX_COUNT_DISTINCT. It uses statistical techniques to deliver a near-accurate count, and does so much faster.
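The function is a drop-in alternative to COUNT(DISTINCT ...). A minimal sketch, with hypothetical table and column names:

```sql
-- Exact distinct count: accurate, but memory-hungry on big tables.
SELECT COUNT(DISTINCT CustomerID) AS exact_count
FROM dbo.Orders;

-- Approximate version: trades a small, bounded error for a much
-- smaller memory footprint and faster execution.
SELECT APPROX_COUNT_DISTINCT(CustomerID) AS approx_count
FROM dbo.Orders;
```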

Support for persistent memory is another hardware-related performance improvement Microsoft has introduced. Persistent memory is an effective block-level storage option because it writes at RAM speed.

In the database realm, this matters a great deal, because database performance is so often bound by the speed of the underlying storage. In SQL Server 2016, Microsoft began supporting persistent memory (NVDIMM, as it's referred to) for the tail of the transaction log. Building on this, writes to a database can be handled faster in subsequent releases.

Microsoft has extended support for these devices in SQL Server 2019, particularly for Intel Optane DC persistent memory on Windows Server 2019. With this extension, any database file can now be stored on persistent memory just like normal block-based storage.

Microsoft didn't leave out Linux users either. For Linux, Microsoft created an enhancement that maps database files directly into memory, bypassing kernel calls into the storage stack, which are expensive.

The storage paths on Linux and Windows might not be the same, but it's increasingly evident that Microsoft is working toward databases that deliver the best possible performance by living fully in persistent RAM.


  • Always Encrypted Using Secure Enclaves

Always Encrypted protects sensitive data both in memory and over the wire by performing encryption and decryption at the client endpoint. This, however, creates processing challenges, including the inability to filter or perform computations on the server. As a result, the entire data set must be sent to the client before, for example, a range search can be performed.

What is an enclave? It's a protected memory segment to which filtering and computations can be delegated. On Windows, enclave security is based on virtualization. The data is encrypted in the engine and remains encrypted from the engine's point of view, but within the enclave it can still be decrypted and re-encrypted. All you need to do is add ENCLAVE_COMPUTATIONS to the column master key, which you can do simply by checking the "allow enclave computations" checkbox in SSMS.
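In T-SQL, the same thing is expressed on the column master key definition. A sketch only; the key path and signature below are placeholders that SSMS normally generates for you:

```sql
-- Enclave-enabled column master key (certificate store provider).
-- KEY_PATH and SIGNATURE are placeholders, not working values.
CREATE COLUMN MASTER KEY MyEnclaveCMK
WITH (
    KEY_STORE_PROVIDER_NAME = N'MSSQL_CERTIFICATE_STORE',
    KEY_PATH = N'CurrentUser/My/<certificate-thumbprint>',
    -- The signature binds the ENCLAVE_COMPUTATIONS property to the
    -- key, so it can't be toggled without the key owner noticing.
    ENCLAVE_COMPUTATIONS (SIGNATURE = 0x0123456789ABCDEF)
);
```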

This allows you to encrypt data almost immediately and in place. It's faster and more efficient than the former approach, which used an application or the Set-SqlColumnEncryption cmdlet to pull all the data out of the database, encrypt it, and then send it all back.

Given this update, you can perform range searches, wildcard searches, ordering, and so forth. You can also perform in-place encryption within queries without compromising security, because the enclave is designed to allow decryption and encryption on the server itself. Within the enclave, you can even execute an encryption key rotation.
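Once a column is encrypted with an enclave-enabled key, a comparison like the one below can be delegated to the enclave rather than forcing the whole column to the client. A sketch with hypothetical table and column names; the client connection must have Always Encrypted (and enclave attestation) enabled so the parameter is encrypted transparently:

```sql
-- Salary is assumed to be encrypted with randomized encryption
-- under an enclave-enabled column encryption key.
DECLARE @MinSalary MONEY = 50000;

SELECT EmployeeID, LastName
FROM   dbo.Employees
WHERE  Salary > @MinSalary;   -- range predicate runs inside the enclave
```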

For many organizations that have been struggling with encryption and other data management concerns, this is a game changer. Work is still ongoing to perfect all the optimizations, and some of them are not enabled by default; the documentation on enabling rich computations explains how to turn them on.

  • Certificate Management in Configuration Manager

Managing TLS and SSL certificates has always been a challenge for a lot of database managers. Usually, they end up performing lots of tedious work and running unique scripts simply to maintain or deploy certificates across the entire enterprise.

In SQL Server 2019, updates have been made to SQL Server Configuration Manager. This allows you to validate and view any of the certificates of interest easily, find those that are almost expiring and synchronize the deployment of certificates in all the replicas of an Availability Group (from the primary), or all the nodes in a Failover Cluster Instance (from the active node).

These operations should work just fine for anyone using an older version of SQL Server, especially if you run them from a SQL Server 2019 version of your SQL Server Configuration Manager.

  • Built-In Data Classification and Auditing

In SSMS 17.5, Microsoft added data classification functionality via a wizard within SSMS. This enables users to identify columns that hold sensitive information, or that might not conform to compliance standards in use such as GDPR, PCI, SOC 2, and HIPAA.

This wizard runs an algorithm that identifies and reports columns that might carry such risks, and you're free to add more on your own. From there you can adjust the suggestions, or remove columns you're uncomfortable with from the list. The classifications created are stored as extended properties, and an SSMS report uses the same information to show the columns that have been classified. Keep in mind that these properties might not be visible outside this report.

A new command was created for this metadata in SQL Server 2019. The command, ADD SENSITIVITY CLASSIFICATION, is also available in Azure SQL Database. It lets you do the same tagging as the SSMS wizard, but the information is no longer stored as an extended property. In addition, the classification surfaces in the audit log in an XML column, data_sensitivity_information, which records the sensitivity of all the data touched during the audited event.
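The statement takes a three-part column reference plus a label and information type. A sketch, with hypothetical table and column names:

```sql
-- Tag a column with a sensitivity label and information type.
ADD SENSITIVITY CLASSIFICATION TO dbo.Customers.Email
WITH (
    LABEL = 'Confidential',
    INFORMATION_TYPE = 'Contact Info'
);

-- The stored classifications can be inspected via catalog metadata:
SELECT * FROM sys.sensitivity_classifications;
```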


  • Lightweight Profiling on by Default

This enhancement has been around for a while and has seen several tweaks along the way. It was first introduced in SQL Server 2014 as the DMV sys.dm_exec_query_profiles, which lets users running a query collect diagnostic information about all the operators involved in it. With this information, you can determine which operators did the most work, and why. This is ideal for troubleshooting.

Even a user who wasn't running the query could get a glimpse into this data for any session of interest, as long as STATISTICS PROFILE or STATISTICS XML was enabled in that session. Alternatively, this was possible through the extended event query_post_execution_showplan – but that event puts a considerable strain on overall performance.

In Management Studio 2016, functionality was added to show real-time data movement through an execution plan, based on the information from this DMV, making it a very powerful troubleshooting tool. Plan Explorer is another option, offering live and replay capabilities for visualizing the data over a query's duration.

SQL Server 2016 SP1 made it possible to enable a lightweight version of this data collection for all sessions, using trace flag 7412 or the extended event query_thread_profile. This lets you access the important information about any session of interest without having to explicitly enable anything inside that session, and without a significant negative effect on performance.

In SQL Server 2019, the thread profile is enabled by default, so you don't need a trace flag or an extended event session running for an individual query, and you can look at the DMV data for all concurrent sessions at any time. It can be turned off using the LIGHTWEIGHT_QUERY_PROFILING database-scoped configuration; that syntax doesn't yet work in CTP 2.0, but there are plans to fix it in a later release.
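In practice this means you can simply query the DMV for live, operator-level progress of every running request. A sketch:

```sql
-- Live operator-level progress for all running requests; in SQL
-- Server 2019 no trace flag or extended event session is required.
SELECT session_id, node_id, physical_operator_name,
       row_count, estimate_row_count
FROM   sys.dm_exec_query_profiles;

-- Opting out of the default lightweight profiling, per database:
ALTER DATABASE SCOPED CONFIGURATION
    SET LIGHTWEIGHT_QUERY_PROFILING = OFF;
```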

  • Clustered Columnstore Index Statistics Available in Clone Databases

When you clone a database in current versions of SQL Server, you only get the original statistics object for a clustered columnstore index – any updates made to the table after the index was created are not reflected.

If you use the clone to tune queries or run other performance tests that depend on cardinality estimates, those results will not be valid. The workarounds for this limitation are not easy to remember, and they can also be quite expensive.

In SQL Server 2019, the updated statistics are automatically available in the clone. You can therefore test query scenarios and obtain a realistic plan based on the actual statistics, without having to manually capture the STATS_STREAM from each table.
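Cloning itself is unchanged: a single DBCC command produces the schema-and-statistics-only copy. A sketch, with hypothetical database names:

```sql
-- Creates SalesDB_Clone: schema, statistics, and Query Store data
-- from SalesDB, but no user data. In SQL Server 2019 the clone now
-- carries up-to-date clustered columnstore statistics as well.
DBCC CLONEDATABASE (SalesDB, SalesDB_Clone);
```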

  • New Function to Retrieve Page Info

For a very long time, DBCC PAGE and DBCC IND have been used to collect information about the pages that make up a table, index, or partition. However, these commands are undocumented and unsupported, and automating solutions to problems that span more than one page or index can be very tedious.

Later, sys.dm_db_database_page_allocations was introduced – a dynamic management function (DMF) that returns a set representing all pages in the object in question. However, the function has a predicate pushdown issue that becomes a real concern on larger tables: to return information about a single page, it still has to read the whole structure, which can be prohibitively expensive.

SQL Server 2019 brings a new DMF, sys.dm_db_page_info, which returns the information for a given page without that unnecessary overhead. To use it in the current builds, you have to know the page number you're looking for beforehand. That might be intentional – it's what guarantees the performance.
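The function takes a database ID, file ID, page ID, and mode. A sketch; the second query uses a hypothetical table and pairs the DMF with the undocumented fn_PhysLocCracker helper to find which page a row lives on:

```sql
-- Inspect one known page: current database, file 1, page 9
-- (the boot page), with full detail.
SELECT * FROM sys.dm_db_page_info(DB_ID(), 1, 9, 'DETAILED');

-- Locate the page behind each row of a table (dbo.Orders is a
-- placeholder) and pull its header info.
SELECT p.*
FROM dbo.Orders AS o
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) AS plc
CROSS APPLY sys.dm_db_page_info(DB_ID(), plc.file_id,
                                plc.page_id, 'LIMITED') AS p;
```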

For more information on Microsoft SQL Server, or to speak to a SQL licensing expert, contact Royal Discount at 1-877-292-7712 for a free consultation.