Statistics update 2

The previous post about statistics covered updating them based on a value stored in a utility table.

The present post is also about updating statistics, but this time based on a percentage threshold, which may be more flexible. This value could also live in a utility table; it's up to you how to store it.

In my case I use a value of 5.0, so that I will not update all objects, only some of them. As mentioned, this value could be extracted from a utility table, and could be unique per statistics object (stats_id), per object (object_id), or per database…

 

The starting point is the calculation of the percentage value: sp.modification_counter / sp.rows

where :

  • sp.modification_counter = current number of modifications
  • sp.rows = number of rows in that object

The actual formula is:

CAST((sp.modification_counter/(sp.rows * 1.0)) * 100.0 AS decimal(6,4))
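A note on the * 1.0: both columns are integers, so without it T-SQL would do integer division and truncate the ratio to 0. A minimal sketch using the values from the PK_Sales_Orders row shown further below:

```sql
-- Why the * 1.0 matters: with two integer operands, division truncates.
DECLARE @modification_counter int = 3966
       ,@rows int = 69629;

SELECT
     @modification_counter / @rows AS integer_division      -- 0 (truncated)
    ,CAST((@modification_counter /(@rows * 1.0)) * 100.0
          AS decimal(6,4))         AS modification_Percent; -- 5.6959
```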

 

This query lists all the statistics of the object:

SELECT
OBJECT_SCHEMA_NAME(s.object_id)
+ '.' + OBJECT_NAME(s.object_id) as objectName
,s.name as statisticsName
,s.object_id as objectID
,s.stats_id as statsID
,sp.modification_counter
,sp.rows as noRows
,CAST((sp.modification_counter/(sp.rows *1.0)) *100.0 AS decimal(6,4)) as modification_Percent
,sp.last_updated
,sc.colNames
FROM sys.stats as s
CROSS APPLY sys.dm_db_stats_properties(s.object_id,s.stats_id) as sp
CROSS APPLY(SELECT STRING_AGG(c.name,',') as colNames
FROM sys.stats_columns as sc
INNER JOIN sys.columns as c
ON sc.column_id = c.column_id
and sc.object_id = c.object_id
WHERE sc.object_id = s.object_id
and sc.stats_id = s.stats_id
) as sc
WHERE s.object_id = OBJECT_ID('Sales.Orders')

 

objectName statisticsName objectID statsID modification_counter noRows modification_Percent last_updated colNames
Sales.Orders PK_Sales_Orders 1154103152 1 3966 69629 5.6959 2016-06-02 10:39:54.2533333 OrderID
Sales.Orders FK_Sales_Orders_CustomerID 1154103152 2 0 73595 0.0000 2018-07-25 10:52:18.0733333 CustomerID
Sales.Orders FK_Sales_Orders_SalespersonPersonID 1154103152 3 0 73595 0.0000 2018-07-25 10:52:18.4233333 SalespersonPersonID
Sales.Orders FK_Sales_Orders_PickedByPersonID 1154103152 4 4751 71018 6.6899 2016-06-02 10:43:25.6400000 PickedByPersonID
Sales.Orders FK_Sales_Orders_ContactPersonID 1154103152 5 2012 71583 2.8107 2016-06-02 10:45:05.4033333 ContactPersonID
Sales.Orders _WA_Sys_0000000E_44CA3770 1154103152 6 0 73595 0.0000 2018-07-25 10:53:22.7400000 PickingCompletedWhen
Sales.Orders _WA_Sys_0000000A_44CA3770 1154103152 7 3495 70100 4.9857 2016-06-02 10:41:14.3700000 IsUndersupplyBackordered

Adding the additional predicate on the percentage…

WHERE s.object_id = OBJECT_ID('Sales.Orders')
AND CAST((sp.modification_counter/(sp.rows *1.0)) *100.0 AS decimal(6,4)) > CAST(5.0 AS decimal(6,4))

will output these records:

objectName statisticsName modification_counter noRows modification_Percent
Sales.Orders PK_Sales_Orders 3966 69629 5.6959
Sales.Orders FK_Sales_Orders_PickedByPersonID 4751 71018 6.6899

Now, just prepare the dynamic SQL for the update:

 
DECLARE @nvc_SQLText AS nvarchar(4000) = N'';

SELECT
@nvc_SQLText = STRING_AGG(N'UPDATE STATISTICS '
+ OBJECT_SCHEMA_NAME(s.object_id)
+ N'.' + OBJECT_NAME(s.object_id)
+ N'(' + s.name + N')'
+ N' WITH FULLSCAN;' + char(10) ,N'')
FROM sys.stats as s
CROSS APPLY sys.dm_db_stats_properties(s.object_id,s.stats_id) as sp
WHERE s.object_id = OBJECT_ID('Sales.Orders')
AND CAST((sp.modification_counter/(sp.rows *1.0)) *100.0 AS decimal(6,4)) > CAST(5.0 AS decimal(6,4))

SELECT @nvc_SQLText

 

output:

UPDATE STATISTICS Sales.Orders(PK_Sales_Orders) WITH FULLSCAN;
UPDATE STATISTICS Sales.Orders(FK_Sales_Orders_PickedByPersonID) WITH FULLSCAN;

 

and run it:

EXEC sp_executesql @nvc_SQLText;

 

 

Now, check whether any objects are still over the threshold:

 
SELECT
OBJECT_SCHEMA_NAME(s.object_id)
+ '.' + OBJECT_NAME(s.object_id) as objectName
,s.name as statisticsName
,sp.modification_counter
,sp.rows as noRows
,CAST((sp.modification_counter/(sp.rows *1.0)) *100.0 AS decimal(6,4)) as modification_Percent
FROM sys.stats as s
CROSS APPLY sys.dm_db_stats_properties(s.object_id,s.stats_id) as sp
CROSS APPLY(SELECT STRING_AGG(c.name,',') as colNames
FROM sys.stats_columns as sc
INNER JOIN sys.columns as c
ON sc.column_id = c.column_id
and sc.object_id = c.object_id
WHERE sc.object_id = s.object_id
and sc.stats_id = s.stats_id
) as sc
WHERE s.object_id = OBJECT_ID('Sales.Orders')
AND CAST((sp.modification_counter/(sp.rows *1.0)) *100.0 AS decimal(6,4))
> CAST(5.0 AS decimal(6,4))

 

output :

No rows.

#t-sql #sqlserver #statistics #update

 

 

S

 

 

 

 

Statistics update

How about a home-made script that checks the statistics modification counter, compares it against a company/object threshold, and, if the counter is above that threshold, triggers an update of that particular statistics object?

For this we need a utility table that holds the threshold that triggers the update. Here it is kept per object_id and stats_id (for more flexibility), but it could be just one per database or one per object_id…

Example: for the statistics _WA_Sys_0000000E_44CA3770, if the modification counter is 4563 and the threshold is 3000, we should trigger an update for this statistics object.

 

Here are the steps:

  1. List the statistics (for indexes and for columns) of an object. In this case for Sales.Orders
  2. Show the field that holds the modification counter associated with this statistics
  3. Do an update/delete in a transaction with rollback – for field PickingCompletedWhen
  4. Create the utility table
  5. Create the dynamic string to run the update statistics
  6. Check the value after the statistics update

 

I will start with step 4, the utility table. This should be saved in a "utility" database.

use db_workspace
go

DROP TABLE IF EXISTS dbo.utility_stats;
GO

CREATE TABLE dbo.utility_stats
(
uts_id INT IDENTITY(1,1) NOT NULL
,uts_object_id INT NOT NULL
,uts_stats_id INT NOT NULL
,uts_threshold INT NULL
);
GO

ALTER TABLE dbo.utility_stats
ADD CONSTRAINT pk_utility_stats__uts_object_id_uts_stats_id
PRIMARY KEY CLUSTERED(uts_object_id,uts_stats_id);
GO

 

Step 1: List the statistics (for indexes and for columns) of an object.
In this case for Sales.Orders

SELECT
OBJECT_SCHEMA_NAME(s.object_id)
+ '.' + OBJECT_NAME(s.object_id) as objectName
,s.name as statisticsName
,s.object_id as objectID
FROM sys.stats as s
WHERE s.object_id = OBJECT_ID('Sales.Orders')

objectName statisticsName objectID
Sales.Orders PK_Sales_Orders 1154103152
Sales.Orders FK_Sales_Orders_CustomerID 1154103152
Sales.Orders FK_Sales_Orders_SalespersonPersonID 1154103152
Sales.Orders FK_Sales_Orders_PickedByPersonID 1154103152
Sales.Orders FK_Sales_Orders_ContactPersonID 1154103152
Sales.Orders _WA_Sys_0000000E_44CA3770 1154103152
Sales.Orders _WA_Sys_0000000A_44CA3770 1154103152

Step 2: Show the field that holds the modification counter associated
with this statistics

SELECT
OBJECT_SCHEMA_NAME(s.object_id)
+ '.' + OBJECT_NAME(s.object_id) as objectName
,s.name as statisticsName
,s.object_id as objectID
,s.stats_id as statsID
,sp.modification_counter
,sp.last_updated
,sc.colNames
FROM sys.stats as s
CROSS APPLY sys.dm_db_stats_properties(s.object_id,s.stats_id) as sp
CROSS APPLY(SELECT STRING_AGG(c.name,',') as colNames
FROM sys.stats_columns as sc
INNER JOIN sys.columns as c
ON sc.column_id = c.column_id
and sc.object_id = c.object_id
WHERE sc.object_id = s.object_id
and sc.stats_id = s.stats_id
) as sc
WHERE s.object_id = OBJECT_ID('Sales.Orders')

objectName statisticsName objectID statsID modification_counter last_updated colNames
Sales.Orders PK_Sales_Orders 1154103152 1 3966 2016-06-02 10:39:54.2533333 OrderID
Sales.Orders FK_Sales_Orders_CustomerID 1154103152 2 0 2018-07-25 10:52:18.0733333 CustomerID
Sales.Orders FK_Sales_Orders_SalespersonPersonID 1154103152 3 0 2018-07-25 10:52:18.4233333 SalespersonPersonID
Sales.Orders FK_Sales_Orders_PickedByPersonID 1154103152 4 4751 2016-06-02 10:43:25.6400000 PickedByPersonID
Sales.Orders FK_Sales_Orders_ContactPersonID 1154103152 5 2012 2016-06-02 10:45:05.4033333 ContactPersonID
Sales.Orders _WA_Sys_0000000E_44CA3770 1154103152 6 0 2018-07-25 10:53:22.7400000 PickingCompletedWhen
Sales.Orders _WA_Sys_0000000A_44CA3770 1154103152 7 3495 2016-06-02 10:41:14.3700000 IsUndersupplyBackordered

Step 3: Do an update/delete in a transaction with rollback – for field PickingCompletedWhen

DECLARE @i_modification_counter_before int = 0
,@i_modification_counter_after int = 0;

SELECT
@i_modification_counter_before = sp.modification_counter
FROM sys.stats as s
CROSS APPLY sys.dm_db_stats_properties(s.object_id,s.stats_id) as sp
WHERE s.object_id = OBJECT_ID('Sales.Orders')
AND s.stats_id = 6
BEGIN TRAN
UPDATE TOP(1000) Sales.Orders
SET PickingCompletedWhen = SYSDATETIME()

SELECT
@i_modification_counter_after = sp.modification_counter
FROM sys.stats as s
CROSS APPLY sys.dm_db_stats_properties(s.object_id,s.stats_id) as sp
WHERE s.object_id = OBJECT_ID('Sales.Orders')
AND s.stats_id = 6

ROLLBACK TRAN
SELECT @i_modification_counter_before as beforeCounter
,@i_modification_counter_after as afterCounter
,@i_modification_counter_after-@i_modification_counter_before as differ

beforeCounter afterCounter differ
7480 8480 1000

 

Step 4: populate the utility table with thresholds:

INSERT INTO db_workspace.dbo.utility_stats(uts_object_id,uts_stats_id,uts_threshold)
SELECT
s.object_id as objectID
,s.stats_id as statsID
,5000
FROM WideWorldImporters.sys.stats as s
WHERE s.object_id = OBJECT_ID('WideWorldImporters.Sales.Orders');
GO

 

SELECT * FROM db_workspace.dbo.utility_stats
/*
uts_id uts_object_id uts_stats_id uts_threshold
1 1154103152 1 5000
2 1154103152 2 5000
3 1154103152 3 5000
4 1154103152 4 5000
5 1154103152 5 5000
6 1154103152 6 5000
7 1154103152 7 5000
*/

Step 5: Create the dynamic string to run the statistics update, for the statistics with a modification counter above the threshold.

 

DECLARE @nvc_SQLText AS nvarchar(4000) = N'';

SELECT
@nvc_SQLText = STRING_AGG(N' UPDATE STATISTICS '
+ OBJECT_SCHEMA_NAME(s.object_id)
+ N'.' + OBJECT_NAME(s.object_id)
+ N'(' + s.name + N')'
+ N' WITH FULLSCAN ' + char(13) + char(10) ,N';')
FROM sys.stats as s
CROSS APPLY sys.dm_db_stats_properties(s.object_id,s.stats_id) as sp
LEFT JOIN db_workspace.dbo.utility_stats as us
ON us.uts_object_id = s.object_id
AND us.uts_stats_id = s.stats_id
WHERE s.object_id = OBJECT_ID('Sales.Orders')
AND sp.modification_counter >= us.uts_threshold

SELECT @nvc_SQLText

/* output :
UPDATE STATISTICS Sales.Orders(_WA_Sys_0000000E_44CA3770) WITH FULLSCAN
*/

EXEC sp_executesql @nvc_SQLText;

GO

 

Step 6: Check the value after the statistics update, for that particular statistics object:
_WA_Sys_0000000E_44CA3770

use WideWorldImporters
go

SELECT
OBJECT_SCHEMA_NAME(s.object_id)
+ '.' + OBJECT_NAME(s.object_id) as objectName
,s.name as statisticsName
,s.object_id as objectID
,s.stats_id as statsID
,sp.modification_counter
,sp.last_updated
,sc.colNames
,us.uts_threshold
FROM sys.stats as s
CROSS APPLY sys.dm_db_stats_properties(s.object_id,s.stats_id) as sp
CROSS APPLY(SELECT STRING_AGG(c.name,',') as colNames
FROM sys.stats_columns as sc
INNER JOIN sys.columns as c
ON sc.column_id = c.column_id
and sc.object_id = c.object_id
WHERE sc.object_id = s.object_id
and sc.stats_id = s.stats_id
) as sc
LEFT JOIN db_workspace.dbo.utility_stats as us
ON us.uts_object_id = s.object_id
AND us.uts_stats_id = s.stats_id
WHERE s.object_id = OBJECT_ID('Sales.Orders')

/*
objectName statisticsName objectID statsID modification_counter last_updated colNames uts_threshold
Sales.Orders _WA_Sys_0000000E_44CA3770 1154103152 6 0 2018-07-25 10:53:22.7400000 PickingCompletedWhen 5000
*/

That's it, folks!

 

S

 

Temp table cache 5

Conclusions:

  • DROP statement vs no DROP statement = no difference, both are cached
  • altering the table vs creating the full DDL from the start = here there is a difference: not cached / cached
  • adding an index later vs a DDL statement with an inline index = not cached / cached

 

For more info, visit this and this.

 

S

 

Temp table cache 4

Adding index vs DDL statement with inline index

 

 

use WideWorldImporters
Go

CREATE OR ALTER PROCEDURE dbo.temp_table_cache_test_inlineIndex
AS
BEGIN
SET NOCOUNT ON;
CREATE TABLE #temp_inlineIndex
(InvoiceID int not null
,InvoiceMsg varchar(100) not null
,INDEX idx_nc_temp_InvoiceMsg_inline NONCLUSTERED(InvoiceMsg));

INSERT INTO #temp_inlineIndex(InvoiceID,InvoiceMsg)
SELECT TOP(10000) InvoiceID
, CAST(InvoiceID as VARCHAR(30))
+ '-' + CAST(NEWID() as VARCHAR(50))
FROM Sales.Invoices;

DROP TABLE #temp_inlineIndex;
END
GO
CREATE OR ALTER PROCEDURE dbo.temp_table_cache_test_laterIndex
AS
BEGIN
SET NOCOUNT ON;
CREATE TABLE #temp_laterIndex
(InvoiceID int not null
,InvoiceMsg varchar(100) not null);

CREATE NONCLUSTERED INDEX idx_nc_temp_InvoiceMsg_after ON #temp_laterIndex
(InvoiceMsg)

INSERT INTO #temp_laterIndex(InvoiceID,InvoiceMsg)
SELECT TOP(10000) InvoiceID
, CAST(InvoiceID as VARCHAR(30))
+ '-' + CAST(NEWID() as VARCHAR(50))
FROM Sales.Invoices;

DROP TABLE #temp_laterIndex;
END
GO

 

 

Test part:

 

DBCC FREEPROCCACHE;
GO

select entries_count
from sys.dm_os_memory_cache_counters
where name = N'Temporary Tables & Table Variables';
GO

Exec dbo.temp_table_cache_test_inlineIndex;
GO 10

select entries_count
from sys.dm_os_memory_cache_counters
where name = N'Temporary Tables & Table Variables';
GO
/* now the version with the index created afterwards */
DBCC FREEPROCCACHE;
GO

select entries_count
from sys.dm_os_memory_cache_counters
where name = N'Temporary Tables & Table Variables';
GO

Exec dbo.temp_table_cache_test_laterIndex;
GO 10

select entries_count
from sys.dm_os_memory_cache_counters
where name = N'Temporary Tables & Table Variables';
GO

 

output:

 

entries_count
0

entries_count
1

So, for the inline index, the temp table is cached.

 

entries_count
0

entries_count
0

 

No caching when the index is added afterwards with a separate CREATE INDEX.

 

S

Temp Table cache 3

Next step: altering the table vs creating the full DDL from the start.

 

use WideWorldImporters
Go

CREATE OR ALTER PROCEDURE dbo.temp_table_cache_test_cleanDDL
AS
BEGIN
SET NOCOUNT ON;
CREATE TABLE #temp_cleanDDL
(InvoiceID int not null
,InvoiceMsg varchar(100) not null
,someAdditionalField int null);

INSERT INTO #temp_cleanDDL(InvoiceID,InvoiceMsg,someAdditionalField)
SELECT TOP(10000) InvoiceID
, CAST(InvoiceID as VARCHAR(30))
+ '-' + CAST(NEWID() as VARCHAR(50))
,InvoiceID + 10
FROM Sales.Invoices;

DROP TABLE #temp_cleanDDL;
END
GO
CREATE OR ALTER PROCEDURE dbo.temp_table_cache_test_alterTable
AS
BEGIN
SET NOCOUNT ON;
CREATE TABLE #temp_alterTable
(InvoiceID int not null
,InvoiceMsg varchar(100) not null);

ALTER TABLE #temp_alterTable
ADD someAdditionalField INT Null

INSERT INTO #temp_alterTable(InvoiceID,InvoiceMsg,someAdditionalField)
SELECT TOP(10000) InvoiceID
, CAST(InvoiceID as VARCHAR(30))
+ '-' + CAST(NEWID() as VARCHAR(50))
,InvoiceID + 10
FROM Sales.Invoices;

DROP TABLE #temp_alterTable;
END
GO

 

Above is the creation part.

Next comes the test part:

 

select entries_count
from sys.dm_os_memory_cache_counters
where name = N'Temporary Tables & Table Variables';
GO

Exec dbo.temp_table_cache_test_cleanDDL;
GO 10

select entries_count
from sys.dm_os_memory_cache_counters
where name = N'Temporary Tables & Table Variables';
GO

 

output:

entries_count
0

entries_count
1

 

So, the clean DDL will be cached.

 
/* now the ALTER TABLE version */
DBCC FREEPROCCACHE;
GO

select entries_count
from sys.dm_os_memory_cache_counters
where name = N'Temporary Tables & Table Variables';
GO

Exec dbo.temp_table_cache_test_alterTable;
GO 10

select entries_count
from sys.dm_os_memory_cache_counters
where name = N'Temporary Tables & Table Variables';
GO

output:

entries_count
0

entries_count
0

The ALTER TABLE version will NOT be cached.

 

 

S

 

 

 

Temp table cache 2

First test: DROP TABLE statement vs no DROP TABLE statement.

 

I have two stored procedures:

use WideWorldImporters
Go

CREATE OR ALTER PROCEDURE dbo.temp_table_cache_test_WithDROP
AS
BEGIN
SET NOCOUNT ON;
CREATE TABLE #temp_WithDrop
(InvoiceID int not null
,InvoiceMsg varchar(100) not null);

INSERT INTO #temp_WithDrop(InvoiceID,InvoiceMsg)
SELECT TOP(10000) InvoiceID
, CAST(InvoiceID as VARCHAR(30))
+ '-' + CAST(NEWID() as VARCHAR(50))
FROM Sales.Invoices;

/* explicit DROP TABLE statement */
DROP TABLE #temp_WithDrop;
END
GO
CREATE OR ALTER PROCEDURE dbo.temp_table_cache_test_noDROP
AS
BEGIN
SET NOCOUNT ON;
CREATE TABLE #temp_noDrop
(InvoiceID int not null
,InvoiceMsg varchar(100) not null);

INSERT INTO #temp_noDrop(InvoiceID,InvoiceMsg)
SELECT TOP(10000) InvoiceID
, CAST(InvoiceID as VARCHAR(30))
+ '-' + CAST(NEWID() as VARCHAR(50))
FROM Sales.Invoices;

--DROP TABLE #temp_noDrop;
END
GO

 

Now, let's call them and see, with the help of the DMV sys.dm_os_memory_cache_counters:

select entries_count
from sys.dm_os_memory_cache_counters
where name = N'Temporary Tables & Table Variables';

 

 

DBCC FREEPROCCACHE;
GO

select entries_count
from sys.dm_os_memory_cache_counters
where name = N'Temporary Tables & Table Variables';

Exec dbo.temp_table_cache_test_WithDROP;

select entries_count
from sys.dm_os_memory_cache_counters
where name = N'Temporary Tables & Table Variables';
GO

 

the output:

entries_count
0

entries_count
1

 
/* now without DROP */
DBCC FREEPROCCACHE;
GO

select entries_count
from sys.dm_os_memory_cache_counters
where name = N'Temporary Tables & Table Variables';

Exec dbo.temp_table_cache_test_noDROP;

select entries_count
from sys.dm_os_memory_cache_counters
where name = N'Temporary Tables & Table Variables';
GO

 

the same output:

entries_count
0

entries_count
1

 

So, DROP TABLE or no DROP TABLE will not prevent caching.

 

 

S

Temp Table cache

When you write code that uses temp tables, the key point is the object's ability to be cached.

When an object (temp table) is cached, a DROP TABLE statement will, behind the scenes, just rename the table, and the next CREATE TABLE will bring back the right name.

There are some things that will inhibit the caching:

  • a named constraint
  • a DDL statement after the initial CREATE statement
  • a module created WITH RECOMPILE
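A minimal sketch of the first point (the table and constraint names here are made up for illustration): naming the constraint is enough to inhibit caching, while the anonymous equivalent is not.

```sql
-- Hypothetical example: a NAMED constraint inhibits temp-table caching.
CREATE TABLE #temp_named_constraint
(InvoiceID int not null
,CONSTRAINT pk_temp_named PRIMARY KEY CLUSTERED(InvoiceID)); -- named: NOT cached

-- The anonymous version of the same constraint keeps caching possible.
CREATE TABLE #temp_anonymous_constraint
(InvoiceID int not null PRIMARY KEY CLUSTERED); -- unnamed: can be cached
```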

 

Some myths about caching:

  • DROP TABLE will not prevent it
  • UPDATE STATISTICS will not prevent it
  • TRUNCATE TABLE also will not prevent it

But manual creation of statistics WILL prevent it.

See more here

 

Now let’s test:

What should we test?

  • DROP TABLE statement vs no DROP TABLE statement
  • altering table vs creating DDL from start
  • adding index vs DDL statement with inline index

 

S

 

 

 

Job interview

I was in talks with a company, for a sql dev position.

I had two sessions, and there should have been another one these days. The manager was on site on Tuesday and Wednesday, and I was scheduled for Wednesday. On Tuesday afternoon, close to evening, I got a phone call saying I didn't need to come, because they had found a person good enough for the position and the manager would stop the interviews. OK. Next time I should schedule myself in the first part of the period, so that I'm among the first ones.

At least I'm glad they told me, so I didn't waste time on traveling.

So, back to the drawing board, for a new search.

The sudoku task was for this interview. 😀

 

 

S

Sudoku 4

I started to write/think about this solution when I received it as part of a job interview task. Yesterday, I found out that the SQL Server part is about 30% of the work/time; the remaining 70% is "click" development, a GUI solution where you configure the needs based on the requirements.

I don't know if this position is for me… too little SQL Server, and probably at the "select * from" level; something between junior and intermediate. OK, I'm not an expert, but I would like to become one, or get close to that level. And this way, I probably won't be able to.

The plus is that they use the latest technologies – SQL Server 2017. (The ones I talked to are using 2012…)

 

The sudoku, part 4: find, among the possible values, the value that appears only once – in a row, in a column, in a box.

ex: cell(1,1) = 1245678; cell(1,4) = 1479; cell(1,7) = 2579

So value 8 appears only once – then this is the one for cell(1,1), so cell(1,1) = 8.

Set it in the right cell, and for the whole row/column/box remove it from the candidates with an empty-string replace: replace(value_temp, value1, '')
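The elimination step can be sketched like this (the candidate string and variable name are made up for illustration):

```sql
-- Remove a placed value ('8') from a cell's candidate string.
DECLARE @value_temp varchar(9) = '1245678';

SELECT REPLACE(@value_temp, '8', '') AS remaining_candidates; -- '124567'
```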

 

S

 

 

 

Insert

I wrote about parallel INSERT in an older post. This is an update to it.

Starting with SQL Server 2014, the engine allowed parallelism for SELECT … INTO newTable operations.

Starting with SQL Server 2016, the engine allowed parallelism for INSERT … SELECT operations, but under some conditions:

  • use of TABLOCK: INSERT INTO table WITH (TABLOCK) … SELECT
  • database compatibility level >= 130

 

But there are some conditions that disable parallelism:

  • presence of a clustered index or any additional nonclustered indexes
  • an IDENTITY column
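Putting the conditions together, a minimal sketch of a parallel-eligible INSERT … SELECT (the target table name is made up; it must stay a heap with no indexes and no IDENTITY column, and the database must be at compatibility level 130 or higher):

```sql
-- Heap target: no clustered/nonclustered indexes, no IDENTITY column.
CREATE TABLE dbo.Orders_copy
(OrderID int not null
,CustomerID int not null);

-- TABLOCK makes the INSERT eligible for parallelism (SQL Server 2016+).
INSERT INTO dbo.Orders_copy WITH (TABLOCK) (OrderID, CustomerID)
SELECT o.OrderID, o.CustomerID
FROM Sales.Orders as o;
```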

See more here

 

 

S