Bad page split

How to find and monitor it.

Page splits are common. When a page is full and an insert/update needs space that the page does not have, a new page is allocated and the page is split.

Some splits are good – the ones caused by sequential inserts, such as an identity column. A new page is simply allocated at the end of the list. Because of this, it’s a good split.

Some are not good – a mid-index insert, or an update to a full page. Here we need to add/update a record in the middle of the list, but there is no space available, so a new page is allocated at the end of the list and half of the records are moved to it. Because of this, the logical order no longer matches the physical order, and voilà: fragmentation.

When a page is split, a record with the operation LOP_DELETE_SPLIT is written to the transaction log.

To monitor this, you can read the transaction log with fn_dblog.
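For example, a quick look at splits recorded in the active log can be done like this. fn_dblog is undocumented and can be expensive on a busy log, so treat this as a diagnostic sketch only:

```sql
/* Count bad splits recorded in the active portion of the transaction log.
   fn_dblog(NULL, NULL) reads the whole active log of the current database. */
SELECT AllocUnitName, COUNT(*) AS splitCount
FROM fn_dblog(NULL, NULL)
WHERE Operation = N'LOP_DELETE_SPLIT'
GROUP BY AllocUnitName
ORDER BY splitCount DESC;
```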

A better way is to use extended events: look for the event sqlserver.transaction_log and filter on the operation LOP_DELETE_SPLIT, which is value 11:

CREATE EVENT SESSION [BadBadSplits]
ON SERVER
ADD EVENT sqlserver.transaction_log(
WHERE operation = 11 -- LOP_DELETE_SPLIT
)
ADD TARGET package0.event_file
(SET filename=N'D:\xe\badbadMyDog.xel')
GO

Start the extended event:

ALTER EVENT SESSION [BadBadSplits]
ON SERVER
STATE = start;
GO
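Once events are captured, you can read them back from the file target with the documented sys.fn_xe_file_target_read_file function. A sketch, assuming the file path from the session above:

```sql
/* Read the captured events back from the .xel file.
   Each row's event_data is XML containing the transaction_log event fields. */
SELECT CAST(event_data AS XML) AS eventXml
FROM sys.fn_xe_file_target_read_file(N'D:\xe\badbadMyDog*.xel', NULL, NULL, NULL);
```

From there you can shred the XML to group the splits per database or object.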

S

File Growth

It’s important to monitor file growth. Due to database activity, your files will accumulate records and this will trigger growth. Disk space is not expensive nowadays; nevertheless, you need to know when you are getting close to the limits of your disk space.

For this, you can use sys.master_files and/or sys.database_files (the latter is scoped to the current database).

You need some additional objects :

  • a table holding the size of each file, per day – I call it dbo.FileSize
  • a stored procedure to insert information into this utility table – I call it dbo.FileSize_add
  • additionally, a job running once a day that calls the stored procedure – this does the actual monitoring
  • a script that draws the conclusion: for example, what will the size be in 30 days?

 

DROP TABLE IF EXISTS dbo.FileSize;
GO

CREATE TABLE dbo.FileSize
(
DBName VarChar(128)
, FileLogicalName VarChar(128)
, FilePhysicalName VarChar(512)
, Size_MB DECIMAL(15,2)
, InsertedDate DATE
, CONSTRAINT pk_FileSize__DBName_FileLogicalName
PRIMARY KEY CLUSTERED (DBName,FileLogicalName,InsertedDate)
);
GO
CREATE OR ALTER PROCEDURE dbo.fileSize_add
(
@i_InsertedDate DATE
)
AS
BEGIN

INSERT INTO dbo.FileSize( DBName, FileLogicalName, FilePhysicalName, Size_MB, InsertedDate)
SELECT
DB_NAME(mf.database_id) as dbName
,mf.name as fileLogicalName
,mf.physical_name as physicalName
,(mf.size * 8 )/1024.0 as fileSize_MB /* why * 8 ? because a page is 8 KB */
,@i_InsertedDate as InsertedDate
FROM sys.master_files as mf
WHERE
mf.database_id IN( DB_ID('db_workspace') , DB_ID('WideWorldImporters'));
END
GO
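The daily job from the list above can be created with the msdb job stored procedures. A minimal sketch – the job name, schedule name, start time, and target database are my own choices:

```sql
USE msdb;
GO
/* Job that calls dbo.FileSize_add once a day with today's date. */
EXEC dbo.sp_add_job @job_name = N'Monitor_FileSize';
EXEC dbo.sp_add_jobstep
     @job_name = N'Monitor_FileSize'
    ,@step_name = N'Collect file sizes'
    ,@subsystem = N'TSQL'
    ,@database_name = N'db_workspace' /* assumption: where dbo.FileSize lives */
    ,@command = N'DECLARE @d DATE = CAST(SYSDATETIME() AS DATE);
EXEC dbo.FileSize_add @i_InsertedDate = @d;';
EXEC dbo.sp_add_schedule
     @schedule_name = N'Daily_0700'
    ,@freq_type = 4        /* daily */
    ,@freq_interval = 1
    ,@active_start_time = 070000;
EXEC dbo.sp_attach_schedule @job_name = N'Monitor_FileSize', @schedule_name = N'Daily_0700';
EXEC dbo.sp_add_jobserver @job_name = N'Monitor_FileSize'; /* target the local server */
GO
```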

 

In a next version, you can also monitor the size of your volumes:

SELECT DISTINCT
vs.volume_mount_point as volumeName
,CONVERT(INT, vs.available_bytes / 1048576.0) as freeSpace_MB
,CONVERT(INT, vs.total_bytes / 1048576.0) as TotalSize_MB
FROM sys.master_files as mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id,mf.file_id) as vs
ORDER BY volumeName

 

Next, let’s start the monitoring by calling the stored procedure and see what is in the target table:

EXEC dbo.FileSize_add @i_InsertedDate = '20180816';

SELECT * FROM dbo.FileSize

DBName FileLogicalName FilePhysicalName Size_MB InsertedDate
db_workspace db_workspace D:\SQL\Databases\DATA\db_workspace.mdf 8.00 2018-08-16
db_workspace db_workspace_log D:\SQL\Databases\DATA\db_workspace_log.ldf 24.00 2018-08-16
WideWorldImporters WWI_InMemory_Data_1 D:\SQL\Databases\DATA\WWI\WideWorldImporters_InMemory_Data_1 0.00 2018-08-16
WideWorldImporters WWI_Log D:\SQL\Databases\DATA\WWI\WideWorldImporters.ldf 100.00 2018-08-16
WideWorldImporters WWI_Primary D:\SQL\Databases\DATA\WWI\WideWorldImporters.mdf 1024.00 2018-08-16
WideWorldImporters WWI_UserData D:\SQL\Databases\DATA\WWI\WideWorldImporters_UserData.ndf 2048.00 2018-08-16

 

Step 2: Simulate some database activity:

use db_workspace
go

drop table if exists dbo.test;
create table dbo.test
( idTest int identity(1,1) not null
,cTest char(2000) not null
)
insert into dbo.Test(ctest)
select top (100000) s.name
from sys.all_columns as c
cross join sys.all_columns as s
/*(100000 rows affected)*/

 

Step 3: the next day, monitor the size again:

EXEC dbo.FileSize_add @i_InsertedDate = '20180817';

SELECT DbName,FileLogicalName,Size_MB,InsertedDate FROM dbo.FileSize WHERE InsertedDate = '20180817'

 

DbName FileLogicalName Size_MB InsertedDate
db_workspace db_workspace 200.00 2018-08-17
db_workspace db_workspace_log 344.00 2018-08-17
WideWorldImporters WWI_InMemory_Data_1 0.00 2018-08-17
WideWorldImporters WWI_Log 100.00 2018-08-17
WideWorldImporters WWI_Primary 1024.00 2018-08-17
WideWorldImporters WWI_UserData 2048.00 2018-08-17

 

Step 4: additional activity:

use db_workspace
go

drop table if exists dbo.test2;
create table dbo.test2
( idTest int identity(1,1) not null
,cTest char(2000) not null
)
insert into dbo.Test2(ctest)
select top (1000) s.name
from sys.all_columns as c
cross join sys.all_columns as s
/*(1000 rows affected)*/

 

Step 5: the next day, monitor the size again:

EXEC dbo.FileSize_add @i_InsertedDate = '20180818';

SELECT DbName,FileLogicalName,Size_MB,InsertedDate FROM dbo.FileSize WHERE InsertedDate = '20180818'

 

DbName FileLogicalName Size_MB InsertedDate
db_workspace db_workspace 264.00 2018-08-18
db_workspace db_workspace_log 344.00 2018-08-18
WideWorldImporters WWI_InMemory_Data_1 0.00 2018-08-18
WideWorldImporters WWI_Log 100.00 2018-08-18
WideWorldImporters WWI_Primary 1024.00 2018-08-18
WideWorldImporters WWI_UserData 2048.00 2018-08-18

 

Step 6: Conclusions:

  • for the data files (as opposed to the log), we need to calculate the growth from one day to the next.
  • Based on this, we calculate the average growth, the minimum growth, and the maximum growth.
  • Then look into the future: say, what the size will be in 30 days, based on the average or the maximum growth.
  • The log files (ldf) depend on the recovery model, because they are truncated by checkpoints or log backups. For them, the interesting metric is the maximum size in a day.

 

;WITH Src AS
(
SELECT FileLogicalName
,Size_MB as CurrentSize
,LAG(Size_MB,1,0) OVER(PARTITION BY DbName,FileLogicalName ORDER BY InsertedDate) AS PrevSize
,LAG(InsertedDate,1,NULL) OVER(PARTITION BY DbName,FileLogicalName ORDER BY InsertedDate) AS PrevDate

,Size_MB - LAG(Size_MB,1,0) OVER(PARTITION BY DbName,FileLogicalName ORDER BY InsertedDate) AS DeltaSize
,DATEDIFF(DAY,LAG(InsertedDate,1,NULL) OVER(PARTITION BY DbName,FileLogicalName ORDER BY InsertedDate),InsertedDate) AS DeltaDays
,DBName
,InsertedDate
,FIRST_VALUE(Size_MB) OVER(PARTITION BY DbName,FileLogicalName ORDER BY InsertedDate DESC) AS LastValue_Size
,FIRST_VALUE(InsertedDate) OVER(PARTITION BY DbName,FileLogicalName ORDER BY InsertedDate DESC) AS LastValue_Date
FROM dbo.FileSize as FS
WHERE
RIGHT(FilePhysicalName,3) <> 'ldf'
AND FileLogicalName = 'db_workspace'
)
SELECT FileLogicalName
,MAX(LastValue_Size) AS CurrentSize
,MAX(LastValue_Date) AS LastReportedDate
,CAST(SYSDATETIME() AS DATE) AS CurrentDate
,AVG(DeltaSize/NULLIF(DATEDIFF(DAY,PrevDate,InsertedDate),0)) AS AVG_DeltaSize
,MIN(DeltaSize/NULLIF(DATEDIFF(DAY,PrevDate,InsertedDate),0)) AS minDeltaPerDay
,MAX(DeltaSize/NULLIF(DATEDIFF(DAY,PrevDate,InsertedDate),0)) AS maxDeltaPerDay
,MAX(LastValue_Size) + 30* MAX(DeltaSize/NULLIF(DATEDIFF(DAY,PrevDate,InsertedDate),0)) AS Size30Days_Max
,MAX(LastValue_Size) + 30* AVG(DeltaSize/NULLIF(DATEDIFF(DAY,PrevDate,InsertedDate),0)) AS Size30Days_Avg
FROM Src
GROUP BY
FileLogicalName

 

 

FileLogicalName CurrentSize LastReportedDate CurrentDate AVG_DeltaSize minDeltaPerDay maxDeltaPerDay Size30Days_Max Size30Days_Avg
db_workspace 264.00 2018-08-18 2018-08-18 128.00 64.00 192.00 6024.00 4104.00

 

So, in 30 days, assuming the maximum growth per day, the size will be 6024.00 MB; calculating with the average growth, the size will be 4104.00 MB.

 

And now you can correlate this with the available space on the volume… and trigger some response, such as a mail alert.
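For example, assuming Database Mail is configured (the profile name, recipient, and the 10% threshold below are my own assumptions), a sketch of such an alert:

```sql
/* Alert when any volume holding a database file drops under ~10% free space. */
IF EXISTS (
    SELECT 1
    FROM sys.master_files AS mf
    CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs
    WHERE vs.available_bytes * 100.0 / vs.total_bytes < 10.0
)
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
         @profile_name = N'DBA_Mail'          /* assumption: an existing Database Mail profile */
        ,@recipients   = N'dba@example.com'   /* assumption */
        ,@subject      = N'Volume free space under 10%'
        ,@body         = N'Check sys.dm_os_volume_stats and the dbo.FileSize projections.';
END
```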

Thanks,

 

S

#t-sql #fileGrow #sqlserver #mssqlserver #sql #databaseGrow

Statistics update 4 – cached plans

The previous post was about statistics updates and cached plans. Moving on: what happens when we have modifications?

Q: – Will a modification trigger the invalidation of the existing execution plan? And what happens when we update the statistics (that have modifications)?

A: – Let’s see!

The flow :

  1. Clear the cache – DBCC FREEPROCCACHE
  2. Check the cached plans in sys.dm_exec_cached_plans and the Re-Compilations/sec counter
  3. Run the sp Website.SearchForCustomers
  4. Check the cached plan and Re-Compilations/sec
  5. Do some modifications: insert/update/delete
  6. Run the UPDATE STATISTICS statement
  7. Check the cached plan and Re-Compilations/sec
  8. Run the sp Website.SearchForCustomers again
  9. Check the cached plan and Re-Compilations/sec

Step1,2:

DBCC FreeProcCache;
go

SELECT cp.*
FROM sys.dm_exec_cached_plans as cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_db
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_obj
WHERE
pa_db.attribute ='dbid'
AND pa_db.value =DB_ID('WideWorldImporters')
AND pa_obj.attribute ='objectid'
AND pa_obj.value =OBJECT_ID('Website.SearchForCustomers')

(0 rows affected)

SELECT
counter_name,cntr_value
FROM sys.dm_os_performance_counters as pc
WHERE pc.counter_name = 'SQL Re-Compilations/sec'

SQL Re-Compilations/sec 23

 

Step3:

EXEC Website.SearchForCustomers @SearchText =N'Aly',@MaximumRowsToReturn =3

JSON_F52E2B61-18A1-11d1-B105-00805F49916B
{"Customers":[{"CustomerID":1056,"CustomerName":"Kalyani Benjaree","PhoneNumber":"(212) 555-0100","FaxNumber":"(212) 555-0101","ct":[{"CityName":"Bowmansville","p":[{"PrimaryContactFullName":"Kalyani Benjaree","PrimaryContactPreferredName":"Kalyani"}]}]}]}

 

Step4: Check the cached_plan and Re-compilations counter

SELECT cp.usecounts
,cp.plan_handle
FROM sys.dm_exec_cached_plans as cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_db
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_obj
WHERE
pa_db.attribute ='dbid'
AND pa_db.value =DB_ID('WideWorldImporters')
AND pa_obj.attribute ='objectid'
AND pa_obj.value =OBJECT_ID('Website.SearchForCustomers')

(1 row affected)

usecounts plan_handle
1 0x0500050061562F38304AC17EB501000001000000000000000000000000000000000000000000000000000000

SELECT
counter_name,cntr_value
FROM sys.dm_os_performance_counters as pc
WHERE pc.counter_name = 'SQL Re-Compilations/sec'

SQL Re-Compilations/sec 23

 

Step 5: do some modifications

BEGIN TRAN
UPDATE TOP (5000) sales.Customers
SET CustomerName = CustomerName + N'sbse'
ROLLBACK TRAN
(663 rows affected)

Step6: run update statistics

UPDATE STATISTICS Sales.Customers WITH FULLSCAN;

Commands completed successfully.

 

Step7: check cached_plan and Recompilations counter

SELECT cp.*
FROM sys.dm_exec_cached_plans as cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_db
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_obj
WHERE
pa_db.attribute ='dbid'
AND pa_db.value =DB_ID('WideWorldImporters')
AND pa_obj.attribute ='objectid'
AND pa_obj.value =OBJECT_ID('Website.SearchForCustomers')

(1 row affected)

plan_handle
0x0500050061562F38304AC17EB501000001000000000000000000000000000000000000000000000000000000

SELECT
counter_name,cntr_value
FROM sys.dm_os_performance_counters as pc
WHERE pc.counter_name = 'SQL Re-Compilations/sec'

SQL Re-Compilations/sec 23

No change in the Re-Compilations counter after the UPDATE STATISTICS statement.

 

 

Step 8: Run the sp Website.SearchForCustomers again

EXEC Website.SearchForCustomers @SearchText =N'Aly',@MaximumRowsToReturn =3
(1 row affected)

JSON_F52E2B61-18A1-11d1-B105-00805F49916B
{"Customers":[{"CustomerID":1056,"CustomerName":"Kalyani Benjaree","PhoneNumber":"(212) 555-0100","FaxNumber":"(212) 555-0101","ct":[{"CityName":"Bowmansville","p":[{"PrimaryContactFullName":"Kalyani Benjaree","PrimaryContactPreferredName":"Kalyani"}]}]}]}

Step9: check cached_plan and Re-compilations counter

SELECT cp.*
FROM sys.dm_exec_cached_plans as cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_db
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_obj
WHERE
pa_db.attribute ='dbid'
AND pa_db.value =DB_ID('WideWorldImporters')
AND pa_obj.attribute ='objectid'
AND pa_obj.value =OBJECT_ID('Website.SearchForCustomers')

(1 row affected)

bucketid refcounts usecounts size_in_bytes memory_object_address cacheobjtype objtype plan_handle pool_id parent_plan_handle
30650 2 2 303104 0x000001B572C94060 Compiled Plan Proc 0x0500050061562F38304AC17EB501000001000000000000000000000000000000000000000000000000000000 2 NULL
You can see the plan handle is 0x0500050061562F38304AC17EB501000001000000000000000000000000000000000000000000000000000000 – the same as the previous plan handle.

And see the usecounts: 2…

usecounts refcounts
2 2

SELECT
counter_name,cntr_value
FROM sys.dm_os_performance_counters as pc
WHERE pc.counter_name = 'SQL Re-Compilations/sec'

SQL Re-Compilations/sec 24

 

Conclusion:

When the UPDATE STATISTICS statement runs and there are modifications in the statistics object, the plan is NOT evicted from the cache. It can still be reused.

The change shows up in the Re-Compilations counter. And this counter is increased NOT after the UPDATE STATISTICS statement, but AFTER the next execution of the stored procedure/query.

 

S

Statistics update 3 – cached plan

What happens when you launch an UPDATE STATISTICS for a table?

Q: – Will it trigger a recompilation of the sp and then the invalidation of the existing execution plan?

A: – It depends! A plan is evicted from cache when:

  • DBCC FREEPROCCACHE, or
  • sp_recompile
  • DBCC FLUSHPROCINDB
  • changing the database compatibility level
  • changing an sp_configure server option
  • etc…

 

When there are no changes, the flow is:

update statistics —> stop.

When there are changes in the statistics:

update statistics —> plan invalidated —> recompilation.

No cached plan is visibly marked for recompilation or marked as invalid. You can use the extended event sqlserver.sql_statement_recompile, or sys.dm_os_performance_counters –> SQL Re-Compilations/sec, to see which plan is invalidated and recompiled.
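A minimal sketch of such an event session, using a ring_buffer target (the session name is my own):

```sql
/* Capture statement recompile events in memory for quick inspection. */
CREATE EVENT SESSION [WatchRecompiles]
ON SERVER
ADD EVENT sqlserver.sql_statement_recompile
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION [WatchRecompiles] ON SERVER STATE = START;
GO
```

The captured events include the cause of the recompile, so you can confirm it was a statistics change.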

  1. Clear the cache – DBCC FREEPROCCACHE
  2. Check the cached plans in sys.dm_exec_cached_plans
  3. Run the sp Website.SearchForCustomers
  4. Check the cached plan
  5. Run the UPDATE STATISTICS statement
  6. Check the cached plan
  7. Run the sp Website.SearchForCustomers again
  8. Check the cached plan, to see the plan handle
  9. Check recompilations with sys.dm_os_performance_counters

As you can see, the stored procedure is a select-only sp: no insert/update/delete. The table is not modified, and therefore the statistics objects are not modified. Key point here: the statistics objects are not modified, so there is no need for a recompilation.

Step1,2:

DBCC FreeProcCache;
go

SELECT cp.*
FROM sys.dm_exec_cached_plans as cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_db
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_obj
WHERE
pa_db.attribute ='dbid'
AND pa_db.value =DB_ID('WideWorldImporters')
AND pa_obj.attribute ='objectid'
AND pa_obj.value =OBJECT_ID('Website.SearchForCustomers')

(0 rows affected)

SELECT
counter_name,cntr_value
FROM sys.dm_os_performance_counters as pc
WHERE pc.counter_name = 'SQL Re-Compilations/sec'
or pc.counter_name = 'SQL Compilations/sec'
/*SQL Re-Compilations/sec 23 */

In my case Re-Compilations/sec = 23

 

Step3:

EXEC Website.SearchForCustomers @SearchText =N'Aly',@MaximumRowsToReturn =3

JSON_F52E2B61-18A1-11d1-B105-00805F49916B
{"Customers":[{"CustomerID":1056,"CustomerName":"Kalyani Benjaree","PhoneNumber":"(212) 555-0100","FaxNumber":"(212) 555-0101","ct":[{"CityName":"Bowmansville","p":[{"PrimaryContactFullName":"Kalyani Benjaree","PrimaryContactPreferredName":"Kalyani"}]}]}]}

Step4: Check the cached_plan

SELECT cp.*
FROM sys.dm_exec_cached_plans as cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_db
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_obj
WHERE
pa_db.attribute ='dbid'
AND pa_db.value =DB_ID('WideWorldImporters')
AND pa_obj.attribute ='objectid'
AND pa_obj.value =OBJECT_ID('Website.SearchForCustomers')

(1 row affected)

plan_handle
0x0500050061562F38304AC17EB501000001000000000000000000000000000000000000000000000000000000

Step5: run update statistics

UPDATE STATISTICS Sales.Customers WITH FULLSCAN;

Commands completed successfully.

 

Step6: check cached_plan

SELECT cp.*
FROM sys.dm_exec_cached_plans as cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_db
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_obj
WHERE
pa_db.attribute ='dbid'
AND pa_db.value =DB_ID('WideWorldImporters')
AND pa_obj.attribute ='objectid'
AND pa_obj.value =OBJECT_ID('Website.SearchForCustomers')

(1 row affected)

plan_handle
0x0500050061562F38304AC17EB501000001000000000000000000000000000000000000000000000000000000

SELECT
counter_name,cntr_value
FROM sys.dm_os_performance_counters as pc
WHERE pc.counter_name = 'SQL Re-Compilations/sec'
or pc.counter_name = 'SQL Compilations/sec'
/*SQL Re-Compilations/sec 23 */

 

 

The plan is still there.

Step 7: Run the sp Website.SearchForCustomers again

EXEC Website.SearchForCustomers @SearchText =N'Aly',@MaximumRowsToReturn =3
(1 row affected)

JSON_F52E2B61-18A1-11d1-B105-00805F49916B
{"Customers":[{"CustomerID":1056,"CustomerName":"Kalyani Benjaree","PhoneNumber":"(212) 555-0100","FaxNumber":"(212) 555-0101","ct":[{"CityName":"Bowmansville","p":[{"PrimaryContactFullName":"Kalyani Benjaree","PrimaryContactPreferredName":"Kalyani"}]}]}]}

Step8: check cached_plan

SELECT cp.*
FROM sys.dm_exec_cached_plans as cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_db
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_obj
WHERE
pa_db.attribute ='dbid'
AND pa_db.value =DB_ID('WideWorldImporters')
AND pa_obj.attribute ='objectid'
AND pa_obj.value =OBJECT_ID('Website.SearchForCustomers')

(1 row affected)

bucketid refcounts usecounts size_in_bytes memory_object_address cacheobjtype objtype plan_handle pool_id parent_plan_handle
30650 2 2 303104 0x000001B572C94060 Compiled Plan Proc 0x0500050061562F38304AC17EB501000001000000000000000000000000000000000000000000000000000000 2 NULL
You can see the plan handle is 0x0500050061562F38304AC17EB501000001000000000000000000000000000000000000000000000000000000 – the same as the previous plan handle.

And see the usecounts: 2…

usecounts refcounts
2 2

Because there were no modifications to the statistics, the plan is not invalidated.

Step9:

SELECT *
  FROM sys.dm_os_performance_counters as pc
  WHERE counter_name = 'SQL Re-Compilations/sec'
/*SQL Re-Compilations/sec 23 */

Next post: when we have modifications, what happens with recompilation and the cached plan?

 

#tsql #t-sql #sqlserver #msSqlServer #Recompilation #statistics #update

S

Statistics update 2

The previous post about statistics talked about updating them based on some value stored in a utility table.

The present post also refers to updating statistics, but now based on a percentage value, a threshold. Maybe this is more flexible. This value could also live in a utility table; it’s up to anyone how to store it.

In my case, I use a value of 5.0, so that I do not update all objects, only some of them. As I said, this value could be extracted from a utility table, and could be specific to a particular statistics object (stats_id), an object (object_id), or a database…

 

The starting point is the calculation of the percentage value :  sp.modification_counter/sp.rows

where :

  • sp.modification_counter = the current number of modifications
  • sp.rows = the number of rows in that object

The actual formula is :

CAST((sp.modification_counter/(sp.rows * 1.0)) *100.0  AS decimal(6,4))

 

This query lists all the statistics of the object:

SELECT
OBJECT_SCHEMA_NAME(s.object_id)
+ '.' + OBJECT_NAME(s.object_id) as objectName
, s.name as statisticsName
,s.object_id as objectID
,s.stats_id as statsID
,sp.modification_counter
,sp.rows as noRows
,CAST((sp.modification_counter/(sp.rows *1.0)) *100.0 AS decimal(6,4)) as modification_Percent
,sp.last_updated
,sc.colNames
FROM sys.stats as s
CROSS APPLY sys.dm_db_stats_properties(s.object_id,s.stats_id) as sp
CROSS APPLY(SELECT STRING_AGG(c.name,',') as colNames
FROM sys.stats_columns as sc
INNER JOIN sys.columns as c
ON sc.column_id = c.column_id
and sc.object_id = c.object_id
WHERE sc.object_id = s.object_id
and sc.stats_id = s.stats_id
) as sc
WHERE s.object_id = OBJECT_ID('Sales.Orders')

 

objectName statisticsName objectID statsID modification_counter noRows modification_Percent last_updated colNames
Sales.Orders PK_Sales_Orders 1154103152 1 3966 69629 5.6959 2016-06-02 10:39:54.2533333 OrderID
Sales.Orders FK_Sales_Orders_CustomerID 1154103152 2 0 73595 0.0000 2018-07-25 10:52:18.0733333 CustomerID
Sales.Orders FK_Sales_Orders_SalespersonPersonID 1154103152 3 0 73595 0.0000 2018-07-25 10:52:18.4233333 SalespersonPersonID
Sales.Orders FK_Sales_Orders_PickedByPersonID 1154103152 4 4751 71018 6.6899 2016-06-02 10:43:25.6400000 PickedByPersonID
Sales.Orders FK_Sales_Orders_ContactPersonID 1154103152 5 2012 71583 2.8107 2016-06-02 10:45:05.4033333 ContactPersonID
Sales.Orders _WA_Sys_0000000E_44CA3770 1154103152 6 0 73595 0.0000 2018-07-25 10:53:22.7400000 PickingCompletedWhen
Sales.Orders _WA_Sys_0000000A_44CA3770 1154103152 7 3495 70100 4.9857 2016-06-02 10:41:14.3700000 IsUndersupplyBackordered

Adding an additional predicate on the percentage…

WHERE s.object_id = OBJECT_ID('Sales.Orders')
AND CAST((sp.modification_counter/(sp.rows *1.0)) *100.0 AS decimal(6,4)) > CAST(5.0 AS decimal(6,4))

will output these records:

objectName statisticsName modification_counter noRows modification_Percent
Sales.Orders PK_Sales_Orders 3966 69629 5.6959
Sales.Orders FK_Sales_Orders_PickedByPersonID 4751 71018 6.6899

Now, just prepare the dynamic SQL for the update:

 
DECLARE @nvc_SQLText AS nvarchar(4000) = N'';

SELECT
@nvc_SQLText = STRING_AGG(N'UPDATE STATISTICS '
+ OBJECT_SCHEMA_NAME(s.object_id)
+ N'.' + OBJECT_NAME(s.object_id)
+ N'(' + s.name + N')'
+ N' WITH FULLSCAN;' + char(10) ,N'')
FROM sys.stats as s
CROSS APPLY sys.dm_db_stats_properties(s.object_id,s.stats_id) as sp
WHERE s.object_id = OBJECT_ID('Sales.Orders')
AND CAST((sp.modification_counter/(sp.rows *1.0)) *100.0 AS decimal(6,4)) > CAST(5.0 AS decimal(6,4))

SELECT @nvc_SQLText

 

output:

UPDATE STATISTICS Sales.Orders(PK_Sales_Orders) WITH FULLSCAN;
UPDATE STATISTICS Sales.Orders(FK_Sales_Orders_PickedByPersonID) WITH FULLSCAN;

 

and run it :

EXEC sp_executeSQL @nvc_SQLText;

 

 

Now, check whether you still have objects over the threshold:

 
SELECT
OBJECT_SCHEMA_NAME(s.object_id)
+ '.' + OBJECT_NAME(s.object_id) as objectName
,s.name as statisticsName
,sp.modification_counter
,sp.rows as noRows ,CAST((sp.modification_counter/(sp.rows *1.0)) *100.0 AS decimal(6,4)) as modification_Percent
FROM sys.stats as s
CROSS APPLY sys.dm_db_stats_properties(s.object_id,s.stats_id) as sp
CROSS APPLY(SELECT STRING_AGG(c.name,',') as colNames
FROM sys.stats_columns as sc
INNER JOIN sys.columns as c
ON sc.column_id = c.column_id
and sc.object_id = c.object_id
WHERE sc.object_id = s.object_id
and sc.stats_id = s.stats_id
) as sc

WHERE s.object_id = OBJECT_ID('Sales.Orders')
AND CAST((sp.modification_counter/(sp.rows *1.0)) *100.0 AS decimal(6,4))
> CAST(5.0 AS decimal(6,4))

 

output :

No rows.

#t-sql #sqlserver #statistics #update

 

 

S