What configuration I like to change/set in SSMS:

S
Do you recognize this message?
Now find the problem/column/value where this error was raised. It's like searching for a needle in a haystack…
Now, new in SQL Server 2019, this message will be replaced (hip hip hurrah!) with a new message:
It will also be available in:
but not by default. (To activate it in those versions, you need to use trace flag 460.)
For the moment, in SQL Server 2019, you still need TF 460 to activate it, but in the next release of SQL Server 2019 it will be activated by default.
Enough with message ID 8152; the new message ID is 2628.
I'm very happy about this new feature.
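For illustration, a minimal sketch (table name and value are hypothetical) of a statement that raises the truncation error:

```sql
-- Hypothetical demo: a column too small for the inserted value
CREATE TABLE dbo.TruncDemo (colShort VARCHAR(5) NOT NULL);

-- Old behavior: Msg 8152 - "String or binary data would be truncated."
-- New behavior (TF 460 / SQL Server 2019): Msg 2628 - names the table,
-- the column and the truncated value, so no more needle in the haystack.
INSERT INTO dbo.TruncDemo (colShort) VALUES ('a value longer than 5 chars');
```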
S
Recently, I started to be involved in a Dynamics AX project.
For the moment, I discovered the following:
– Dynamics AX is not using the full power of SQL Server;
– SQL Server is used only for storage; for storage, you could use an Excel spreadsheet…
– it is not OLTP, more like OLAP;
– normalization? what is this?
– referential integrity? trusted relationships? what are these?
Before this, there was an internally developed ERP that was doing the job. The final users grumbled a little, because they always threw stuff at the developers, who were inside the company. After the new implementation, nothing was working; even now, after 3 years, there are problems, and the speed is only slow and super slow. Now they say: "how good was the previous ERP…"
I don't know on what basis management made the deci$$$ion to implement that solution, but they definitely took a wrong decision and don't have the power to admit it…
S
If you have a fill factor of 70, what will be the effect, good or bad?
For this I will have:
– a table with fill factor 70 – Table70
– a table with fill factor 100 – Table100
Each table will have a clustered index on an identity column.
Then insert records so that we have about 1000 pages:
4 records per page * 1000 pages = 4000 records need to be inserted.
Check the pages for both tables.
Do a rebuild with the right fill factor.
Check the level of fragmentation.
use db_workspace
go
DROP TABLE IF EXISTS dbo.Table70;
CREATE TABLE dbo.Table70
(
idTable INT IDENTITY(1,1) NOT NULL
,colText CHAR(2000) NOT NULL DEFAULT ' 70 table some text 2000 char'
,CONSTRAINT PK_Table70_idTable PRIMARY KEY CLUSTERED(idTable)
);
DROP TABLE IF EXISTS dbo.Table100;
CREATE TABLE dbo.Table100
(
idTable INT IDENTITY(1,1) NOT NULL
,colText CHAR(2000) NOT NULL DEFAULT ' 100 table some text 2000 char'
,CONSTRAINT PK_Table100_idTable PRIMARY KEY CLUSTERED(idTable)
);
GO
INSERT INTO dbo.Table70 DEFAULT VALUES;
GO 4000
INSERT INTO dbo.Table100 DEFAULT VALUES;
GO 4000
Check the number of pages:
SELECT COUNT(DISTINCT CA.page_id) as distinct_count
FROM dbo.Table70
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) AS CA;
--1000 distinct pages

SELECT COUNT(DISTINCT CA.page_id) as distinct_count
FROM dbo.Table100
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) AS CA;
--1000 distinct pages
Do the rebuild with the fill factor:
ALTER INDEX PK_Table70_idTable ON dbo.Table70
REBUILD WITH (FILLFACTOR = 70);
ALTER INDEX PK_Table100_idTable ON dbo.Table100
REBUILD WITH (FILLFACTOR = 100);
Check pages:
SELECT COUNT(DISTINCT CA.page_id) as distinct_count
FROM dbo.Table70
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) AS CA;
--1334 distinct pages

SELECT COUNT(DISTINCT CA.page_id) as distinct_count
FROM dbo.Table100
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) AS CA;
--1000 distinct pages
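The last step from the list, checking the level of fragmentation, can be sketched with sys.dm_db_index_physical_stats (a sketch, assuming the two tables above):

```sql
SELECT OBJECT_NAME(ips.object_id) AS tableName
      ,ips.index_level
      ,ips.avg_fragmentation_in_percent
      ,ips.avg_page_space_used_in_percent
      ,ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Table70'), NULL, NULL, 'DETAILED') AS ips;
-- run the same for dbo.Table100 and compare avg_page_space_used_in_percent (~70% vs ~100%)
```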
So we have more pages in Table70, because of fill factor = 70. This will trigger more logical reads, more memory…
A plus of a fill factor lower than 100, depending on your situation/table/activity, is that it will not generate an expensive page split in case of an update/insert.
In this case, new records are always added at the end of the table, like an append. For updates it is also not a problem, because you have the CHAR data type, and it is fully allocated.
So, in such a case, Table100 is the better option.
In other cases, analyze it: check the clustered index and the kind of DML activity, monitor page splits, and finally hire a good DBA.
S
If you are sending requests to the server in the form of plain queries (not calls to a stored procedure), these are called ad hoc queries, and you are running an ad hoc workload.
It is important to keep this under control, so that your plan cache will not grow out of control. So you have to check the plan cache daily, and maybe hourly, depending on your system.
For this, you need some objects.
This table keeps the list of counters:
IF OBJECT_ID('dbo._Counters','U') IS NULL
BEGIN
CREATE TABLE dbo._Counters
(
cnt_ID INT NOT NULL IDENTITY(1,1)
,cnt_Name VARCHAR(50) NOT NULL
,cnt_Description VARCHAR(200) NOT NULL
,CONSTRAINT pk__Counters_cntID PRIMARY KEY CLUSTERED(cnt_ID)
);
END
The next two tables keep the raw data.
IF OBJECT_ID('dbo.Monitoring_Header','U') IS NULL
BEGIN
CREATE TABLE dbo.Monitoring_Header
(
mnh_ID INT NOT NULL IDENTITY(1,1)
,mnh_Date DATE NOT NULL
,mnh_InsertedDate DATETIME2(0) NOT NULL DEFAULT SYSDATETIME()
,CONSTRAINT pk_Monitoring_Header_mnhID PRIMARY KEY CLUSTERED(mnh_ID)
--,CONSTRAINT df_Monitoring_Header_InsertedDate DEFAULT GETDATE() FOR mnh_InsertedDate
);
END

IF OBJECT_ID('dbo.Monitoring_Details','U') IS NULL
BEGIN
CREATE TABLE dbo.Monitoring_Details
(
mnd_ID INT NOT NULL IDENTITY(1,1)
,mnd_mnh_ID INT NOT NULL
,mnd_cnt_ID INT NOT NULL
,mnd_dbID SMALLINT NOT NULL
,mnd_Value DECIMAL(16,2) NOT NULL
,CONSTRAINT pk_Monitoring_Details_mndID_cntID_dbName PRIMARY KEY NONCLUSTERED(mnd_ID)
,CONSTRAINT fk_Monitoring_Details_Header_mnh_ID
FOREIGN KEY ( mnd_mnh_id)
REFERENCES dbo.Monitoring_Header
,CONSTRAINT fk_Monitoring_Details__Counters_cntID
FOREIGN KEY (mnd_cnt_id)
REFERENCES dbo._Counters
,CONSTRAINT UQ_C_Monitoring_Details_mnhID_cntID_dbID UNIQUE CLUSTERED
(mnd_mnh_id
,mnd_cnt_id
,mnd_dbID)
);
END
The next 2 tables are aggregations, to better understand the trends.
One is the aggregation by DAY.
IF OBJECT_ID('dbo.Monitoring_AGG_ByDay','U') IS NULL
BEGIN
CREATE TABLE dbo.Monitoring_AGG_ByDay
(
mad_ID INT NOT NULL IDENTITY(1,1)
,mad_Date DATE NOT NULL
,mad_cnt_ID INT NOT NULL
,mad_dbID SMALLINT NOT NULL
,mad_MinValue DECIMAL(16,2) NOT NULL
,mad_MaxValue DECIMAL(16,2) NOT NULL
,mad_AvgValue DECIMAL(16,2) NOT NULL
,CONSTRAINT pk_Monitoring_AGG_ByDay_madID PRIMARY KEY NONCLUSTERED(mad_ID)
,CONSTRAINT fk_Monitoring_AGG_ByDay__Counters_cntID
FOREIGN KEY (mad_cnt_ID)
REFERENCES dbo._Counters
,CONSTRAINT UQ_C_Monitoring_AGG_ByDay_mnhID_cntID_dbID UNIQUE CLUSTERED
(mad_Date
,mad_cnt_ID
,mad_dbID)
);
END
And this one by month:
IF OBJECT_ID('dbo.Monitoring_AGG_ByMonth','U') IS NULL
BEGIN
CREATE TABLE dbo.Monitoring_AGG_ByMonth
(
mam_ID INT NOT NULL IDENTITY(1,1)
,mam_Date DATE NOT NULL
,mam_cnt_ID INT NOT NULL
,mam_dbID SMALLINT NOT NULL
,mam_MinValue DECIMAL(16,2) NOT NULL
,mam_MaxValue DECIMAL(16,2) NOT NULL
,mam_AvgValue DECIMAL(16,2) NOT NULL
,CONSTRAINT pk_Monitoring_AGG_ByMonth_mamID PRIMARY KEY NONCLUSTERED(mam_ID)
,CONSTRAINT fk_Monitoring_AGG_ByMonth__Counters_cntID
FOREIGN KEY (mam_cnt_ID)
REFERENCES dbo._Counters
,CONSTRAINT UQ_C_Monitoring_AGG_ByMonth_mnhID_cntID_dbID UNIQUE CLUSTERED
(mam_Date
,mam_cnt_ID
,mam_dbID)
);
END
INSERT INTO dbo._Counters(cnt_Name,cnt_Description)
VALUES('adHoc cached plan size','ad hoc size in the plan cache');
For this, you need some stored procedures, which are good candidates to be placed inside jobs.
One stored procedure populates the raw Header and Details tables:
CREATE OR ALTER PROCEDURE dbo.Monitoring_adHoc
AS
BEGIN
SET NOCOUNT ON;
DECLARE @d_date AS DATE = CAST(SYSDATETIME() AS DATE)
,@i_mnhID INT = 0
,@i_cntID INT = 1; -- tbl dbo._Counters - adHoc size
BEGIN TRY
BEGIN TRANSACTION;
INSERT INTO dbo.Monitoring_Header(mnh_Date)
VALUES(@d_date);
SELECT @i_mnhID = SCOPE_IDENTITY();

INSERT INTO dbo.Monitoring_Details(mnd_mnh_ID,mnd_cnt_ID,mnd_dbID,mnd_Value)
SELECT @i_mnhID, @i_cntID, st.dbid, SUM(CAST(cp.size_in_bytes AS BIGINT))/1024/1024
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE
cp.cacheobjtype = N'Compiled Plan'
AND cp.objtype IN(N'Adhoc', N'Prepared')
AND cp.usecounts = 1
AND st.dbid IS NOT NULL /* mnd_dbID is NOT NULL */
GROUP BY st.dbid
OPTION (RECOMPILE);
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
/* you can log to a logTable… */
END CATCH
END
A stored procedure for doing the aggregation by day and populating the right table:
/* grouping information from Monitoring and inserting into the AGG_ByDay table for the previous day */
CREATE OR ALTER PROCEDURE dbo.Monitoring_AGG_ByDay
AS
BEGIN
INSERT INTO dbo.Monitoring_AGG_ByDay(mad_Date,mad_cnt_ID,mad_dbID,mad_MinValue,mad_MaxValue,mad_AvgValue)
SELECT mh.mnh_Date
,md.mnd_cnt_ID
,md.mnd_dbID
,MIN(mnd_Value),MAX(mnd_Value),AVG(mnd_Value)
FROM
dbo.Monitoring_Header AS mh
INNER JOIN dbo.Monitoring_Details AS md
ON mh.mnh_id = md.mnd_mnh_id
WHERE mh.mnh_Date >= CAST(DATEADD(DAY,-1,GETDATE()) AS DATE)
AND mh.mnh_Date < CAST(GETDATE()AS DATE)
GROUP BY mh.mnh_Date,md.mnd_cnt_ID,md.mnd_dbID
END
A stored procedure for the aggregation by month:
CREATE OR ALTER PROCEDURE dbo.Monitoring_AGG_ByMonth
AS
BEGIN
INSERT INTO dbo.Monitoring_AGG_ByMonth(mam_Date,mam_cnt_ID,mam_dbID,mam_MinValue,mam_MaxValue,mam_AvgValue)
SELECT CAST(DATEADD(MONTH,DATEDIFF(MONTH,0,mh.mnh_Date),0) AS DATE) -- first day of the record's month
,md.mnd_cnt_ID
,md.mnd_dbID
,MIN(mnd_Value),MAX(mnd_Value),AVG(mnd_Value)
FROM
dbo.Monitoring_Header AS mh
INNER JOIN dbo.Monitoring_Details AS md
ON mh.mnh_id = md.mnd_mnh_id
WHERE mh.mnh_Date >= CAST(DATEADD(MONTH,DATEDIFF(MONTH,0,GETDATE())-1,0) AS DATE) -- previous month
AND mh.mnh_Date < CAST(DATEADD(MONTH,DATEDIFF(MONTH,0,GETDATE()),0) AS DATE) -- current month
GROUP BY CAST(DATEADD(MONTH,DATEDIFF(MONTH,0,mh.mnh_Date),0) AS DATE),md.mnd_cnt_ID,md.mnd_dbID
END
And for old records in the raw tables, you may need to delete/purge from Details and Header.
CREATE OR ALTER PROCEDURE dbo.Monitoring_Purge_old_records
AS
BEGIN
SET NOCOUNT ON;
DECLARE @i_rc AS INT = 0
,@i_j AS INT = 0;
BEGIN TRY
WHILE (1=1)
BEGIN
IF @@TRANCOUNT = 0
BEGIN TRANSACTION;
DELETE TOP(100) md
FROM
dbo.Monitoring_Details as md
INNER JOIN dbo.Monitoring_Header as mh
ON md.mnd_mnh_ID = mh.mnh_ID
WHERE
mh.mnh_Date < CAST(DATEADD(DAY,-92,GETDATE()) AS DATE);
SET @i_rc = @@ROWCOUNT;
SET @i_j = @i_j + 1;
IF @i_j = 10 /* commit every 10 batches = less log activity */
BEGIN
SET @i_j = 0;
COMMIT TRANSACTION;
END
IF @i_rc < 100 BREAK;
END /* from while */
IF @@TRANCOUNT > 0
COMMIT TRANSACTION; /* commit the last, still open, batch */
DELETE mh
FROM dbo.Monitoring_Header AS mh
WHERE mh.mnh_Date < CAST(DATEADD(DAY,-92,GETDATE()) AS DATE);
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
/* you can log to a logTable… */
END CATCH
END
Now place each stored procedure in an appropriate job, and start monitoring.
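As a sketch (job name and step details are hypothetical), the daily collector can be wrapped in a SQL Server Agent job:

```sql
USE msdb;
GO
-- hypothetical job name; one step that calls the collector proc
EXEC dbo.sp_add_job       @job_name = N'Monitoring - adHoc plan cache';
EXEC dbo.sp_add_jobstep   @job_name = N'Monitoring - adHoc plan cache'
                         ,@step_name = N'collect'
                         ,@subsystem = N'TSQL'
                         ,@command = N'EXEC dbo.Monitoring_adHoc;'
                         ,@database_name = N'db_workspace';  -- your monitoring database
EXEC dbo.sp_add_jobserver @job_name = N'Monitoring - adHoc plan cache';
-- add a daily (or hourly) schedule with sp_add_jobschedule, depending on your system
```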
S
New in SQL Server 2017, with CU10, we have new hints related to the cardinality estimator & compatibility level:
USE HINT(QUERY_OPTIMIZER_COMPATIBILITY_LEVEL_XXX)
where XXX can be any supported compatibility level, for example:
SELECT TOP(10) *
FROM TableAny
OPTION(USE HINT(QUERY_OPTIMIZER_COMPATIBILITY_LEVEL_120));
For more info, please read the official documentation.
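To see which values of XXX your instance actually supports, you can list the valid USE HINT names (a sketch):

```sql
SELECT name
FROM sys.dm_exec_valid_use_hints
WHERE name LIKE N'QUERY_OPTIMIZER_COMPATIBILITY_LEVEL%';
```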
S
How to find and monitor page splits.
Page splits are something common. When a page is full and we want to insert/update and there is no space, a new page is allocated (split).
Some splits are good – the ones from inserting in a sequential way, like an identity. Here a new page is allocated at the end of the list; because of this, it's a good split.
Some are not good – like a mid-list insert or an update to a full page. Here we need to add/update the record in the middle of the list, but there is no available space. So we allocate a new page and move half of the records to it. Because of this, the hierarchy is in disorder and voilà: fragmentation.
When a page is split, a record is inserted into the transaction log: LOP_DELETE_SPLIT.
To monitor this, you can search/read the transaction log with fn_dblog.
A better way is to use extended events and look for the event sqlserver.transaction_log, filtered on operation LOP_DELETE_SPLIT, which is value = 11:
CREATE EVENT SESSION [BadBadSplits]
ON SERVER
ADD EVENT sqlserver.transaction_log(
WHERE operation = 11 — LOP_DELETE_SPLIT
)
ADD TARGET package0.event_file
(SET filename=N'D:\xe\badbadMyDog.xel')
GO
Start the extended event:
ALTER EVENT SESSION [BadBadSplits]
ON SERVER
STATE = start;
GO
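Later, the captured events can be read back from the target file, for example (file path as above; the XML shredding is a sketch):

```sql
SELECT x.event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time
      ,x.event_data.value('(event/data[@name="database_id"]/value)[1]', 'int') AS database_id
FROM
(
    SELECT CAST(f.event_data AS XML) AS event_data
    FROM sys.fn_xe_file_target_read_file(N'D:\xe\badbadMyDog*.xel', NULL, NULL, NULL) AS f
) AS x;
```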
S
It’s important to monitor the file grow. Due to database activity, your files will accumulate records and this will trigger a grow. Now disk space is not expensive , never the less, you need to know when you are close to the limits of your disk space.
For this , you can use sys.master_files and/or sys.database_files (this is for the current database).
You need some additional objects :
DROP TABLE IF EXISTS dbo.FileSize;
GO
CREATE TABLE dbo.FileSize
(
DBName VarChar(128)
, FileLogicalName VarChar(128)
, FilePhysicalName VarChar(512)
, Size_MB DECIMAL(15,2)
, InsertedDate DATE
, CONSTRAINT pk_FileSize__DBName_FileLogicalName
PRIMARY KEY CLUSTERED (DBName,FileLogicalName,InsertedDate)
);
GO
CREATE OR ALTER PROCEDURE dbo.fileSize_add
(
@i_InsertedDate DATE
)
AS
BEGIN
INSERT INTO dbo.FileSize( DBName, FileLogicalName, FilePhysicalName, Size_MB, InsertedDate)
SELECT
DB_NAME(mf.database_id) as dbName
,mf.name as fileLogicalName
,mf.physical_name as physicalName
,(mf.size * 8 )/1024.0 as fileSize_MB /* why * 8 ? because 8kb is for a page */
,@i_InsertedDate as InsertedDate
FROM sys.master_files as mf
WHERE
mf.database_id IN( DB_ID('db_workspace') , DB_ID('WideWorldImporters'));
END
GO
In a next version, you can also monitor the size of your volumes:
SELECT DISTINCT
vs.volume_mount_point as volumeName
,CONVERT(INT, vs.available_bytes / 1048576.0) as freeSpace_MB
,CONVERT(INT, vs.total_bytes / 1048576.0) as TotalSize_MB
FROM sys.master_files as mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id,mf.file_id) as vs
ORDER BY volumeName
Next, let's start the monitoring by calling the sp and see what is in the target table:
EXEC dbo.FileSize_add @i_InsertedDate = '20180816';
SELECT * FROM dbo.FileSize;
DBName FileLogicalName FilePhysicalName Size_MB InsertedDate
db_workspace db_workspace D:\SQL\Databases\DATA\db_workspace.mdf 8.00 2018-08-16
db_workspace db_workspace_log D:\SQL\Databases\DATA\db_workspace_log.ldf 24.00 2018-08-16
WideWorldImporters WWI_InMemory_Data_1 D:\SQL\Databases\DATA\WWI\WideWorldImporters_InMemory_Data_1 0.00 2018-08-16
WideWorldImporters WWI_Log D:\SQL\Databases\DATA\WWI\WideWorldImporters.ldf 100.00 2018-08-16
WideWorldImporters WWI_Primary D:\SQL\Databases\DATA\WWI\WideWorldImporters.mdf 1024.00 2018-08-16
WideWorldImporters WWI_UserData D:\SQL\Databases\DATA\WWI\WideWorldImporters_UserData.ndf 2048.00 2018-08-16
Step 2: simulate some database activity:
use db_workspace
go
drop table if exists dbo.test;
create table dbo.test
( idTest int identity(1,1) not null
,cTest char(2000) not null
)
insert into dbo.Test(ctest)
select top (100000) s.name
from sys.all_columns as c
cross join sys.all_columns as s
/*(100000 rows affected)*/
Step 3: the next day, monitor the size again:
EXEC dbo.FileSize_add @i_InsertedDate = '20180817';
SELECT DbName,FileLogicalName,Size_MB,InsertedDate FROM dbo.FileSize WHERE InsertedDate = '20180817';
DbName FileLogicalName Size_MB InsertedDate
db_workspace db_workspace 200.00 2018-08-17
db_workspace db_workspace_log 344.00 2018-08-17
WideWorldImporters WWI_InMemory_Data_1 0.00 2018-08-17
WideWorldImporters WWI_Log 100.00 2018-08-17
WideWorldImporters WWI_Primary 1024.00 2018-08-17
WideWorldImporters WWI_UserData 2048.00 2018-08-17
Step 4: additional activity:
use db_workspace
go
drop table if exists dbo.test2;
create table dbo.test2
( idTest int identity(1,1) not null
,cTest char(2000) not null
)
insert into dbo.Test2(ctest)
select top (1000) s.name
from sys.all_columns as c
cross join sys.all_columns as s
/*(1000 rows affected)*/
Step 5: the next day, monitor the size again:
EXEC dbo.FileSize_add @i_InsertedDate = '20180818';
SELECT DbName,FileLogicalName,Size_MB,InsertedDate FROM dbo.FileSize WHERE InsertedDate = '20180818';
DbName FileLogicalName Size_MB InsertedDate
db_workspace db_workspace 264.00 2018-08-18
db_workspace db_workspace_log 344.00 2018-08-18
WideWorldImporters WWI_InMemory_Data_1 0.00 2018-08-18
WideWorldImporters WWI_Log 100.00 2018-08-18
WideWorldImporters WWI_Primary 1024.00 2018-08-18
WideWorldImporters WWI_UserData 2048.00 2018-08-18
Step 6: conclusions:
;WITH Src AS
(
SELECT FileLogicalName
,Size_MB as CurrentSize
,LAG(Size_MB,1,0) OVER(PARTITION BY DbName,FileLogicalName ORDER BY InsertedDate) AS PrevSize
,LAG(InsertedDate,1,NULL) OVER(PARTITION BY DbName,FileLogicalName ORDER BY InsertedDate) AS PrevDate
,Size_MB - LAG(Size_MB,1,0) OVER(PARTITION BY DbName,FileLogicalName ORDER BY InsertedDate) AS DeltaSize
,DATEDIFF(DAY,LAG(InsertedDate,1,NULL) OVER(PARTITION BY DbName,FileLogicalName ORDER BY InsertedDate),InsertedDate) AS DeltaDays
,DBName
,InsertedDate
,FIRST_VALUE(Size_MB) OVER(PARTITION BY DbName,FileLogicalName ORDER BY InsertedDate DESC) AS LastValue_Size
,FIRST_VALUE(InsertedDate) OVER(PARTITION BY DbName,FileLogicalName ORDER BY InsertedDate DESC) AS LastValue_Date
FROM dbo.FileSize as FS
WHERE
RIGHT(FilePhysicalName,3) <> 'ldf'
AND FileLogicalName = 'db_workspace'
)
SELECT FileLogicalName
,MAX(LastValue_Size) AS CurrentSize
,MAX(LastValue_Date) AS LastReportedDate
,CAST(SYSDATETIME() AS DATE) AS CurrentDate
,AVG(DeltaSize/NULLIF(DATEDIFF(DAY,PrevDate,InsertedDate),0)) AS AVG_DeltaSize
,MIN(DeltaSize/NULLIF(DATEDIFF(DAY,PrevDate,InsertedDate),0)) AS minDeltaPerDay
,MAX(DeltaSize/NULLIF(DATEDIFF(DAY,PrevDate,InsertedDate),0)) AS maxDeltaPerDay
,MAX(LastValue_Size) + 30* MAX(DeltaSize/NULLIF(DATEDIFF(DAY,PrevDate,InsertedDate),0)) AS Size30Days_Max
,MAX(LastValue_Size) + 30* AVG(DeltaSize/NULLIF(DATEDIFF(DAY,PrevDate,InsertedDate),0)) AS Size30Days_Avg
FROM Src
GROUP BY
FileLogicalName
FileLogicalName CurrentSize LastReportedDate CurrentDate AVG_DeltaSize minDeltaPerDay maxDeltaPerDay Size30Days_Max Size30Days_Avg
db_workspace 264.00 2018-08-18 2018-08-18 128.00 64.00 192.00 6024.00 4104.00
So in 30 days, using the maximum growth in a day, the size will be 6024.00 MB; calculating with the average growth, the size will be 4104.00 MB.
And now you can correlate this with the available space on the volume… and trigger some response, some mail…
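That correlation can be sketched like this (the threshold and alerting are hypothetical; Size30Days_Max comes from the conclusion query above):

```sql
DECLARE @projected_MB DECIMAL(15,2) = 6024.00; -- e.g. Size30Days_Max from the conclusion query
SELECT DISTINCT
     vs.volume_mount_point AS volumeName
    ,CONVERT(INT, vs.available_bytes / 1048576.0) AS freeSpace_MB
    ,CASE WHEN @projected_MB > vs.available_bytes / 1048576.0
          THEN 'ALERT: projected size exceeds free space'  -- here you could send the mail
          ELSE 'OK' END AS growthStatus
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;
```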
Thanks,
S
#t-sql #fileGrow #sqlserver #mssqlserver #sql #databaseGrow
The previous post was about statistics updates and cached plans. Moving on: what happens when we do have modifications?
Q: Will a modification trigger the invalidation of the existing execution plan, and what happens when we update the statistics (that have modifications)?
A: Let's see!
The flow:
Steps 1, 2:
DBCC FreeProcCache;
go
SELECT cp.*
FROM sys.dm_exec_cached_plans as cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_db
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_obj
WHERE
pa_db.attribute = 'dbid'
AND pa_db.value = DB_ID('WideWorldImporters')
AND pa_obj.attribute = 'objectid'
AND pa_obj.value = OBJECT_ID('Website.SearchForCustomers')
(0 rows affected)
SELECT
counter_name,cntr_value
FROM sys.dm_os_performance_counters as pc
WHERE pc.counter_name = 'SQL Re-Compilations/sec'
SQL Re-Compilations/sec 23
Step3:
EXEC Website.SearchForCustomers @SearchText = N'Aly', @MaximumRowsToReturn = 3
JSON_F52E2B61-18A1-11d1-B105-00805F49916B
{"Customers":[{"CustomerID":1056,"CustomerName":"Kalyani Benjaree","PhoneNumber":"(212) 555-0100","FaxNumber":"(212) 555-0101","ct":[{"CityName":"Bowmansville","p":[{"PrimaryContactFullName":"Kalyani Benjaree","PrimaryContactPreferredName":"Kalyani"}]}]}]}
Step4: Check the cached_plan and Re-compilations counter
SELECT cp.usecounts
,cp.plan_handle
FROM sys.dm_exec_cached_plans as cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_db
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_obj
WHERE
pa_db.attribute = 'dbid'
AND pa_db.value = DB_ID('WideWorldImporters')
AND pa_obj.attribute = 'objectid'
AND pa_obj.value = OBJECT_ID('Website.SearchForCustomers')
(1 row affected)
usecounts plan_handle
1 0x0500050061562F38304AC17EB501000001000000000000000000000000000000000000000000000000000000
SELECT
counter_name,cntr_value
FROM sys.dm_os_performance_counters as pc
WHERE pc.counter_name = 'SQL Re-Compilations/sec'
SQL Re-Compilations/sec 23
Step 5: do a modification:
BEGIN TRAN
UPDATE TOP (5000) Sales.Customers
SET CustomerName = CustomerName + N'sbse'
ROLLBACK TRAN
(663 rows affected)
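As a side check (a sketch, using the sys.dm_db_stats_properties DMF), you can look at the modification counter of the statistics on Sales.Customers; note that even rolled-back modifications still count:

```sql
SELECT s.name AS stats_name
      ,sp.last_updated
      ,sp.rows
      ,sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('Sales.Customers');
```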
Step6: run update statistics
UPDATE STATISTICS Sales.Customers WITH FULLSCAN;
Commands completed successfully.
Step7: check cached_plan and Recompilations counter
SELECT cp.*
FROM sys.dm_exec_cached_plans as cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_db
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_obj
WHERE
pa_db.attribute = 'dbid'
AND pa_db.value = DB_ID('WideWorldImporters')
AND pa_obj.attribute = 'objectid'
AND pa_obj.value = OBJECT_ID('Website.SearchForCustomers')
(1 row affected)
plan_handle
0x0500050061562F38304AC17EB501000001000000000000000000000000000000000000000000000000000000
SELECT
counter_name,cntr_value
FROM sys.dm_os_performance_counters as pc
WHERE pc.counter_name = 'SQL Re-Compilations/sec'
SQL Re-Compilations/sec 23
No changes in the Re-Compilations counter after the update statistics statement.
Step 8: run the sp again = Website.SearchForCustomers:
EXEC Website.SearchForCustomers @SearchText = N'Aly', @MaximumRowsToReturn = 3
(1 row affected)
JSON_F52E2B61-18A1-11d1-B105-00805F49916B
{"Customers":[{"CustomerID":1056,"CustomerName":"Kalyani Benjaree","PhoneNumber":"(212) 555-0100","FaxNumber":"(212) 555-0101","ct":[{"CityName":"Bowmansville","p":[{"PrimaryContactFullName":"Kalyani Benjaree","PrimaryContactPreferredName":"Kalyani"}]}]}]}
Step9: check cached_plan and Re-compilations counter
SELECT cp.*
FROM sys.dm_exec_cached_plans as cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_db
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_obj
WHERE
pa_db.attribute = 'dbid'
AND pa_db.value = DB_ID('WideWorldImporters')
AND pa_obj.attribute = 'objectid'
AND pa_obj.value = OBJECT_ID('Website.SearchForCustomers')
(1 row affected)
bucketid refcounts usecounts size_in_bytes memory_object_address cacheobjtype objtype plan_handle pool_id parent_plan_handle
30650 2 2 303104 0x000001B572C94060 Compiled Plan Proc 0x0500050061562F38304AC17EB501000001000000000000000000000000000000000000000000000000000000 2 NULL
and see the use counts: 2…
usecounts refcounts
2 2
SELECT
counter_name,cntr_value
FROM sys.dm_os_performance_counters as pc
WHERE pc.counter_name = 'SQL Re-Compilations/sec'
SQL Re-Compilations/sec 24
When the update statistics statement is triggered and there are modifications in the statistics object, the plan is not evicted from cache; it can be reused.
The change shows up in the Re-Compilations counter. And this counter is increased NOT after the update statistics statement, but AFTER the execution of the stored procedure/query.
S
What happens when you launch an update statistics for a table?
Q: Will it trigger a recompilation of the sp and then the invalidation of the existing execution plan?
A: It depends! A plan is invalidated only when the statistics actually changed.
When there are no changes, the flow will be:
update statistics -> stop.
When there are changes in statistics:
update statistics -> triggers -> plan invalidated -> recompilation.
No cached plan is marked for recompilation or marked as invalid ahead of time. You can use the extended event sqlserver.sql_statement_recompile, or sys.dm_os_performance_counters -> SQL Re-Compilations/sec, to see which plan is invalidated and triggered for recompilation.
As you can see, the stored procedure is a select-only sp, with no insert/update/delete. So the table is not modified, and therefore the statistics objects are not modified. Key point here: statistics objects are not modified. So there is no need for a recompilation.
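A sketch of such an event session for the sqlserver.sql_statement_recompile event (the session name is hypothetical):

```sql
CREATE EVENT SESSION [TrackStatementRecompiles]
ON SERVER
ADD EVENT sqlserver.sql_statement_recompile
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION [TrackStatementRecompiles]
ON SERVER
STATE = START;
```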
Step1,2:
DBCC FreeProcCache;
go
SELECT cp.*
FROM sys.dm_exec_cached_plans as cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_db
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_obj
WHERE
pa_db.attribute = 'dbid'
AND pa_db.value = DB_ID('WideWorldImporters')
AND pa_obj.attribute = 'objectid'
AND pa_obj.value = OBJECT_ID('Website.SearchForCustomers')
(0 rows affected)
SELECT
counter_name,cntr_value
FROM sys.dm_os_performance_counters as pc
WHERE pc.counter_name = 'SQL Re-Compilations/sec'
OR pc.counter_name = 'SQL Compilations/sec'
/*SQL Re-Compilations/sec 23 */
In my case Re-Compilations/sec = 23
Step3:
EXEC Website.SearchForCustomers @SearchText = N'Aly', @MaximumRowsToReturn = 3
JSON_F52E2B61-18A1-11d1-B105-00805F49916B
{"Customers":[{"CustomerID":1056,"CustomerName":"Kalyani Benjaree","PhoneNumber":"(212) 555-0100","FaxNumber":"(212) 555-0101","ct":[{"CityName":"Bowmansville","p":[{"PrimaryContactFullName":"Kalyani Benjaree","PrimaryContactPreferredName":"Kalyani"}]}]}]}
Step4: Check the cached_plan
SELECT cp.*
FROM sys.dm_exec_cached_plans as cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_db
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_obj
WHERE
pa_db.attribute = 'dbid'
AND pa_db.value = DB_ID('WideWorldImporters')
AND pa_obj.attribute = 'objectid'
AND pa_obj.value = OBJECT_ID('Website.SearchForCustomers')
(1 row affected)
plan_handle
0x0500050061562F38304AC17EB501000001000000000000000000000000000000000000000000000000000000
Step5: run update statistics
UPDATE STATISTICS Sales.Customers WITH FULLSCAN;
Commands completed successfully.
Step6: check cached_plan
SELECT cp.*
FROM sys.dm_exec_cached_plans as cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_db
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_obj
WHERE
pa_db.attribute = 'dbid'
AND pa_db.value = DB_ID('WideWorldImporters')
AND pa_obj.attribute = 'objectid'
AND pa_obj.value = OBJECT_ID('Website.SearchForCustomers')
(1 row affected)
plan_handle
0x0500050061562F38304AC17EB501000001000000000000000000000000000000000000000000000000000000
SELECT
counter_name,cntr_value
FROM sys.dm_os_performance_counters as pc
WHERE pc.counter_name = 'SQL Re-Compilations/sec'
OR pc.counter_name = 'SQL Compilations/sec'
/*SQL Re-Compilations/sec 23 */
The plan is there.
Step 7: run the sp again = Website.SearchForCustomers:
EXEC Website.SearchForCustomers @SearchText = N'Aly', @MaximumRowsToReturn = 3
(1 row affected)
JSON_F52E2B61-18A1-11d1-B105-00805F49916B
{"Customers":[{"CustomerID":1056,"CustomerName":"Kalyani Benjaree","PhoneNumber":"(212) 555-0100","FaxNumber":"(212) 555-0101","ct":[{"CityName":"Bowmansville","p":[{"PrimaryContactFullName":"Kalyani Benjaree","PrimaryContactPreferredName":"Kalyani"}]}]}]}
Step8: check cached_plan
SELECT cp.*
FROM sys.dm_exec_cached_plans as cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_db
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) as pa_obj
WHERE
pa_db.attribute = 'dbid'
AND pa_db.value = DB_ID('WideWorldImporters')
AND pa_obj.attribute = 'objectid'
AND pa_obj.value = OBJECT_ID('Website.SearchForCustomers')
(1 row affected)
bucketid refcounts usecounts size_in_bytes memory_object_address cacheobjtype objtype plan_handle pool_id parent_plan_handle
30650 2 2 303104 0x000001B572C94060 Compiled Plan Proc 0x0500050061562F38304AC17EB501000001000000000000000000000000000000000000000000000000000000 2 NULL
and see the use counts: 2…
usecounts refcounts
2 2
Because there are no modifications on the statistics, the plan is not invalidated.
Step 9:
In the next post: when we have modifications, what happens with recompilation and the cached plan?
#tsql #t-sql #sqlserver #msSqlServer #Recompilation #statistics #update
S