Check DB

How do you find the last known good CHECKDB date?

Starting with SQL Server 2016 SP2, we have DATABASEPROPERTYEX('dbname', 'LastGoodCheckDbTime').

In SQL Server 2017 CU14 (I tested it), this also works.

Like so:

SELECT DATABASEPROPERTYEX('db_workspace', 'LastGoodCheckDbTime')

or

SELECT DATABASEPROPERTYEX(d.name, 'LastGoodCheckDbTime')
FROM sys.databases AS d

 

A value of `1900-01-01 00:00:00.000` means CHECKDB was never run.
Please see here for more info.
S

New in 2019

SQL Server 2019 is, at this moment, still in CTP.

There is a new DMF, sys.dm_exec_query_plan_stats, starting with CTP 2.4.

This DMF allows you to access the last ACTUAL execution plan, with runtime statistics, for any query. Without doing a thing (almost).

You must have a trace flag activated, TF 2451, which enables the lightweight profiling infrastructure. (No need to restart the instance.)

More details are here.
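As a quick sketch of how you might use it (this follows the public CTP documentation, so names and behavior may still change before release):

```sql
-- enable lightweight profiling instance-wide (no restart needed)
DBCC TRACEON (2451, -1);
GO

-- last known ACTUAL plan for each cached query
SELECT st.text        AS query_text,
       qps.query_plan AS last_actual_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan_stats(cp.plan_handle) AS qps;
```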

 

S


Temp table cache 2

First test: DROP TABLE statement vs no DROP TABLE statement.

 

I have two stored procedures:

use WideWorldImporters
Go

CREATE OR ALTER PROCEDURE dbo.temp_table_cache_test_WithDROP
AS
BEGIN
SET NOCOUNT ON;
CREATE TABLE #temp_WithDrop
(InvoiceID int not null
,InvoiceMsg varchar(100) not null);

INSERT INTO #temp_WithDrop(InvoiceID,InvoiceMsg)
SELECT TOP(10000) InvoiceID
, CAST(InvoiceID as VARCHAR(30))
+ '-' + CAST(NEWID() as VARCHAR(50))
FROM Sales.Invoices;

/* explicit DROP TABLE statement */
DROP TABLE #temp_WithDrop;
END
GO
CREATE OR ALTER PROCEDURE dbo.temp_table_cache_test_noDROP
AS
BEGIN
SET NOCOUNT ON;
CREATE TABLE #temp_noDrop
(InvoiceID int not null
,InvoiceMsg varchar(100) not null);

INSERT INTO #temp_noDrop(InvoiceID,InvoiceMsg)
SELECT TOP(10000) InvoiceID
, CAST(InvoiceID as VARCHAR(30))
+ '-' + CAST(NEWID() as VARCHAR(50))
FROM Sales.Invoices;

--DROP TABLE #temp_noDrop;
END
GO

 

Now, let's call them and watch, with the help of the DMV sys.dm_os_memory_cache_counters:

select entries_count
from sys.dm_os_memory_cache_counters
where name = N'Temporary Tables & Table Variables';

 

 

DBCC FREEPROCCACHE;
GO

select entries_count
from sys.dm_os_memory_cache_counters
where name = N'Temporary Tables & Table Variables';

Exec dbo.temp_table_cache_test_WithDROP;

select entries_count
from sys.dm_os_memory_cache_counters
where name = N’Temporary Tables & Table Variables’;
GO

 

Output:

entries_count
0

entries_count
1

 
/* now without DROP */
DBCC FREEPROCCACHE;
GO

select entries_count
from sys.dm_os_memory_cache_counters
where name = N'Temporary Tables & Table Variables';

Exec dbo.temp_table_cache_test_noDROP;

select entries_count
from sys.dm_os_memory_cache_counters
where name = N'Temporary Tables & Table Variables';
GO

 

The same output:

entries_count
0

entries_count
1

 

So, DROP TABLE or no DROP TABLE will not prevent caching.

 

 

S

Temp Table cache

When you write code that uses a temp table, the key point is the ability to cache the object.

When an object (temp table) is cached, the DROP TABLE statement will, behind the scenes, rename the table, and the next CREATE TABLE will bring back the right name.

There are some things that will inhibit the caching:

  • named constraints
  • a DDL statement after the initial CREATE statement
  • WITH RECOMPILE
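For example, here is a minimal sketch of the first inhibitor (procedure and constraint names are my own); you can verify it with the same entries_count query used in the tests above:

```sql
CREATE OR ALTER PROCEDURE dbo.temp_table_cache_test_namedConstraint
AS
BEGIN
SET NOCOUNT ON;
-- the NAMED constraint below should inhibit temp table caching
CREATE TABLE #temp_named
(InvoiceID int not null
CONSTRAINT pk_temp_named PRIMARY KEY (InvoiceID));

INSERT INTO #temp_named(InvoiceID) VALUES (1);
END
GO

DBCC FREEPROCCACHE;
GO
EXEC dbo.temp_table_cache_test_namedConstraint;

select entries_count
from sys.dm_os_memory_cache_counters
where name = N'Temporary Tables & Table Variables';
-- if caching is inhibited, entries_count should stay at 0
```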

 

Some myths about caching:

  • DROP TABLE will not prevent it
  • UPDATE STATISTICS will not prevent it
  • TRUNCATE TABLE will also not prevent it

But manual creation of statistics will prevent it.

See more here

 

Now let’s test:

What should we test?

  • DROP TABLE statement vs no DROP TABLE statement
  • altering table vs creating DDL from start
  • adding index vs DDL statement with inline index

 

S


Page Split

A page split occurs when there is not enough space on a particular data page to insert or update a specific record. There are three scenarios that generate page splits:

  • End/sequential/identity insert = good page split = when we insert at the end of the table. Example: imagine you have an identity field which is also the PK and clustered index; every new record gets id+1.
  • Mid-table insert = bad page split.
  • Update = bad page split = when we update a record on a page that doesn't have enough free space to accommodate the extra bytes. It needs extra space, and voilà, a page split.

A bad page split is bad because additional information needs to be saved in the transaction log:

  • adding a new page
  • moving half of the records to the new page (inserting into the new page and deleting from the old page, i.e. marking them as deleted)
  • updating the link pointers in the pages

 

Here is a sample of it:

We have a simple table, clustered on the id field:

--create a table with an id and some string, so that 2 records will be stored in a page
drop table if exists dbo.pageSplit;
create table dbo.pageSplit
(
id int not null,
myChar char(3500) not null,
constraint pk_pageSplit_id PRIMARY KEY CLUSTERED(id)
)
go

Now, let's start by inserting one record, with id = 3, and see the log data generated.

begin transaction
insert into dbo.pageSplit(id,myChar)
values(3,REPLICATE('3',3500));

select database_transaction_log_bytes_used
from sys.dm_tran_database_transactions
where database_id = DB_ID('db_workspace');
--3796
commit transaction

 

Log data generated

[image: pageSplit_Insert_Value_3]

 

Next step, insert value 2:

begin transaction
insert into dbo.pageSplit(id,myChar)
values(2,REPLICATE('2',3500));

select database_transaction_log_bytes_used
from sys.dm_tran_database_transactions
where database_id = DB_ID('db_workspace');
--3756
commit transaction

[image: pageSplit_Insert_Value_2]

[image: pageSplit_Insert_Value_2_after]

 

Step 3, insert a record with value 1.

begin transaction
insert into dbo.pageSplit(id,myChar)
values(1,REPLICATE('1',3500));

select database_transaction_log_bytes_used
from sys.dm_tran_database_transactions
where database_id = DB_ID('db_workspace');
--5784
commit transaction

The log data generated is 5784 bytes, roughly 50% more than the previous inserts. Wow…

You can monitor it with performance counters:

SQLServer:AccessMethods:Page Splits/Sec

Or from DMVs:

select cntr_value
from sys.dm_os_performance_counters
where counter_name = 'Page Splits/sec';

Code here

S


Transaction Log information

To get information about the transaction log, we usually use:

DBCC SQLPERF(LOGSPACE) = how much log space is used

DBCC LOGINFO = information about VLFs (virtual log files)

 

But there are new DMVs that can do the same things, and maybe more:

for DBCC SQLPERF, we can use sys.dm_db_log_space_usage
for DBCC LOGINFO, we can use sys.dm_db_log_info
sys.dm_db_log_stats can also be used
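A quick sketch of the new objects (the column picks are mine; note that sys.dm_db_log_info and sys.dm_db_log_stats are functions taking a database id, while sys.dm_db_log_space_usage is a view scoped to the current database):

```sql
-- log space usage for the current database (replaces DBCC SQLPERF(LOGSPACE))
SELECT total_log_size_in_bytes,
       used_log_space_in_bytes,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;

-- one row per VLF (replaces DBCC LOGINFO)
SELECT file_id, vlf_size_mb, vlf_active, vlf_status
FROM sys.dm_db_log_info(DB_ID());

-- summary stats: VLF count, log size, last log backup time
SELECT total_vlf_count, total_log_size_mb, log_backup_time
FROM sys.dm_db_log_stats(DB_ID());
```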

 

 

S