How to minimize / plan ahead the effect of running sp_delete_backuphistory? Measure the gains too!
While running the following query:

-- DATEADD (datepart, number, date)
DECLARE @dt DATETIME
SELECT @dt = DATEADD(MONTH, -6, GETDATE())
SELECT @dt
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @dt

I get lots of locks and blocks, possibly because this command has not been run for a while, if ever. (I use DATEADD to keep only the most recent 6 months of history.)

Is there a way to find out how much there is to delete in each of the involved tables before I actually run, or plan to run, this command?

sp_delete_backuphistory must be run from the msdb database, and it trims the following tables:
backupfile
backupfilegroup
backupmediafamily
backupmediaset
backupset
restorefile
restorefilegroup
restorehistory
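One way to preview the volume before the purge is to count the rows older than the cutoff in each table directly. A minimal sketch, assuming the standard msdb schema and mirroring the question's 6-month window (extend the same join pattern to the remaining tables):

```sql
-- Preview: rows that sp_delete_backuphistory would remove, per table.
DECLARE @dt DATETIME = DATEADD(MONTH, -6, GETDATE());

SELECT 'backupset' AS table_name, COUNT(*) AS rows_to_delete
FROM msdb.dbo.backupset
WHERE backup_finish_date < @dt
UNION ALL
SELECT 'backupfile', COUNT(*)
FROM msdb.dbo.backupfile bf
JOIN msdb.dbo.backupset bs ON bs.backup_set_id = bf.backup_set_id
WHERE bs.backup_finish_date < @dt
UNION ALL
SELECT 'restorehistory', COUNT(*)
FROM msdb.dbo.restorehistory rh
JOIN msdb.dbo.backupset bs ON bs.backup_set_id = rh.backup_set_id
WHERE bs.backup_finish_date < @dt;
```

Note these counts can be slow for the same reason the purge is slow: without extra indexes, the joins scan the history tables.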
sql-server backup delete scripting monitoring
The msdb database doesn't have indexes on these history tables, so if you have had many backup/restore operations over time, purging records can take a long time. I don't know of a way to see the records to delete beforehand, but you can try creating indexes as suggested in the following post, and do your purge in batches (very old dates first, then dates closer to the last 6 months): weblogs.sqlteam.com/geoffh/2008/01/21/msdb-performance-tuning
– EzLo
Mar 28 at 12:12
@EzLo thank you for the link: MSDB Performance Tuning
– Marcello Miorelli
Mar 28 at 12:28
@marcellomiorelli, just a thought: get the estimated execution plan, and before each insert into a table variable you can check the estimated number of rows. It's not guaranteed, but it's a good guess.
– Biju jose
Mar 28 at 13:34
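EzLo's batching suggestion can be sketched as a loop that advances the cutoff date one step at a time, so each call deletes a small slice and holds locks briefly. A sketch, assuming the batch interval (one month here) is tuned to your history volume:

```sql
-- Purge in monthly batches, oldest history first.
DECLARE @cutoff DATETIME;
SELECT @cutoff = DATEADD(MONTH, 1, MIN(backup_finish_date))
FROM msdb.dbo.backupset;

WHILE @cutoff < DATEADD(MONTH, -6, GETDATE())
BEGIN
    EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;
    SET @cutoff = DATEADD(MONTH, 1, @cutoff);  -- advance one month per batch
END
```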
asked Mar 28 at 11:49
Marcello Miorelli
2 Answers
My gripes with this proc go back a long way:
- The Annals of Hilariously Bad Code, Part 1: Critique the Code
- The Annals of Hilariously Bad Code, Part 2
The problem you run into when deleting large amounts of data is the crappy estimate you get from the table variables.
I've had pretty good luck creating a new version of the proc using temp tables. You could also try just adding recompile hints, but hey, this way we get useful indexes.
As a side note: if you still run into this blocking because this is running long, you can try either removing the transaction code, or changing it to encapsulate each individual delete (though at that point the benefits are negligible).
CREATE PROCEDURE [dbo].[sp_delete_backuphistory_pro]
    @oldest_date DATETIME
AS
BEGIN
    SET NOCOUNT ON;

    CREATE TABLE #backup_set_id (backup_set_id INT PRIMARY KEY CLUSTERED);
    CREATE TABLE #media_set_id (media_set_id INT PRIMARY KEY CLUSTERED);
    CREATE TABLE #restore_history_id (restore_history_id INT PRIMARY KEY CLUSTERED);

    INSERT INTO #backup_set_id WITH (TABLOCKX) (backup_set_id)
    SELECT DISTINCT backup_set_id
    FROM msdb.dbo.backupset
    WHERE backup_finish_date < @oldest_date;

    INSERT INTO #media_set_id WITH (TABLOCKX) (media_set_id)
    SELECT DISTINCT media_set_id
    FROM msdb.dbo.backupset
    WHERE backup_finish_date < @oldest_date;

    INSERT INTO #restore_history_id WITH (TABLOCKX) (restore_history_id)
    SELECT DISTINCT restore_history_id
    FROM msdb.dbo.restorehistory
    WHERE backup_set_id IN (SELECT backup_set_id FROM #backup_set_id);

    BEGIN TRANSACTION;

    DELETE FROM msdb.dbo.backupfile
    WHERE backup_set_id IN (SELECT backup_set_id FROM #backup_set_id);
    IF (@@ERROR > 0) GOTO Quit;

    DELETE FROM msdb.dbo.backupfilegroup
    WHERE backup_set_id IN (SELECT backup_set_id FROM #backup_set_id);
    IF (@@ERROR > 0) GOTO Quit;

    DELETE FROM msdb.dbo.restorefile
    WHERE restore_history_id IN (SELECT restore_history_id FROM #restore_history_id);
    IF (@@ERROR > 0) GOTO Quit;

    DELETE FROM msdb.dbo.restorefilegroup
    WHERE restore_history_id IN (SELECT restore_history_id FROM #restore_history_id);
    IF (@@ERROR > 0) GOTO Quit;

    DELETE FROM msdb.dbo.restorehistory
    WHERE restore_history_id IN (SELECT restore_history_id FROM #restore_history_id);
    IF (@@ERROR > 0) GOTO Quit;

    DELETE FROM msdb.dbo.backupset
    WHERE backup_set_id IN (SELECT backup_set_id FROM #backup_set_id);
    IF (@@ERROR > 0) GOTO Quit;

    -- Only remove media families/sets no longer referenced by any backupset row.
    DELETE msdb.dbo.backupmediafamily
    FROM msdb.dbo.backupmediafamily bmf
    WHERE bmf.media_set_id IN (SELECT media_set_id FROM #media_set_id)
      AND ((SELECT COUNT(*)
            FROM msdb.dbo.backupset
            WHERE media_set_id = bmf.media_set_id) = 0);
    IF (@@ERROR > 0) GOTO Quit;

    DELETE msdb.dbo.backupmediaset
    FROM msdb.dbo.backupmediaset bms
    WHERE bms.media_set_id IN (SELECT media_set_id FROM #media_set_id)
      AND ((SELECT COUNT(*)
            FROM msdb.dbo.backupset
            WHERE media_set_id = bms.media_set_id) = 0);
    IF (@@ERROR > 0) GOTO Quit;

    COMMIT TRANSACTION;
    RETURN;

Quit:
    ROLLBACK TRANSACTION;
END
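The rewritten procedure takes the same parameter as the original, so the call from the question carries over unchanged. A usage sketch:

```sql
-- Same 6-month cutoff as the original call, pointed at the temp-table version.
DECLARE @dt DATETIME = DATEADD(MONTH, -6, GETDATE());
EXEC msdb.dbo.sp_delete_backuphistory_pro @oldest_date = @dt;
```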
Here is something you could try.

1. Restore a backup of your MSDB database to a test server and call it something like MSDB_TEST.
2. Once restored, go into the sp_delete_backuphistory stored procedure in the MSDB_TEST database, search/replace msdb. with msdb_test., and alter it.
3. Capture the current row counts of the tables you are interested in.
4. Now, run the altered version of the sp_delete_backuphistory stored procedure in the MSDB_TEST database.
5. Compare the new row counts to the previously captured ones.
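The row-count capture in step 3 can be done in one query against the catalog views rather than counting each table by hand. A sketch, run in the restored MSDB_TEST copy before and after the purge:

```sql
-- Heap (index_id 0) or clustered index (index_id 1) rows per history table.
SELECT t.name AS table_name, SUM(p.rows) AS row_count
FROM sys.tables t
JOIN sys.partitions p
  ON p.object_id = t.object_id AND p.index_id IN (0, 1)
WHERE t.name IN ('backupset', 'backupfile', 'backupfilegroup',
                 'backupmediafamily', 'backupmediaset',
                 'restorefile', 'restorefilegroup', 'restorehistory')
GROUP BY t.name
ORDER BY t.name;
```

sys.partitions row counts are approximate under concurrent activity, but on an idle test restore they are exact.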
edited Mar 28 at 12:57
answered Mar 28 at 12:47
Erik Darling
answered Mar 28 at 12:11
Scott Hodgin