I recently wanted a script that would tell me, for every database on a given server:

- What levels of backups I have
- How many files would need to be restored to get to the most recent backup state
- The size of all the files I'd need to restore
- How up to date this process could get me
For example, if we have a database with the following backup schedule:

- 23:00 - Full backup
- 06:00, 12:00, 18:00 - Differential backups
- Every 15 minutes - Log backup
I wanted to know, given the backups that exist at the time I run the script, how up to date a restore could get me and how many files would be involved. The output of the new procedure looks like this…
This information only shows child items created after the last parent item in the chain, for example:

- Only differential backups created after the last full backup
- Only log backups created after the last differential, or, if there is no differential, after the last full backup
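As a rough illustration of this chain logic (this is a sketch against msdb, not the actual sp_BackupStatus script, which is at the bottom of this post), for each database you can find the last full backup, the last differential after it, and then any log backups after whichever of those is most recent:

```sql
-- Sketch: backup chain per database from msdb history.
-- In backupset, type 'D' = full, 'I' = differential, 'L' = log.
;WITH LastFull AS (
    SELECT database_name, MAX(backup_finish_date) AS LastFullDate
    FROM msdb.dbo.backupset
    WHERE type = 'D'
    GROUP BY database_name
), LastDiff AS (
    SELECT bs.database_name, MAX(bs.backup_finish_date) AS LastDiffDate
    FROM msdb.dbo.backupset bs
    JOIN LastFull lf ON lf.database_name = bs.database_name
    WHERE bs.type = 'I'
      AND bs.backup_finish_date > lf.LastFullDate
    GROUP BY bs.database_name
)
SELECT lf.database_name,
       lf.LastFullDate,
       ld.LastDiffDate,
       COUNT(l.backup_set_id)      AS LogsToRestore,
       MAX(l.backup_finish_date)   AS CanRestoreTo
FROM LastFull lf
LEFT JOIN LastDiff ld ON ld.database_name = lf.database_name
LEFT JOIN msdb.dbo.backupset l
    ON  l.database_name = lf.database_name
    AND l.type = 'L'
    -- only logs after the last differential, falling back to the last full
    AND l.backup_finish_date > COALESCE(ld.LastDiffDate, lf.LastFullDate)
GROUP BY lf.database_name, lf.LastFullDate, ld.LastDiffDate;
```

The real procedure also joins to msdb.dbo.backupmediafamily to list the physical files and sum their sizes, but the chain-walking idea is the same.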
Given this information we can see that to get up to date we need to:

- Restore 1 full backup
- Restore 1 differential backup
- Restore 1 log backup
Doing this will get us to within about 15 minutes of where the database currently is; we can see this from the fact that our most recent log backup is 15 minutes old.
I’m going to walk through an example of creating some backups and restoring them with this logic. If you want to skip ahead and just get the backup status script then it’s at the bottom of this post.
I’ve created a database called RandomDB; it has no backup history and no backups are scheduled. The output of sp_BackupStatus now looks like this…
If we then run a full backup and look at sp_BackupStatus again…
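For reference, the full backup is just a plain BACKUP DATABASE (the path below is an example, use whatever makes sense on your server):

```sql
BACKUP DATABASE RandomDB
TO DISK = 'C:\Backups\RandomDB_Full.bak'  -- example path
WITH COMPRESSION, STATS = 10;
```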
Then let’s run a few differential backups mixed in with a load of transaction log backups (with pauses in between, just to make the information returned from sp_BackupStatus a little clearer)…
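The differentials and log backups are along these lines; file names and the delay are illustrative, and the database needs to be in the FULL recovery model for log backups to work:

```sql
BACKUP DATABASE RandomDB
TO DISK = 'C:\Backups\RandomDB_Diff1.bak'  -- example path
WITH DIFFERENTIAL, COMPRESSION;

WAITFOR DELAY '00:01:00';  -- pause so the backup times are clearly spaced

BACKUP LOG RandomDB
TO DISK = 'C:\Backups\RandomDB_Log1.trn'   -- example path
WITH COMPRESSION;
```

Repeat the log backup (with new file names) a few times, mixing in the occasional differential.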
Even though we took 7 log backups, the script is letting us know that to get to the latest possible version we only need 1 full, 1 differential and 1 log…
To confirm this is up to date, we can also check that the Test table we created right before the last log backup exists…
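The check itself is just a query against the restored database (the table name comes from the walkthrough; the query is illustrative):

```sql
-- After restoring the full, the differential and the final log backup,
-- the Test table created just before that last log backup should exist:
SELECT COUNT(*) AS RowsInTest
FROM RandomDB.dbo.Test;
```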
One of the things I really like about this is that at any point in the day I can run sp_BackupStatus and quickly see the total size of the backup files I’d need to restore to get to the latest possible point in time. This can also give a good indication of how long the restore would take.
And now for the sp_BackupStatus script…
Disclaimer: This was cobbled together in an evening and probably has all sorts of bugs. I’ve put a version of it in my scripts repository on GitHub (https://github.com/gavdraper/GavinScripts), feel free to submit issues and pull requests there.
Log shipping is one of the simplest and most bulletproof methods to get SQL Server to replicate data to a different server/location. For the most part, you set it up and don’t need to touch it again, it just works. Out of the box, the agent jobs SQL Server sets up for this generate alerts when a backup/restore hasn’t run for a period of time, notifying you that there is a problem.
One thing you don’t get, however, is any nice way to see how up to date each of your databases is on the secondary. With a fairly simple query we can combine the database name, the last restored time and the backup time of the file we’re restoring into some useful information.
To make this even more interesting we can add some RPO thresholds to derive a status field…
At the top there are 3 defined RPO thresholds; if the time of the last restored file falls behind these, the status field will start to show warnings. From here you could easily set up custom alerts in SQL Server or your monitoring tool of choice to sound alarms when things fall behind.
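A sketch of that query on the secondary, using restorehistory joined to backupset for the backup_start_date (simplified here to two thresholds; the values and status labels are arbitrary examples, not the post's exact script):

```sql
-- Example RPO thresholds (minutes behind before the status changes)
DECLARE @WarningMins INT = 30,
        @ErrorMins   INT = 60;

SELECT rh.destination_database_name           AS DatabaseName,
       MAX(rh.restore_date)                   AS LastRestore,
       MAX(bs.backup_start_date)              AS RestoredToBackupTime,
       CASE
           WHEN DATEDIFF(MINUTE, MAX(bs.backup_start_date), GETDATE()) > @ErrorMins   THEN 'Error'
           WHEN DATEDIFF(MINUTE, MAX(bs.backup_start_date), GETDATE()) > @WarningMins THEN 'Warning'
           ELSE 'OK'
       END                                    AS [Status]
FROM msdb.dbo.restorehistory rh
JOIN msdb.dbo.backupset bs ON bs.backup_set_id = rh.backup_set_id
GROUP BY rh.destination_database_name;
```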
On my demo server the results look like this…
Do you have any other ways you use to check this information? I’d be interested to hear about alternatives.
Edit : Thanks to LondonDBA in the comments for pointing out the backup_start_date field in the backupset table, which is a much cleaner option to the string manipulation on the filename that I was originally doing.
I’ve been meaning to start a series of posts on “Dipping your toes into the cloud” for a while now; there are a number of things you can do to slowly take advantage of the cloud without having to re-architect your whole on-premise setup. This post will serve as part one of that series.
One of the easiest ways to start leveraging the “cloud” with minimal changes is to start moving your backups to your provider of choice. In this post I’m going to use Azure, as SQL Server has built-in support for it.
First up we need to log in to the Azure Portal and create a new storage account; the portal UI changes frequently, so I’ll avoid too many screenshots. To add a storage account…

- Hit the “Create a Resource” button and search for storage account
- Give your account a name
- Choose a data centre location
- For account type, choose one of the general purpose ones (the Blob storage option will not work for what we’re doing)
- Pick a replication strategy
- Pick an access tier; I’ll normally use Cold Storage as backups won’t be frequently accessed
Next up let’s get the keys required to access this account…

- Open the newly created storage account and navigate to the Access Keys menu item
- Copy the value in Key 1
We now need to create a credential in our on-premise SQL Server to access this…
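The credential pairs the storage account name with the access key we just copied; the credential and account names below are placeholders:

```sql
CREATE CREDENTIAL AzureBackupCredential            -- placeholder name
WITH IDENTITY = 'mystorageaccount',                -- your storage account name
     SECRET   = '<paste the Key 1 value here>';    -- the access key copied above
```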
We then need to configure our blob container…

- Go back to your new storage account in the Azure Portal and click on Blobs in the menu
- Click the new container button, give it a name and click OK
- Click the newly created container
- Click Properties in the menu
- Take a copy of the URL
We now have all we need to back up a database to the new blob storage container. The following T-SQL will create a new backup in the blob storage container we just created…
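Something along these lines: the URL is the container URL copied above plus a file name, and the account, container, database and credential names are placeholders for whatever you created:

```sql
BACKUP DATABASE RandomDB
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/RandomDB.bak'
WITH CREDENTIAL = 'AzureBackupCredential',  -- the credential created earlier
     COMPRESSION, STATS = 10;
```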
If you then go back to the Azure Portal and open the “Storage Explorer” under the storage resource, you can browse to Blob Containers and into your new container, where you’ll see the backup you just took…
Finally, let’s restore our backup from Azure to a new on-premise database…
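A restore from a URL works much like a restore from disk; the logical file names and target paths below are assumptions, so check yours first with RESTORE FILELISTONLY:

```sql
-- Check the logical file names inside the backup first:
RESTORE FILELISTONLY
FROM URL = 'https://mystorageaccount.blob.core.windows.net/backups/RandomDB.bak'
WITH CREDENTIAL = 'AzureBackupCredential';

-- Then restore to a new database, moving the files to local paths:
RESTORE DATABASE RandomDB_FromAzure
FROM URL = 'https://mystorageaccount.blob.core.windows.net/backups/RandomDB.bak'
WITH CREDENTIAL = 'AzureBackupCredential',
     MOVE 'RandomDB'     TO 'C:\Data\RandomDB_FromAzure.mdf',
     MOVE 'RandomDB_log' TO 'C:\Data\RandomDB_FromAzure_log.ldf';
```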