Showing posts with label multiple. Show all posts

Tuesday, March 20, 2012

Restoring multiple TRN files to another computer

I have a backup process that does full backups
periodically with transaction logs backed up every 5
minutes. In the event that I need to restore the backup
and apply a large number of TRN files on another computer
(for example if the server dies), is there an easy way?
If I do the restore on the original server, the interface
is easy because it knows about all the recent backups,
but when I do the restore to another server, the backups
are not listed in the interface so it seems I have to
choose "restore from device" and then select the backup
file and each TRN file individually which is very tedious.
Rick Harrison.
In EM 2000, you can generate backup history from the contents of the backup
device. I've never used it (I prefer to do RESTORE through T-SQL commands), but
my guess is that the feature was put there for this type of situation.
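The T-SQL route to the same information is simply to read the device contents yourself; a minimal sketch (the file path below is just a placeholder):
-- List every backup set recorded in the file/device (this is what EM
-- reads when it builds history from the device contents)
RESTORE HEADERONLY FROM DISK = 'D:\Backups\MyDB_full.bak'
-- List the database files contained in a particular backup set
RESTORE FILELISTONLY FROM DISK = 'D:\Backups\MyDB_full.bak' WITH FILE = 1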
--
Tibor Karaszi, SQL Server MVP
Archive at:
http://groups.google.com/groups?oi=djq&as_ugroup=microsoft.public.sqlserver
"rick@.knowware.com" <anonymous@.discussions.microsoft.com> wrote in message
news:0bd901c3bb52$2cbc0da0$a501280a@.phx.gbl...
> I have a backup process that does full backups
> periodically with transaction logs backed up every 5
> minutes. In the event that I need to restore the backup
> and apply a large number of TRN files on another computer
> (for example if the server dies), is there an easy way?
> If I do the restore on the original server, the interface
> is easy because it knows about all the recent backups,
> but when I do the restore to another server, the backups
> are not listed in the interface so it seems I have to
> choose "restore from device" and then select the backup
> file and each TRN file individually which is very tedious.
> Rick Harrison.
|||We recently recovered from a total disk array failure to
another SQL server. We were fortunate enough to have all
the .BAK & .TRN files on tape. But without a little
program I wrote to automate the restore, it would have
taken an unreasonable amount of time to apply each TLog
using the GUI.
If you're interested, I can send you some VB code. It
uses the SQL DMO objects to restore DB's & files. I still
need to finalize some things, because I had to step
through the code & make minor adjustments to restore
everything. But I'm supposed to finalize the tool soon.
If you have VB knowledge, or someone at your company does,
you could take what I have now & make alterations to suit
your needs. Just let me know,
Gene Daigle
>--Original Message--
>I have a backup process that does full backups
>periodically with transaction logs backed up every 5
>minutes. In the event that I need to restore the backup
>and apply a large number of TRN files on another computer
>(for example if the server dies), is there an easy way?
>If I do the restore on the original server, the interface
>is easy because it knows about all the recent backups,
>but when I do the restore to another server, the backups
>are not listed in the interface so it seems I have to
>choose "restore from device" and then select the backup
>file and each TRN file individually which is very tedious.
> Rick Harrison.
>.
>

Restoring multiple transaction logs on another server

Dear knowledgeable people,
Is there a simple way to restore the latest full backup, plus all the transaction
log backups, onto another server? I know you can on the same server, 'cause
all the backup history is there in msdb. But how many people back msdb up
every 15 minutes?
Consider the case:
Full backup of database @ 03:00 each day
T-log backups every 15 minutes
Crash at 19:00 takes out the entire server, but not the backup files, 'cause
you cunningly configured File Replication Service to copy them elsewhere.
When restoring, do you really have to type 60+ "restore log ... with norecovery"
commands (without typos) to get the database back?
i.e. should I polish up the following script (error handling / differential backups ...)
and/or write my own .exe to intelligently parse a backup directory, or am I
totally missing the point? (Me programmer, not normally sysadmin.)
Regards
AJ
CREATE procedure spAJGRestoreAsFarAsYouCan
(
    @Path varchar(200) = 'c:\mssql\backup\',
    @DBName varchar(30) = 'ALC',
    @DoItOrJustPrint char(1) = 'Y'
)
as
set nocount on

-- Get list of files in the backup directory (bare names, oldest first)
declare @SqlCmd varchar(300)
set @SqlCmd = 'dir ' + @Path + '*.* /a-d /o:d /t:c /n /B'

create table #DirList
(
    RowID int identity,
    FileOnly varchar(400)
)
insert #DirList ( FileOnly )
exec master..xp_cmdshell @SqlCmd

-- Last full backup ( .bak )
declare @FullBackup varchar(300)
declare @CurrentID int
select top 1
    @CurrentID = RowID,
    @FullBackup = @Path + FileOnly
from #DirList
where patindex( '%.bak%', FileOnly) > 0
order by RowID desc
-- if @FullBackup is NULL return -1

-- Restore the full backup, leaving the database ready for log restores
set @SqlCmd = 'restore database [' + @DBName + '] from disk = ''' + @FullBackup + ''' with norecovery'
print @SqlCmd
if @DoItOrJustPrint = 'Y'
    exec(@SqlCmd)

-- Apply every .trn file that follows the full backup, in directory order
declare @TransLog varchar(300)
set @TransLog = 'dummy'
while @TransLog is not NULL
begin
    set @TransLog = NULL
    select top 1
        @TransLog = @Path + FileOnly,
        @CurrentID = RowID
    from #DirList
    where patindex( '%.trn%', FileOnly) > 0
    and RowID > @CurrentID
    order by RowID
    if @TransLog is not NULL
    begin
        set @SqlCmd = 'restore log [' + @DBName + '] from disk = ''' + @TransLog + ''' with norecovery'
        print @SqlCmd
        if @DoItOrJustPrint = 'Y'
            exec(@SqlCmd)
    end
end

-- Bring the database online
set @SqlCmd = 'restore database [' + @DBName + '] with recovery'
print @SqlCmd
if @DoItOrJustPrint = 'Y'
    exec(@SqlCmd)

drop table #DirList
return (0)
I'd just polish the script. I'd like to point out the following, though:
* You may want to change dir /t:c to dir /t:w
* If your directory may have files other than the database/log backup files, you may want to check each file with RESTORE HEADERONLY to make sure that it's a backup file
* Instead of relying on the file date/time to determine the order of the restore, you can also use RESTORE HEADERONLY to make sure that the order is correct
* Using RESTORE HEADERONLY, you can even determine whether you have gaps in your log backups by inspecting the LSN's (log sequence numbers)
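For what it's worth, a minimal sketch of the RESTORE HEADERONLY check suggested above (the file name is just an example):
-- Inspect a candidate file before restoring it.
-- BackupType in the output: 1 = database backup, 2 = transaction log, 5 = differential.
-- For the restore order / gap check: each log backup's FirstLSN must equal the
-- LastLSN of the backup restored just before it, otherwise the chain has a gap.
RESTORE HEADERONLY FROM DISK = 'c:\mssql\backup\ALC_20031101_1215.trn'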
--
Linchi Shea
linchi_shea@.NOSPAMml.com
"Andrew John" <aj@.DELETEmistrose.com> wrote in message
news:O5yClz$fDHA.616@.TK2MSFTNGP11.phx.gbl...
> Dear knowledgeable people,
> Is there a simple way to restore the latest full backup, plus all the
transaction
> log backups on to another server. I know you can on the same server,
'cause
> all the backup history is there in MSDB. But how many people back msdb up
> every 15 minutes?
> Consider the case:
> Full Backup of Database @. 0300 each day
> T.log backups every 15 minutes
> Crash at 19:00 takes out entire server, but not the backup files 'cause
> you cunningly configured File Replication Service to copy them elsewhere.
> When restoring, do you really have to type 60+ "restore log ... with no
recovery"
> commands (without typos) to get the database back?
> i.e Should I polish up the following script ( error handling /
differential backups ...)
> and/or write my own .exe to intelligently parse a backup directory, or am
I
> totally missing the point? ( Me programmer not normally sysadmin ).
> Regards
> AJ
> CREATE procedure spAJGRestoreAsFarAsYouCan
> (
> @.Path varchar(200) = 'c:\mssql\backup\',
> @.DBName varchar(30) = 'ALC',
> @.DoItOrJustPrint char(1) = 'Y'
> )
> as
> set nocount on
> -- Get list of files in directory
> declare @.SqlCmd varchar(300)
> set @.SqlCmd = 'dir ' + @.Path + '*.* /a-d /o:d /t:c /n /B'
> create table #DirList
> (
> RowID int identity,
> FileOnly varchar(400)
> )
> insert #DirList ( FileOnly )
> exec master..xp_cmdshell @.SqlCmd
> -- Last full backup ( .bak )
> declare @.FullBackup varchar(300)
> declare @.CurrentID int
> Select top 1
> @.CurrentID = RowID,
> @.FullBackup = @.Path + FileOnly
> from #DirList
> where patindex( '%.bak%', FileOnly) > 0
> order by RowID desc
> -- if @.FullBackup is NULL return -1
> Set @.SqlCmd = 'restore database [' + @.DBName + '] from disk = ''' +
@.FullBackup + ''' with norecovery'
> print @.SQlCmd
> if @.DoItOrJustPrint = 'Y'
> exec(@.SQLCmd)
> Declare @.TransLog varchar(300)
> Set @.TransLog = 'dummy'
> While @.TransLog is Not NULL
> Begin
> Set @.TransLog = NULL
> Select top 1
> @.TransLog = @.Path + FileOnly,
> @.CurrentID = RowID
> from #DirList
> where patindex( '%.trn%', FileOnly) > 0
> and RowID > @.CurrentID
> order by RowID
> if @.TransLog is not NULL
> Begin
> Set @.SqlCmd = 'restore log [' + @.DBName + '] from disk = ''' +
@.TransLog + ''' with norecovery'
> print @.SQlCmd
> if @.DoItOrJustPrint = 'Y'
> exec(@.SQLCmd)
> End
> End
>
> Set @.SqlCmd = 'restore database [' + @.DBName + '] with recovery'
> print @.SQlCmd
> if @.DoItOrJustPrint = 'Y'
> exec(@.SQLCmd)
> drop table #DirList
> return (0)
>|||Andrew,
Yes, you need to be prepared for such situations.
I would write a stored procedure that accepts three parameters
(dbname, from_hour, to_hour).
You need to keep the transaction log backups with appropriate file names, like
my_log1200, then my_log1215.
"Andrew John" <aj@.DELETEmistrose.com> wrote in message
news:O5yClz$fDHA.616@.TK2MSFTNGP11.phx.gbl...
> Dear knowledgeable people,
> Is there a simple way to restore the latest full backup, plus all the
transaction
> log backups on to another server. I know you can on the same server,
'cause
> all the backup history is there in MSDB. But how many people back msdb up
> every 15 minutes?
> Consider the case:
> Full Backup of Database @. 0300 each day
> T.log backups every 15 minutes
> Crash at 19:00 takes out entire server, but not the backup files 'cause
> you cunningly configured File Replication Service to copy them elsewhere.
> When restoring, do you really have to type 60+ "restore log ... with no
recovery"
> commands (without typos) to get the database back?
> i.e Should I polish up the following script ( error handling /
differential backups ...)
> and/or write my own .exe to intelligently parse a backup directory, or am
I
> totally missing the point? ( Me programmer not normally sysadmin ).
> Regards
> AJ
> CREATE procedure spAJGRestoreAsFarAsYouCan
> (
> @.Path varchar(200) = 'c:\mssql\backup\',
> @.DBName varchar(30) = 'ALC',
> @.DoItOrJustPrint char(1) = 'Y'
> )
> as
> set nocount on
> -- Get list of files in directory
> declare @.SqlCmd varchar(300)
> set @.SqlCmd = 'dir ' + @.Path + '*.* /a-d /o:d /t:c /n /B'
> create table #DirList
> (
> RowID int identity,
> FileOnly varchar(400)
> )
> insert #DirList ( FileOnly )
> exec master..xp_cmdshell @.SqlCmd
> -- Last full backup ( .bak )
> declare @.FullBackup varchar(300)
> declare @.CurrentID int
> Select top 1
> @.CurrentID = RowID,
> @.FullBackup = @.Path + FileOnly
> from #DirList
> where patindex( '%.bak%', FileOnly) > 0
> order by RowID desc
> -- if @.FullBackup is NULL return -1
> Set @.SqlCmd = 'restore database [' + @.DBName + '] from disk = ''' +
@.FullBackup + ''' with norecovery'
> print @.SQlCmd
> if @.DoItOrJustPrint = 'Y'
> exec(@.SQLCmd)
> Declare @.TransLog varchar(300)
> Set @.TransLog = 'dummy'
> While @.TransLog is Not NULL
> Begin
> Set @.TransLog = NULL
> Select top 1
> @.TransLog = @.Path + FileOnly,
> @.CurrentID = RowID
> from #DirList
> where patindex( '%.trn%', FileOnly) > 0
> and RowID > @.CurrentID
> order by RowID
> if @.TransLog is not NULL
> Begin
> Set @.SqlCmd = 'restore log [' + @.DBName + '] from disk = ''' +
@.TransLog + ''' with norecovery'
> print @.SQlCmd
> if @.DoItOrJustPrint = 'Y'
> exec(@.SQLCmd)
> End
> End
>
> Set @.SqlCmd = 'restore database [' + @.DBName + '] with recovery'
> print @.SQlCmd
> if @.DoItOrJustPrint = 'Y'
> exec(@.SQLCmd)
> drop table #DirList
> return (0)
>|||Thank you all for your comments.
I'm glad to see I wasn't completely off track.
I am a bit surprised by /t:c vs /t:w. Wouldn't that
cause issues if the full backup took more than 15 minutes?
(It won't in my case - but I did think of that.)
What I am trying to write is something that can be run
in the wee hours of the morning, by some overstressed
service technician, without a microscopic examination of
file names and a touch-typing course.
Pretty please, Mr Microsoft - it wouldn't be too hard to put
some MSDB data into the T-log backups, would it?
Thanks Again
AJ
"Linchi Shea" <linchi_shea@.NOSPAMml.com> wrote in message news:uygo6aAgDHA.1648@.TK2MSFTNGP09.phx.gbl...
> I'd just polish the script. I'd like to point out the following, though:
> * You may want to change dir /t:c to dir /t:w
> * If your directory may have files other than the database/log backup files,
> you may want to check
> each file with RESTORE HEADERONLY to make sure that it's a backup file
> * Instead of relying on the file date/time to determine the order of the
> restore, you can also use
> RESTORE HEADERONLY to make sure that the order is correct.
> * Using RESTORE HEADERONLY, you can even determine whether you have gaps in
> your log
> backups by inspecting the LSN's (log sequence numbers)
> --
> Linchi Shea
> linchi_shea@.NOSPAMml.com
>
> "Andrew John" <aj@.DELETEmistrose.com> wrote in message
> news:O5yClz$fDHA.616@.TK2MSFTNGP11.phx.gbl...
> > Dear knowledgeable people,
> >
> > Is there a simple way to restore the latest full backup, plus all the
> transaction
> > log backups on to another server. I know you can on the same server,
> 'cause
> > all the backup history is there in MSDB. But how many people back msdb up
> > every 15 minutes?
> >
> > Consider the case:
> >
> > Full Backup of Database @. 0300 each day
> > T.log backups every 15 minutes
> > Crash at 19:00 takes out entire server, but not the backup files 'cause
> > you cunningly configured File Replication Service to copy them elsewhere.
> >
> > When restoring, do you really have to type 60+ "restore log ... with no
> recovery"
> > commands (without typos) to get the database back?
> >
> > i.e Should I polish up the following script ( error handling /
> differential backups ...)
> > and/or write my own .exe to intelligently parse a backup directory, or am
> I
> > totally missing the point? ( Me programmer not normally sysadmin ).
> >
> > Regards
> > AJ
> >
> > CREATE procedure spAJGRestoreAsFarAsYouCan
> > (
> > @.Path varchar(200) = 'c:\mssql\backup\',
> > @.DBName varchar(30) = 'ALC',
> > @.DoItOrJustPrint char(1) = 'Y'
> > )
> > as
> >
> > set nocount on
> > -- Get list of files in directory
> > declare @.SqlCmd varchar(300)
> > set @.SqlCmd = 'dir ' + @.Path + '*.* /a-d /o:d /t:c /n /B'
> >
> > create table #DirList
> > (
> > RowID int identity,
> > FileOnly varchar(400)
> > )
> >
> > insert #DirList ( FileOnly )
> > exec master..xp_cmdshell @.SqlCmd
> >
> > -- Last full backup ( .bak )
> > declare @.FullBackup varchar(300)
> > declare @.CurrentID int
> >
> > Select top 1
> > @.CurrentID = RowID,
> > @.FullBackup = @.Path + FileOnly
> > from #DirList
> > where patindex( '%.bak%', FileOnly) > 0
> > order by RowID desc
> >
> > -- if @.FullBackup is NULL return -1
> >
> > Set @.SqlCmd = 'restore database [' + @.DBName + '] from disk = ''' +
> @.FullBackup + ''' with norecovery'
> > print @.SQlCmd
> > if @.DoItOrJustPrint = 'Y'
> > exec(@.SQLCmd)
> >
> > Declare @.TransLog varchar(300)
> > Set @.TransLog = 'dummy'
> > While @.TransLog is Not NULL
> > Begin
> >
> > Set @.TransLog = NULL
> > Select top 1
> > @.TransLog = @.Path + FileOnly,
> > @.CurrentID = RowID
> > from #DirList
> > where patindex( '%.trn%', FileOnly) > 0
> > and RowID > @.CurrentID
> > order by RowID
> >
> > if @.TransLog is not NULL
> > Begin
> > Set @.SqlCmd = 'restore log [' + @.DBName + '] from disk = ''' +
> @.TransLog + ''' with norecovery'
> > print @.SQlCmd
> > if @.DoItOrJustPrint = 'Y'
> > exec(@.SQLCmd)
> > End
> > End
> >
> >
> > Set @.SqlCmd = 'restore database [' + @.DBName + '] with recovery'
> > print @.SQlCmd
> > if @.DoItOrJustPrint = 'Y'
> > exec(@.SQLCmd)
> >
> > drop table #DirList
> >
> > return (0)
> >
> >
>|||> I always like using backup / restore for version control as it
> controls dev data versions along with schema objects etc.
IMHO, DDL (and selected DML) belongs in a version control system. This
facilitates new application installs as well as upgrades. Test data can
also be stored, but it's not as important to keep that under version
control.
--
Hope this helps.
Dan Guzman
SQL Server MVP
--
SQL FAQ links (courtesy Neil Pike):
http://www.ntfaq.com/Articles/Index.cfm?DepartmentID=800
http://www.sqlserverfaq.com
http://www.mssqlserver.com/faq
--
"Greg Linwood" <g_linwood@.hotmail.com> wrote in message
news:eWKx7SAgDHA.1748@.TK2MSFTNGP10.phx.gbl...
> Hi Andrew.
> I regularly need to do this on software dev projects for testing,
release
> etc.. I always like using backup / restore for version control as it
> controls dev data versions along with schema objects etc. So, it
sounds like
> my needs are similar to yours. (c:
> My approach is to use similar scripts / app logic, but there's a fair
bit of
> complexity as there's a full app with additional features not relevant
to
> your question, so I can't easily post scripts as such.
> The feature that I use though is to restore file headers, extract LSNs
&
> apply in order, which works whether the files are .bak, .diff or .trn.
I've
> found that use of .diffs mitigate your "60+ retore log statements"
problem
> significantly. Perhaps you could adjust your script to include .diff,
> restore headers & retore log / diff / db in LSN order?
> HTH
> Regards,
> Greg Linwood
> SQL Server MVP
> "Andrew John" <aj@.DELETEmistrose.com> wrote in message
> news:O5yClz$fDHA.616@.TK2MSFTNGP11.phx.gbl...
> > Dear knowledgeable people,
> >
> > Is there a simple way to restore the latest full backup, plus all
the
> transaction
> > log backups on to another server. I know you can on the same
server,
> 'cause
> > all the backup history is there in MSDB. But how many people back
msdb up
> > every 15 minutes?
> >
> > Consider the case:
> >
> > Full Backup of Database @. 0300 each day
> > T.log backups every 15 minutes
> > Crash at 19:00 takes out entire server, but not the backup files
'cause
> > you cunningly configured File Replication Service to copy them
elsewhere.
> >
> > When restoring, do you really have to type 60+ "restore log ...
with no
> recovery"
> > commands (without typos) to get the database back?
> >
> > i.e Should I polish up the following script ( error handling /
> differential backups ...)
> > and/or write my own .exe to intelligently parse a backup directory,
or am
> I
> > totally missing the point? ( Me programmer not normally sysadmin ).
> >
> > Regards
> > AJ
> >
> > [AJ's script snipped - see the original post above]
>|||> I also advocate using VSS, this is really just supplemental dev team
> procedure.
I see what you mean now. Backup / restore certainly has its place for
taking snapshots to aid iterative testing. I'm not a big fan of VSS but
there are other better, albeit much more expensive, source control
products out there.
--
Hope this helps.
Dan Guzman
SQL Server MVP
"Greg Linwood" <g_linwood@.hotmail.com> wrote in message
news:e3v3LvEgDHA.4024@.TK2MSFTNGP11.phx.gbl...
> Experience has made me (slightly) wary of VSS which is enough to
warrant
> extra protection! So, on my projects, I have been supplementing
version
> control of database objects by implementing fully logged backups (with
> transaction log marks in sql2k or timestamps in earlier versions).
With this
> approach you get bullet-proof backup & recoverability of all db source
/
> objects and data thrown in as well. This approach works on any
platform.
> Assuming the project doesn't have massive test / dev data
requirements, it's
> easy to store these on tape or ide media or just mix into a business's
> normal backup procedures.
> Being able to easily restore dev / test environments to marks or
specific
> points in time is a powerful technique for regression testing. As a
side
> benefit you also get early indications of capacity requirements for
backup /
> restore which a data tier architect will find useful when documenting
> capacity expectations.
> I also advocate using VSS, this is really just supplemental dev team
> procedure.
> Regards,
> Greg Linwood
> SQL Server MVP
> "Dan Guzman" <danguzman@.nospam-earthlink.net> wrote in message
> news:ez8u7kEgDHA.3896@.tk2msftngp13.phx.gbl...
> > > I always like using backup / restore for version control as it
> > > controls dev data versions along with schema objects etc.
> >
> > IMHO, DDL (and selected DML) belongs in a version control system.
This
> > facilitates new application installs as well as upgrades. Test data
can
> > also be stored but its not as important to keep that under version
> > control.
> >
> > --
> > Hope this helps.
> >
> > Dan Guzman
> > SQL Server MVP
> >
> > --
> > SQL FAQ links (courtesy Neil Pike):
> >
> > http://www.ntfaq.com/Articles/Index.cfm?DepartmentID=800
> > http://www.sqlserverfaq.com
> > http://www.mssqlserver.com/faq
> > --
> >
> > "Greg Linwood" <g_linwood@.hotmail.com> wrote in message
> > news:eWKx7SAgDHA.1748@.TK2MSFTNGP10.phx.gbl...
> > > Hi Andrew.
> > >
> > > I regularly need to do this on software dev projects for testing,
> > release
> > > etc.. I always like using backup / restore for version control as
it
> > > controls dev data versions along with schema objects etc. So, it
> > sounds like
> > > my needs are similar to yours. (c:
> > >
> > > My approach is to use similar scripts / app logic, but there's a
fair
> > bit of
> > > complexity as there's a full app with additional features not
relevant
> > to
> > > your question, so I can't easily post scripts as such.
> > >
> > > The feature that I use though is to restore file headers, extract
LSNs
> > &
> > > apply in order, which works whether the files are .bak, .diff or
.trn.
> > I've
> > > found that use of .diffs mitigate your "60+ retore log statements"
> > problem
> > > significantly. Perhaps you could adjust your script to include
.diff,
> > > restore headers & retore log / diff / db in LSN order?
> > >
> > > HTH
> > >
> > > Regards,
> > > Greg Linwood
> > > SQL Server MVP
> > >
> > > "Andrew John" <aj@.DELETEmistrose.com> wrote in message
> > > news:O5yClz$fDHA.616@.TK2MSFTNGP11.phx.gbl...
> > > > Dear knowledgeable people,
> > > >
> > > > Is there a simple way to restore the latest full backup, plus
all
> > the
> > > transaction
> > > > log backups on to another server. I know you can on the same
> > server,
> > > 'cause
> > > > all the backup history is there in MSDB. But how many people
back
> > msdb up
> > > > every 15 minutes?
> > > >
> > > > Consider the case:
> > > >
> > > > Full Backup of Database @. 0300 each day
> > > > T.log backups every 15 minutes
> > > > Crash at 19:00 takes out entire server, but not the backup files
> > 'cause
> > > > you cunningly configured File Replication Service to copy them
> > elsewhere.
> > > >
> > > > When restoring, do you really have to type 60+ "restore log ...
> > with no
> > > recovery"
> > > > commands (without typos) to get the database back?
> > > >
> > > > i.e Should I polish up the following script ( error handling /
> > > differential backups ...)
> > > > and/or write my own .exe to intelligently parse a backup
directory,
> > or am
> > > I
> > > > totally missing the point? ( Me programmer not normally
sysadmin ).
> > > >
> > > > Regards
> > > > AJ
> > > >
> > > > [AJ's script snipped - see the original post above]
> > >
> > >
> >
> >
>

Restoring multiple transaction logs from a single file

Hi all,
I have a home-grown log shipping setup at work which I'm trying to
modify. At present if the standby server cannot apply the logs before
the live server writes a new set (usually because the db cannot be
locked), it fails and we have to do a full resync between the servers.
I've put in a mechanism whereby the status of the last attempt to apply
the logs to the standby database is recorded. If the primary sees that
the last attempt was successful it does a BACKUP LOG ... WITH INIT.
If it sees the last attempt was not successful it does the same but
WITH NOINIT, which I gather appends the new logs onto the end of the
current lot. This way the transaction logs will queue up every hour
until I can solve whatever the issue between the servers is so they can
be applied, thus saving a full 4 hour resync on a 220GB database.
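(In T-SQL the primary-side decision described above amounts to something like the sketch below; the status table and all names are made up for illustration.)
-- Append to the existing log backup file if the standby has not caught up,
-- otherwise overwrite and start a fresh file.
DECLARE @LastApplyOK bit
SELECT @LastApplyOK = LastApplySucceeded
FROM dbo.StandbyStatus
WHERE DatabaseName = 'MyDB'
IF @LastApplyOK = 1
    BACKUP LOG MyDB TO DISK = 'E:\LogShip\MyDB_log.bak' WITH INIT     -- overwrite
ELSE
    BACKUP LOG MyDB TO DISK = 'E:\LogShip\MyDB_log.bak' WITH NOINIT   -- append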
Problem is this. For example, the secondary can't apply logs because
it can't get exclusive lock on the db. It marks that database as out
of sync in a table. Next time the primary backs up logs it sees the
secondary is not up to date so it appends the logs to the last lot
instead of overwriting them (WITH NOINIT). Next time the secondary
does manage to apply the logs. Now I thought that would mean it gets
ALL the logs and applies them, except I get this error message:
Executed as user: STEWARTMILNE\sqlosp1. The log in this backup set
begins at LSN 1247944000000500300001, which is too late to apply to the
database. An earlier log backup that includes LSN
1247592000000017600001 can be restored. [SQLSTATE 42000] (Error 4305)
RESTORE LOG is terminating abnormally. [SQLSTATE 42000] (Error 3013).
The step failed.
Now I know that means that basically there is a gap between the last
log to be applied and the one we're attempting to apply now. What I
don't understand is how that can be since the last lot of logs to be
successfully applied included more than one BACKUP LOG's worth. How
can I tell RESTORE LOG to apply ALL the log backups appended together
instead of just the first ones in the set?
TIA
Niall
> How
> can I tell RESTORE LOG to apply ALL the log backups appended together
> instead of just the first ones in the set?
You can't. So you have to write some code that uses RESTORE HEADERONLY and, based on the result, does
several RESTORE LOG commands.
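A rough sketch of that, assuming the appended log backups all live in one file (path and database name are placeholders); the Position column from RESTORE HEADERONLY is the value to pass to WITH FILE:
-- List every backup set appended to the file; note the Position column
RESTORE HEADERONLY FROM DISK = 'E:\LogShip\MyDB_log.bak'
-- Apply each appended set in order, then recover (or leave in STANDBY)
RESTORE LOG MyDB FROM DISK = 'E:\LogShip\MyDB_log.bak' WITH FILE = 1, NORECOVERY
RESTORE LOG MyDB FROM DISK = 'E:\LogShip\MyDB_log.bak' WITH FILE = 2, NORECOVERY
-- ...one RESTORE LOG per Position reported by RESTORE HEADERONLY
RESTORE DATABASE MyDB WITH RECOVERY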
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
<niallporter@.yahoo.co.uk> wrote in message
news:1158846017.059168.76240@.m73g2000cwd.googlegroups.com...
> Hi all,
> I have a home-grown log shipping setup at work which I'm trying to
> modify. At present if the standby server cannot apply the logs before
> the live server writes a new set (usually because the db cannot be
> locked), it fails and we have to do a full resync between the servers.
> I've put in a mechanism whereby the status of the last attempt to apply
> the logs to the standby database is recorded. If the primary sees that
> the last attempt was successful it does a BACKUP LOG ... WITH INIT.
> If it sees the last attempt was not successful it does the same but
> WITH NOINIT, which I gather appends the new logs onto the end of the
> current lot. This way the transaction logs will queue up every hour
> until I can solve whatever the issue between the servers is so they can
> be applied, thus saving a full 4 hour resync on a 220GB database.
> Problem is this. For example, the secondary can't apply logs because
> it can't get exclusive lock on the db. It marks that database as out
> of sync in a table. Next time the primary backs up logs it sees the
> secondary is not up to date so it appends the logs to the last lot
> instead of overwriting them (WITH NOINIT). Next time the secondary
> does manage to apply the logs. Now I thought that would mean it gets
> ALL the logs and applies them, except I get this error message:
> Executed as user: STEWARTMILNE\sqlosp1. The log in this backup set
> begins at LSN 1247944000000500300001, which is too late to apply to the
> database. An earlier log backup that includes LSN
> 1247592000000017600001 can be restored. [SQLSTATE 42000] (Error 4305)
> RESTORE LOG is terminating abnormally. [SQLSTATE 42000] (Error 3013).
> The step failed.
> Now I know that means that basically there is a gap between the last
> log to be applied and the one we're attempting to apply now. What I
> don't understand is how that can be since the last lot of logs to be
> successfully applied included more than one BACKUP LOG's worth. How
> can I tell RESTORE LOG to apply ALL the log backups appended together
> instead of just the first ones in the set?
> TIA
> Niall
>


Restoring multiple transaction logs

Can anyone tell me if there is an easy way to restore
multiple transaction logs at the same time as a group from
within the Enterprise console? When I select multiple
(consecutive) transaction logs to restore I receive a
message about the transaction log not being part of "a
multifamily(RAID) media set". There has to be an easier
way.
The error message says that you striped the backup, i.e., it doesn't have anything to do with applying
multiple logs in sequence.
--
Tibor Karaszi, SQL Server MVP
Archive at: http://groups.google.com/groups?oi=djq&as_ugroup=microsoft.public.sqlserver
"Jim" <brutjimmy@.yahoo.com> wrote in message news:041201c38391$a1798c60$a401280a@.phx.gbl...
> Can anyone tell me if there is an easy way to restore
> multiple transaction logs at the same time as a group from
> within the Enterprise console? When I select multiple
> (consecutive) transaction logs to restore I receive a
> message about the transaction log not being part of "a
> multifamily(RAID) media set". There has to be an easier
> way.
|||It sounds like you are trying to restore multiple disk backup files together
as if it was a single "backup set".
In EM, under "Restore Database", you should be able to find a dialog that
lets you choose a "recovery plan" of multiple steps. There will be a "check
box" to click for each backup set to apply (starting with a full database
backup, etc).
"Jim" <brutjimmy@.yahoo.com> wrote in message
news:041201c38391$a1798c60$a401280a@.phx.gbl...
> Can anyone tell me if there is an easy way to restore
> multiple transaction logs at the same time as a group from
> within the Enterprise console? When I select multiple
> (consecutive) transaction logs to restore I receive a
> message about the transaction log not being part of "a
> multifamily(RAID) media set". There has to be an easier
> way.

Restoring Multiple Logs In One Operation

I am working with a client on backup strategy. Their database undergoes a
full backup 2-3 times a week with hourly log backups during business hours
(8:00-5:00). When doing a restore, there are 10-15 log files with a TRN
extension that I need to restore one by one. Is there a way to restore the
logs as a group?
--
Larry Menzin
American Techsystems Corp.
Not without executing several RESTORE LOG commands. You can automate this by writing some code for
your client. Hard to give further recommendations as we don't know anything about how the backups
are taken, named, etc. But using the backup history tables in msdb is what EM does.
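As an illustration of the msdb route (a sketch only; run it on the server that took the backups, with your database name swapped in for the placeholder):
-- Build the RESTORE LOG statements for every log backup taken after the
-- most recent full backup, from the backup history in msdb
DECLARE @LastFull datetime
SELECT @LastFull = MAX(bs.backup_finish_date)
FROM msdb.dbo.backupset bs
WHERE bs.database_name = 'MyDB' AND bs.type = 'D'
SELECT 'RESTORE LOG [MyDB] FROM DISK = ''' + bmf.physical_device_name + ''' WITH NORECOVERY'
FROM msdb.dbo.backupset bs
JOIN msdb.dbo.backupmediafamily bmf ON bmf.media_set_id = bs.media_set_id
WHERE bs.database_name = 'MyDB'
AND bs.type = 'L'
AND bs.backup_finish_date > @LastFull
ORDER BY bs.backup_finish_date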
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"Larry Menzin" <LarryMenzin@.discussions.microsoft.com> wrote in message
news:CDDCDA6F-255D-4069-99C5-C10637DDD4EE@.microsoft.com...
>I am working with a client on backup strategy. Their database undergoes a
> full backup 2-3 times a week with hourly log backups during business hours
> (8:00-5:00). When doing a restore, there are 10-15 log files with a TRN
> extension that I need to restore one by one. Is there a way to restore the
> logs as a group?
> --
> Larry Menzin
> American Techsystems Corp.

Restoring multiple backup files simultaneously to new server

Alright, here's the deal. I'm testing some backup/restore strategies, and hitting a (slight) sticking point.

We've got collections of database and log backups created by the usual maintenance plans on a 2000 Enterprise machine. I'm trying to run through a restore onto a new 2005 machine (Developer Edition on my test workstation) using the collection of .bak and .trn files copied from the 2000 server. When I try to restore to a new database on 2005 via SSMS, and select all the .bak and .trn files for the restore, I get the ol' "The volume on device '[trimmed]' is not part of a multiple family media set. BACKUP WITH FORMAT can be used to form a new media set." error.

I'm assuming this just means that SQL Server can't verify that these log backups are in fact part of a functional "set", even if they aren't part of a traditional backup media set. Is there any way to tell SSMS, "It's okay man, just restore the database from these files, in this order - trust me," or is the only solution restoring every individual log file one at a time? (Which seems to work fine, though is a tremendous pain with any more than a few log backups.) Seems like there ought to be a good one-shot method to restore a bunch of backups to a different server, and I'm just not finding it.
Did you try just writing a script to do this? You can use xp_cmdshell to execute a dir which is piped to a file. You can then BCP that file into a table. Then iterate across the table restoring one file after another.
|||Hmm, assuming there's no in-built functionality in SSMS to facilitate this, then I suppose that's the most straightforward solution (short of doing ugly tricks with backup info in msdb). This approach would depend on getting the transaction logs in the right order, but with the time stamps in the file names, and time stamps on the files themselves, I don't think this would be an issue. Thanks for the idea fuel. I can't shake the feeling that it can be made ever so slightly more elegant, though. ;-)|||
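A minimal sketch of the dir-into-a-table idea from the reply above (INSERT ... EXEC is used here rather than BCP for brevity; the path and database name are placeholders):
-- Capture the directory listing, oldest file first
CREATE TABLE #trn (RowID int IDENTITY, FileName varchar(400))
INSERT #trn (FileName)
EXEC master..xp_cmdshell 'dir /b /o:d C:\Restore\*.trn'
-- Walk the list and restore each log in order
DECLARE @f varchar(400), @sql varchar(1000)
DECLARE c CURSOR LOCAL FAST_FORWARD FOR
    SELECT FileName FROM #trn WHERE FileName LIKE '%.trn' ORDER BY RowID
OPEN c
FETCH NEXT FROM c INTO @f
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = 'RESTORE LOG [MyDB] FROM DISK = ''C:\Restore\' + @f + ''' WITH NORECOVERY'
    EXEC (@sql)
    FETCH NEXT FROM c INTO @f
END
CLOSE c
DEALLOCATE c
RESTORE DATABASE [MyDB] WITH RECOVERY
DROP TABLE #trn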

The problem is that when you specify multiple backup files for one restore operation, it looks to SSMS as if you are trying to give it a stripe set. It then does a standard check to make sure that all of the files you specify belong to the same stripe set and that there are no missing files. This of course fails because the files are not a stripe set, but sequential log backups.

So, probably your best bet for now is to create a script. Very high on my list of "backup/restore tools I'd like to build" is a wizard that understands what backup files are required to get database X restored to time Y, and does the right thing. (we should have all the info needed in the MSDB backup history tables as long as the backups haven't been moved) Unfortunately, I haven't managed to scrape together the needed resources yet.

Friday, March 9, 2012

Restoring from Multiple data sets

Running 2005, I have backups with multiple datasets on them.
But when I open the restore backup windows, it only shows me the last set.
How do I see the all the sets so I can choose which set to restore?
Thanks,
Tom
I don't understand the statement "backups with multiple datasets on them".
Do you mean you have made multiple backups to the same file? If so, did you
make sure you didn't INIT the file each time?
--
Kevin G. Boles
Indicium Resources, Inc.
SQL Server MVP
kgboles a earthlink dt net
"tshad" <tshad@.dslextreme.com> wrote in message
news:upzCs2uqIHA.3616@.TK2MSFTNGP06.phx.gbl...
> Running 2005, I have backups with multiple datasets on them.
> But when I open the restore backup windows, it only shows me the last set.
> How do I see the all the sets so I can choose which set to restore?
> Thanks,
> Tom
>
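If the file really does contain several backup sets, the T-SQL way to see and pick them (paths and names below are placeholders) is:
-- The Position column identifies each set stored in the file
RESTORE HEADERONLY FROM DISK = 'D:\Backups\MyDB.bak'
-- Restore a specific set, e.g. the second one
RESTORE DATABASE MyDB FROM DISK = 'D:\Backups\MyDB.bak' WITH FILE = 2, REPLACE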

Wednesday, March 7, 2012

Restoring FileGroups...

We are currently in the process of designing a Consolidated System (data for multiple organizations stored in one database server) using SQL Server 2005. One of the design goals is to refresh individual company data from Production to "User Acceptance Server" without affecting other companies.

So, we thought of implementing this via partitioning by Org. We need a "proof of concept" that this can be done with a simple example. Since our DBAs are also new to SQL 2005, we are working together to achieve the goal. OK, here is what we are trying to achieve:

1. Create a Database (Test1) with two file Groups (FileGroup1, FileGroup2) with two files in each file group.

2. Create two tables on this database, one on each FileGroup

Table 1 on FileGroup1, Table 2 on FileGroup2.

3. Populate data on these tables.

4. Create an exact copy of this database with the above architecture and call it Test2.

5. Populate Test2, with a different set of data.

6. Backup FileGroup2 on Test2 and restore it to Test1 database.

What we did:

1. I did the full database backup of Test2 via SQL Server Management Studio. I also backed up the log file using:

BACKUP LOG PGM TO DISK='D:\Test2_Log.bak' WITH NORECOVERY

2. Restore the database filegroup using:

RESTORE DATABASE Test1

FILEGROUP = 'FileGroup2'

FROM DISK ='D:\Test2.bak' WITH

MOVE 'FileName2_A' TO 'G:\Data\File2_A.ndf',

MOVE 'FileName2_B' TO 'G:\Data\File2_B.ndf'

3. RESTORE LOG PGM

FROM DISK = 'D:\Test2_Log.bak'

WITH RECOVERY

After restoring the backup filegroup to Test1 database, I'm not able to query the table that resides in the filegroup that was restored. Error returned is:

Msg 8653, Level 16, State 1, Line 2

The query processor is unable to produce a plan for the table or view 'Junk2' because the table resides in a filegroup which is not online.

We know, we are missing some pieces while creating the backup or restoring that is causing this problem. If somebody can point it out, or provide us a "Working Example"...

Thanks for your help.
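As a quick diagnostic for the error above, this sketch (SQL Server 2005 catalog views, using the Test1 database from the question) shows which files and filegroups are actually online after the restore:
SELECT fg.name AS filegroup_name, df.name AS file_name, df.state_desc
FROM Test1.sys.database_files df
LEFT JOIN Test1.sys.filegroups fg ON fg.data_space_id = df.data_space_id
-- state_desc should read ONLINE for FileGroup2 once the filegroup restore
-- sequence (including the log restore) has completed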

Take a full backup
BACKUP DATABASE _BIMAL_FG_TEST
TO DISK = 'O:\Test_Backups\_BIMAL_FG_TEST_FULL.bak'
GO

Or take File group backups
BACKUP DATABASE _BIMAL_FG_TEST
FILEGROUP = 'PRIMARY'
TO DISK = 'O:\Test_Backups\_BIMAL_FG_TEST_FG_PRIMARY.bak'

BACKUP DATABASE _BIMAL_FG_TEST
FILEGROUP = 'FileGroup_A'
TO DISK = 'O:\Test_Backups\_BIMAL_FG_TEST_FG_A.bak'

BACKUP DATABASE _BIMAL_FG_TEST
FILEGROUP = 'FileGroup_B'
TO DISK = 'O:\Test_Backups\_BIMAL_FG_TEST_FG_B.bak'

You must take a log backup
BACKUP LOG _BIMAL_FG_TEST
TO DISK = 'O:\Test_Backups\_BIMAL_FG_TEST_LOG.TRN'

On the target database you have to first restore the Primary FG

RESTORE DATABASE _BIMAL_FG_TEST_COPY
FILEGROUP = 'PRIMARY'
FROM DISK = 'O:\Test_Backups\_BIMAL_FG_TEST_FULL.bak'
WITH MOVE '_BIMAL_FG_TEST' to 'O:\SQLData\_BIMAL_FG_TEST_COPY.mdf'
, MOVE '_BIMAL_FG_TEST_log' to 'P:\SQLLog\_BIMAL_FG_TEST_COPY_log.LDF'
,STATS=20
,NORECOVERY, REPLACE, PARTIAL

Then Restore the FileGroup you want
RESTORE DATABASE _BIMAL_FG_TEST_COPY
FILEGROUP = 'FileGroup_A'
FROM DISK = 'O:\Test_Backups\_BIMAL_FG_TEST_FULL.bak'
WITH MOVE 'BIMAL_FILEGROUP_TEST_1' to 'O:\SQLData\BIMAL_FILEGROUP_TEST_COPY_1.ndf'
,MOVE 'BIMAL_FILEGROUP_TEST_2' to 'O:\SQLData\BIMAL_FILEGROUP_TEST_COPY_2.ndf'
,STATS=20
,REPLACE
,NORECOVERY

And then restore the Log backup to complete the recovery
RESTORE LOG _BIMAL_FG_TEST_COPY
FROM DISK = 'O:\Test_Backups\_BIMAL_FG_TEST_LOG.TRN'
WITH RECOVERY

And if you want to restore another filegroup, just follow the last two steps.
RESTORE DATABASE _BIMAL_FG_TEST_COPY
FILEGROUP = 'FileGroup_B'
FROM DISK = 'O:\Test_Backups\_BIMAL_FG_TEST_FULL.bak'
WITH MOVE 'BIMAL_FILEGROUP_TEST_3' to 'O:\SQLData\BIMAL_FILEGROUP_TEST_COPY_3.ndf'
,MOVE 'BIMAL_FILEGROUP_TEST_4' to 'O:\SQLData\BIMAL_FILEGROUP_TEST_COPY_4.ndf'
,STATS=20
,NORECOVERY

RESTORE LOG _BIMAL_FG_TEST_COPY
FROM DISK = 'O:\Test_Backups\_BIMAL_FG_TEST_LOG.TRN'
WITH RECOVERY

This may not solve your problem, but it will give you a working example you can play with to see whether what you need is possible or not.


Saturday, February 25, 2012

Restoring deleted records with out of row blobs?

I have a database that had a number of rows in multiple tables deleted -
fortunately there was very little activity after these deletions (mostly more
deletions a few days later).
I have the log files, but the original inserts just have the pointer to the
blob. I tried generating undo scripts with apexsql Log but it doesn't even
reference the blob column when the script generates (restoring from a
detached mdf does give the pointer, but that pointer doesn't go anywhere and
a select on it is blank/null).
How is one supposed to go about restoring records / rolling back a
transaction log for deleting a row with an out-of-row blob? Am I totally
screwed even though nothing else happened in the database aside from the
deletes before the backup was taken?
"Andrew Meinert" <AndrewMeinert@.discussions.microsoft.com> wrote in message
news:5CE119C0-80AE-4A16-A452-799A41971BC5@.microsoft.com...
>I have a database that had a number of rows in multiple tables deleted -
> fortunately there was very little activity after these deletions (mostly
> more
> deletions a few days later).
> I have the log files, but the original inserts just have the pointer to
> the
> blob. I tried generating undo scripts with apexsql Log but it doesn't
> even
> reference the blob column when the script generates (restoring from a
> detached mdf does give the pointer, but that pointer doesn't go anywhere
> and
> a select on it is blank/null).
> How is one supposed to go about restoring records / rolling back a
> transaction log for deleting a row with an out-of-row blob? Am I totally
> screwed even though nothing else happened in the database aside from the
> deletes before the backup was taken?
Use the last full backup and the subsequent log backups to do a
point-in-time restore to a new database. Then do INSERTS from the restored
database to your real database.
David
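A rough sketch of what David describes, with every name, path and STOPAT time purely hypothetical; it assumes you still have a full backup taken before the deletes plus the log backups that follow it:

-- Restore the last good full backup to a new database, leaving it able to accept log backups.
RESTORE DATABASE MyDb_Recovered
FROM DISK = 'D:\Backups\MyDb_Full.bak'
WITH MOVE 'MyDb'     TO 'D:\Data\MyDb_Recovered.mdf',
     MOVE 'MyDb_log' TO 'D:\Data\MyDb_Recovered_log.ldf',
     NORECOVERY

-- Apply the log backup(s), stopping just before the deletes were run.
RESTORE LOG MyDb_Recovered
FROM DISK = 'D:\Backups\MyDb_Log1.trn'
WITH STOPAT = '2012-02-20 09:30:00', RECOVERY

-- Copy the deleted rows (blob column included) back into the live database.
INSERT INTO MyDb.dbo.Documents (DocId, DocName, DocBlob)
SELECT r.DocId, r.DocName, r.DocBlob
FROM MyDb_Recovered.dbo.Documents AS r
WHERE NOT EXISTS (SELECT 1 FROM MyDb.dbo.Documents AS d WHERE d.DocId = r.DocId)

If DocId is an identity column you would also need SET IDENTITY_INSERT ... ON around the INSERT.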
|||"David Browne" wrote:

> "Andrew Meinert" <AndrewMeinert@.discussions.microsoft.com> wrote in message
> news:5CE119C0-80AE-4A16-A452-799A41971BC5@.microsoft.com...
>
> Use the last full backup and the subsequent log backups to do a
> point-in-time restore to a new database. Then do INSERTS from the restored
> database to your real database.
> David
>
Unfortunately the database was created between full backups and the only
remaining full backup is after the fact. There's no way to roll back the
transactions from the ldf?
How are blobs handled in the transaction log? Is it just a limitation of
the log viewer that I'm using that I don't see anything about the content
of the blobs aside from the in-row pointer?
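On the blob question: out-of-row LOB data lives in a separate LOB_DATA allocation unit, which is one reason a row-oriented log reader may only surface the in-row pointer. If you want to look at the raw log records yourself, the undocumented fn_dblog function shows which operations and allocation units were touched; a sketch only, since fn_dblog is unsupported, only reads the online log of the database it is run in, and the table name below is hypothetical:

-- Undocumented, read-only peek at the transaction log records.
SELECT [Current LSN],
       [Operation],
       [Context],
       [AllocUnitName],
       [Transaction ID]
FROM fn_dblog(NULL, NULL)
WHERE [AllocUnitName] LIKE 'dbo.MyBlobTable%'   -- hypothetical table name
  AND [Operation] IN ('LOP_DELETE_ROWS', 'LOP_MODIFY_ROW')
ORDER BY [Current LSN]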
|||"David Browne" wrote:

> "Andrew Meinert" <AndrewMeinert@.discussions.microsoft.com> wrote in message
> news:5CE119C0-80AE-4A16-A452-799A41971BC5@.microsoft.com...
>
> Use the last full backup and the subsequent log backups to do a
> point-in-time restore to a new database. Then do INSERTS from the restored
> database to your real database.
> David
>
So far it looks like
http://www.red-gate.com/products/SQL_Log_Rescue/index.htm will help me. My
frustrations seem to stem from a limitation with the other software I had
tried. I can see the correct data in the demo of log rescue... here's to
hoping.
