There are 3 different pools on NetWorker 7.2 (spume):
Level1
default (incr)
full
There are 20 tapes and 2 drives. One drive is broken.
15 tapes are dedicated to incrementals; they are recycled and only replaced once a year.
4 tapes are loaded for the Level1 pool (8 total); they get rotated weekly(?).
1 tape is for full backups. (For some reason NetWorker dumps some things to Full.)
Rey trades tapes for us and we trade the incr tapes to him.
We have about 2.5 terabytes of data right now, and it is growing.
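For a rough sense of how many tapes one full needs: the scanner output below identifies the media as sdlt320, which I'm assuming holds 160 GB native per tape (more fits with hardware compression), so 2.5 TB is at least:

```shell
# Back-of-the-envelope tape count for a 2.5 TB full, assuming 160 GB
# native per SDLT320 tape (hardware compression would reduce this):
echo $(( (2500 + 159) / 160 ))   # GB rounded up -> 16 tapes
```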
To run the GUI:
ssh -Y admin@spume (use an X11-capable terminal), then run /usr/bin/nsr/nwadmin &
Stuart will change the permissions on NetWorker; as of now we can't make changes ourselves.
There is a volume script for listing the current volumes:
/tools/admin/scripts/jb-info (shows all tapes in box)
/tools/admin/scripts/jb-info Level1|Full|Default
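To check all three pools in one pass, a small loop works; this sketch only prints the per-pool commands (the jb-info script exists only on spume):

```shell
# Print the per-pool jb-info invocations; swap echo for the real call
# when running this on spume.
for pool in Default Level1 Full; do
    echo "/tools/admin/scripts/jb-info $pool"
done
```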
Websites for the schedule, recovery notes, and indexes:
http://gssg/gssg/backups/spume-full-schedule.html
http://gssg.stanford.edu/gssg/backups/legatorecover.html
http://gssg.stanford.edu/gssg/backups/indexes/spume/
User and password are not fully known:
gssg - nv2************
To restore:
cd /to/the/dir
/usr/bin/nsr/recover -s spume
(or /usr/bin/recover -s spume, whichever path exists on the host)
ls
## if the file is there, then just do ##
add file
>recover
If you need to scan the tape in, then you need the save set ID and run:
>/usr/sbin/nsr/scanner -i -S 1917061200 -c malt /dev/rmt/1cbn
> where:
> -i rebuilds the indexes
> -S designates the save set ID (ssid) of the data we want indexed
> -c designates the client hostname
>
> You can get the SSID from the same web page that tells you which filesystem is backed up on which volume.
> In the case of 20110101SpumeFull06, you'll find the following line:
>
> malt 01/29/11 04:30:08 30 GB 1917061200 h full /data
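If you want to pull the ssid out of that schedule line programmatically, awk will do; the field position (6th) is assumed from the sample line above:

```shell
# Extract the ssid (6th whitespace-separated field in the sample line):
line='malt 01/29/11 04:30:08 30 GB 1917061200 h full /data'
ssid=$(echo "$line" | awk '{print $6}')
echo "$ssid"   # -> 1917061200
```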
### EXAMPLE #### for conidium /data id number 868154195
bash-2.05# /usr/sbin/nsr/scanner -i -S 868154195 -c conidium /dev/rmt/0cbn
scanner: scanning sdlt320 tape 20110401SpumeFull11 on /dev/rmt/0cbn
scanner: sdlt320 tape 20110401SpumeFull11 already exists in the media index
scanner: ssid 817822550: SYNCHRONIZED at 3193 MB, 131833 file(s)
scanner: ssid 784268392: SYNCHRONIZED at 2676 MB, 10612 file(s)
scanner: ssid 868154195: SYNCHRONIZED at 3519 MB, 29133 file(s)
client name save set save time level size files ssid S
conidium /data 5/02/11 11:38 f 3373185984 27885 868154195 S
conidium / 5/02/11 11:43 f 3254179764 130968 817822550 S
conidium /db 5/02/11 11:48 f 2687061852 10438 784268392 S
## when the first tape is done -- do this with defaults ####
scanner: when next volume is ready, enter device name (or `q' to quit) [/dev/rmt/0cbn]?
scanner: starting file number (or `q' to quit) [2]?
scanner: starting record number (or `q' to quit) [0]?
scanner: continuing scan with tape on device `/dev/rmt/0cbn'
scanner: scanning sdlt320 tape 20110401SpumeFull11 on /dev/rmt/0cbn
scanner: sdlt320 tape 20110401SpumeFull11 already exists in the media index
scanner: ssid 817822550: SYNCHRONIZED at 3192 MB, 131828 file(s)
scanner: ssid 784268392: SYNCHRONIZED at 2653 MB, 10608 file(s)
scanner: ssid 868154195: SYNCHRONIZED at 3473 MB, 29128 file(s)
Date File to restore File to create with correct timestamp
2010-03-06 /data/share/ftp/pub/yeast/data_download/chromosomal_feature/saccharomyces_cerevisiae.gff /data/share/ftp/pub/yeast/data_download/chromosomal_feature/archive/saccharomyces_cerevisiae.gff.20100306.gz
2010-04-03 /data/share/ftp/pub/yeast/data_download/chromosomal_feature/saccharomyces_cerevisiae.gff /data/share/ftp/pub/yeast/data_download/chromosomal_feature/archive/saccharomyces_cerevisiae.gff.20100403.gz
2010-05-01 /data/share/ftp/pub/yeast/data_download/chromosomal_feature/saccharomyces_cerevisiae.gff /data/share/ftp/pub/yeast/data_download/chromosomal_feature/archive/saccharomyces_cerevisiae.gff.20100501.gz
2010-06-05 /data/share/ftp/pub/yeast/data_download/chromosomal_feature/saccharomyces_cerevisiae.gff /data/share/ftp/pub/yeast/data_download/chromosomal_feature/archive/saccharomyces_cerevisiae.gff.20100605.gz
2010-07-03 /data/share/ftp/pub/yeast/data_download/chromosomal_feature/saccharomyces_cerevisiae.gff /data/share/ftp/pub/yeast/data_download/chromosomal_feature/archive/saccharomyces_cerevisiae.gff.20100703.gz
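The archive filenames in the table all follow one pattern (original path, plus an archive/ subdirectory and a datestamp suffix), so they can be generated instead of typed; a sketch:

```shell
# Build the archive name for a given backup date, following the
# pattern in the table above:
d=2010-03-06
stamp=${d//-/}    # strip dashes -> 20100306
src=/data/share/ftp/pub/yeast/data_download/chromosomal_feature/saccharomyces_cerevisiae.gff
echo "${src%/*}/archive/${src##*/}.${stamp}.gz"
```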
<-- for fulls -->
tapes --> load ---> label ---> customize ---> schedule
Replace tapes 1-5 on the left side. Let them run over the weekend.
Check Monday whether a new tape is needed.
Before you take out 20110701SpumeFull19, we want to set this volume to read-only within NetWorker so it doesn't think this volume is available for writing if we should run out of usable tapes in the Full pool in the future. To do this, from within nwadmin, click on the Volumes button and select 20110701SpumeFull19. Select Volume -> Change Mode -> Read Only.

I hadn't mentioned this before, but when you take tapes out of the library for storage, we generally write protect these tapes. To write protect them, find the tab next to the label and slide it so that the orange bar is showing. When the bar is visible, that means the tape is write-protected.

Finally, to label a new Full tape with the 20110801 prefix, go back into nwadmin. Select Customize -> Label Templates. Inside the "Label Templates:" pane of the window, select "Full". Go down in the window to the "Fields:" section. There are three "fields" that make up the name of the tape volume -- select the one that says "20110701". (It may already be selected by default.) This will make "20110701" show up in the editable portion of the "Fields:" section. Change "20110701" to "20110801" and click the "change" button. This will change the default template for the Full pool when you label new tapes. Toward the bottom, if there is text in the "Next:" field, delete whatever is in that box. Click "Apply" at the bottom of the window. The "Next:" field should now read "20110801SpumeFull01".
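The new prefix is just the first of the next month, so it can be computed rather than worked out by hand; a sketch, assuming bash and GNU date are available:

```shell
# Compute next month's label prefix (YYYYMM01) from the current one:
current=20110701
next=$(date -d "${current:0:4}-${current:4:2}-01 +1 month" +%Y%m01)
echo "${next}SpumeFull01"   # -> 20110801SpumeFull01
```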
#######################
To add a client to backups:
1. Add group. 2. Add schedule. 3. Add client. 4. Add the client to the Full and Level1 backup pools.
All under Customize!! except for pools, which is under Media -> Pools.
Then add the client license.
Go to starter:/data/kickstart/networker.
scp lgtoclnt-7.2-1.i686.rpm to the client and run rpm -i lgtoclnt-7.2-1.i686.rpm.
Then add the client to the database:
bash-3.2# /sbin/chkconfig --add networker
bash-3.2# /sbin/chkconfig networker on
bash-3.2# /etc/init.d/networker start
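After starting the service it's worth confirming that nsrexecd (the NetWorker client listener) actually came up; a quick check:

```shell
# Report whether the NetWorker client daemon is running:
if pgrep -x nsrexecd >/dev/null; then
    echo "nsrexecd running"
else
    echo "nsrexecd NOT running"
fi
```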
#for RHEL6:
Here are the results from my notes about installing the Legato client on a RHEL6 machine...
* Get the ncurses-libs-5.7-3.20090208.el6.i686.rpm file from starter:/share/kickstart/rpms.rhel61 and install using "rpm -i".
Run the following yum commands:
sudo yum install glibc.i686
sudo yum install libICE.i686
sudo yum install libSM.i686
sudo yum install libX11.i686
sudo yum install libXext.i686
sudo yum install libXt.i686
You may be able to combine the yum commands...
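Combining them is just a matter of listing the packages in one transaction; this sketch builds and prints the combined command (echoed rather than executed, since yum needs root and a RHEL box):

```shell
# Build the single combined yum command from the package list above:
cmd="sudo yum install"
for p in glibc libICE libSM libX11 libXext libXt; do
    cmd="$cmd $p.i686"
done
echo "$cmd"
```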
Once you install those, I think you should be able to install the
Legato client RPM. Let me know if you have any problems with that...
bash-3.2# rpm -i lgtoclnt-7.2-1.i686.rpm
Directory /nsr, does not exist.
Creating directory /nsr.
nsr-izing system files
nsr-izing system files
Initializing //nsr/res/servers
Completing Installation
NetWorker successfully installed on `hypha-new.Stanford.EDU'!
bash-3.2# ./networker
usage: networker {start|stop}
bash-3.2# ./networker start
bash-3.2# ps -ef | grep nsr
root 29236 1 0 15:59 ? 00:00:00 /usr/sbin/nsrexecd
root 29238 29236 0 15:59 ? 00:00:00 /usr/sbin/nsrexecd
root 29240 4734 0 15:59 pts/7 00:00:00 grep nsr
## TO FREE UP A LICENSE FOR A CLIENT
In nwadmin, go to Clients -> Client Setup..., select the client, and click Delete.
This should work.