(old flatbert on loudhoward hardware)
== Old state ==
flatbert had sda (80 GB) and sdb (250 GB) with software RAID1: md0 (1 GB) for /boot and md1 (approx. 65 GB) for the / (ROOT) filesystem. Since the LXC containers did not fit onto md1, a single-disk ZFS pool was created as container storage (zfsonlinux 0.6.3, spl/zfs kernel modules built from git).
Regular backup snapshots to freenas (freebert) were intended
... but nobody had set them up ...
== What happened ==
After 51 days of uptime (until 15.02.2015), ATA errors appeared and the zpool could no longer be imported.
== Work done so far ==
The 250 GB disk was imaged with ddrescue (onto zaubert).
Errors: 8 areas, approx. 1.5 MB in total
Log ('+' = rescued, '-' = bad sector, '/' = not scraped because of the -n pass):
<source lang="bash">
root@zaubert:/# cat flatbert.log
# Rescue Logfile. Created by GNU ddrescue version 1.19
# Command line: ddrescue -n /dev/sda /flatbert_backup_16.02.2015.img flatbert.log
# Start time: 2015-02-16 01:09:36
# Current time: 2015-02-16 03:54:17
# Finished
# current_pos current_status
0x17E0A4E000 +
# pos size status
0x00000000 0x4D20F000 +
0x4D20F000 0x00000200 -
0x4D20F200 0x00000C00 /
0x4D20FE00 0x00000200 -
0x4D210000 0x0012F000 +
0x4D33F000 0x00000200 -
0x4D33F200 0x00000C00 /
0x4D33FE00 0x00000200 -
0x4D340000 0x038A5000 +
0x50BE5000 0x00000200 -
0x50BE5200 0x006FFE00 +
0x512E5000 0x00000200 -
0x512E5200 0x00000C00 /
0x512E5E00 0x00000200 -
0x512E6000 0x38F252000 +
0x3E0538000 0x00000200 -
0x3E0538200 0x00000C00 /
0x3E0538E00 0x00000200 -
0x3E0539000 0x648203000 +
0xA2873C000 0x00000200 -
0xA2873C200 0x00000C00 /
0xA2873CE00 0x00000200 -
0xA2873D000 0xD866A9000 +
0x17AEDE6000 0x00000200 -
0x17AEDE6200 0x00000C00 /
0x17AEDE6E00 0x00000200 -
0x17AEDE7000 0x31C66000 +
0x17E0A4D000 0x00000200 -
0x17E0A4D200 0x00000C00 /
0x17E0A4DE00 0x00000200 -
0x17E0A4E000 0x22580E0000 +
</source>
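Since the image was taken with -n (skipping the scrape phase), the '/' areas were never read in detail. A follow-up pass over the same mapfile could try to recover them (a sketch, not a command that was actually run):
<source lang="bash">
# second pass: drop -n, use direct disc access (-d) and retry bad sectors
# up to 3 times (-r3); the existing mapfile limits the work to the missing areas
ddrescue -d -r3 /dev/sda /flatbert_backup_16.02.2015.img flatbert.log
</source>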
The image currently resides on zaubert at:
<source lang="bash">
root@zaubert:/ROOT/default# ls -all | grep flatbert
-rw-r--r-- 1 root root 250059350016 Feb 16 03:54 flatbert_backup_16.02.2015.img
-rw-r--r-- 1 root root 1135 Feb 16 03:54 flatbert.log
</source>
== knusbert backup ==
knusbert was taken as the replacement hardware.
A ZFS snapshot backup of its FreeBSD system (for cryptostorage, exolastic, rsyslog, asterisk) lies on freenas:
<source lang="bash">
zroot/BACKUP/knusbert 4.90G 192G 140K /mnt/BACKUP/knusbert
zroot/BACKUP/knusbert-cpool 180G 192G 140K /mnt/BACKUP/knusbert-cpool
zroot/BACKUP/knusbert-cpool/cpool 180G 192G 140K /mnt/BACKUP/knusbert-cpool/cpool
zroot/BACKUP/knusbert-cpool/cpool/BACKUP 86.9G 192G 61.4G /mnt/BACKUP/knusbert-cpool/cpool/BACKUP
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot 25.5G 192G 140K /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail 25.5G 192G 174K /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail-admin 140K 192G 140K /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail-admin
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/avahi.hq.c3d2.de 427M 192G 427M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/avahi.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/basejail 1.14G 192G 1.14G /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/basejail
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/beastbert.hq.c3d2.de 211M 192G 211M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/beastbert.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/bitcoin.hq.c3d2.de 168M 192G 168M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/bitcoin.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/couchdb1.hq.c3d2.de 1.32G 192G 1.32G /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/couchdb1.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/couchdb2.hq.c3d2.de 613M 192G 613M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/couchdb2.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/couchdb3.hq.c3d2.de 608M 192G 608M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/couchdb3.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/dhcp.hq.c3d2.de 941M 192G 941M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/dhcp.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/dilbert.hq.c3d2.de 544M 192G 544M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/dilbert.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/distcc1.hq.c3d2.de 428M 192G 428M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/distcc1.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/distcc2.hq.c3d2.de 428M 192G 428M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/distcc2.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/distcc3.hq.c3d2.de 428M 192G 428M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/distcc3.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/distcc4.hq.c3d2.de 428M 192G 428M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/distcc4.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/dn42.hq.c3d2.de 188M 192G 188M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/dn42.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/dnscache.hq.c3d2.de 209M 192G 209M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/dnscache.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/dnstunnel.hq.c3d2.de 134M 192G 134M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/dnstunnel.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/downpressor.hq.c3d2.de 568M 192G 568M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/downpressor.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/fulljail 140K 192G 140K /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/fulljail
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/gitbert.hq.c3d2.de 185M 192G 185M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/gitbert.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/haproxy.hq.c3d2.de 93.4M 192G 93.4M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/haproxy.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/ipredator.hq.c3d2.de 190M 192G 190M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/ipredator.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/listbert1.hq.c3d2.de 2.95G 192G 2.95G /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/listbert1.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/listbert2.hq.c3d2.de 185M 192G 185M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/listbert2.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/music.hq.c3d2.de 1.47G 192G 1.47G /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/music.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/newjail 2.90M 192G 2.90M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/newjail
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/pentabot.hq.c3d2.de 499M 192G 499M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/pentabot.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/printer.hq.c3d2.de 58.5M 192G 58.5M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/printer.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/privoxy.hq.c3d2.de 419M 192G 419M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/privoxy.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/public-ip.hq.c3d2.de 166M 192G 166M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/public-ip.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/reverseproxy1.hq.c3d2.de 242M 192G 242M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/reverseproxy1.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/reverseproxy2.hq.c3d2.de 224M 192G 224M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/reverseproxy2.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/rippen.hq.c3d2.de 1010M 192G 1010M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/rippen.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/saugbert.hq.c3d2.de 1.70G 192G 1.70G /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/saugbert.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/sniffer.hq.c3d2.de 321M 192G 321M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/sniffer.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/squid.hq.c3d2.de 2.10G 192G 2.10G /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/squid.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/storage.hq.c3d2.de 2.79G 192G 2.71G /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/storage.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/storage.hq.c3d2.de/samba4db 79.8M 192G 79.8M -
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/storage.hq.c3d2.de/zimport 174K 192G 174K /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/storage.hq.c3d2.de/zimport
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/tor.hq.c3d2.de 216M 192G 216M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/tor.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/vert.hq.c3d2.de 358M 192G 358M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/vert.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/watchbert.hq.c3d2.de 1.31G 192G 1.31G /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/watchbert.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/wire.hq.c3d2.de 528M 192G 528M /mnt/BACKUP/knusbert-cpool/cpool/BACKUP/zroot/ezjail/wire.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/ezjail 65.6G 192G 186K /mnt/BACKUP/knusbert-cpool/cpool/ezjail
zroot/BACKUP/knusbert-cpool/cpool/ezjail/asterisk.hq.c3d2.de 1.11G 192G 1.11G /mnt/BACKUP/knusbert-cpool/cpool/ezjail/asterisk.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/ezjail/basejail 1.10G 192G 1.10G /mnt/BACKUP/knusbert-cpool/cpool/ezjail/basejail
zroot/BACKUP/knusbert-cpool/cpool/ezjail/cryptostorage.hq.c3d2.de 58.2G 192G 589M /mnt/BACKUP/knusbert-cpool/cpool/ezjail/cryptostorage.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/ezjail/cryptostorage.hq.c3d2.de/samba4db 15.4M 192G 15.4M -
zroot/BACKUP/knusbert-cpool/cpool/ezjail/cryptostorage.hq.c3d2.de/storage 57.6G 192G 57.6G /mnt/BACKUP/knusbert-cpool/cpool/ezjail/cryptostorage.hq.c3d2.de/storage
zroot/BACKUP/knusbert-cpool/cpool/ezjail/dn42-freeland.hq.c3d2.de 385M 192G 385M /mnt/BACKUP/knusbert-cpool/cpool/ezjail/dn42-freeland.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/ezjail/exolastic.hq.c3d2.de 4.09G 192G 4.09G /mnt/BACKUP/knusbert-cpool/cpool/ezjail/exolastic.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/ezjail/freebot.hq.c3d2.de 526M 192G 526M /mnt/BACKUP/knusbert-cpool/cpool/ezjail/freebot.hq.c3d2.de
zroot/BACKUP/knusbert-cpool/cpool/ezjail/newjail 3.12M 192G 3.12M /mnt/BACKUP/knusbert-cpool/cpool/ezjail/newjail
zroot/BACKUP/knusbert-cpool/cpool/ezjail/syslog.hq.c3d2.de 240M 192G 240M /mnt/BACKUP/knusbert-cpool/cpool/ezjail/syslog.hq.c3d2.de
zroot/BACKUP/knusbert/singlestorage 140K 192G 140K /mnt/BACKUP/knusbert/singlestorage
zroot/BACKUP/knusbert/zroot 4.90G 192G 151K /mnt/BACKUP/knusbert/zroot
zroot/BACKUP/knusbert/zroot/ROOT 1.04G 192G 140K /mnt/BACKUP/knusbert/zroot/ROOT
zroot/BACKUP/knusbert/zroot/ROOT/default 1.04G 192G 1.04G /mnt/BACKUP/knusbert/zroot/ROOT/default
zroot/BACKUP/knusbert/zroot/admin 186K 192G 186K /mnt/BACKUP/knusbert/zroot/admin
zroot/BACKUP/knusbert/zroot/tmp 192K 192G 192K /mnt/BACKUP/knusbert/zroot/tmp
zroot/BACKUP/knusbert/zroot/usr 3.86G 192G 140K /mnt/BACKUP/knusbert/zroot/usr
zroot/BACKUP/knusbert/zroot/usr/home 198K 192G 198K /mnt/BACKUP/knusbert/zroot/usr/home
zroot/BACKUP/knusbert/zroot/usr/obj 1.69G 192G 1.69G /mnt/BACKUP/knusbert/zroot/usr/obj
zroot/BACKUP/knusbert/zroot/usr/ports 931M 192G 931M /mnt/BACKUP/knusbert/zroot/usr/ports
zroot/BACKUP/knusbert/zroot/usr/src 1.26G 192G 1.26G /mnt/BACKUP/knusbert/zroot/usr/src
zroot/BACKUP/knusbert/zroot/var 942K 192G 140K /mnt/BACKUP/knusbert/zroot/var
zroot/BACKUP/knusbert/zroot/var/crash 140K 192G 140K /mnt/BACKUP/knusbert/zroot/var/crash
zroot/BACKUP/knusbert/zroot/var/log 279K 192G 279K /mnt/BACKUP/knusbert/zroot/var/log
zroot/BACKUP/knusbert/zroot/var/mail 151K 192G 151K /mnt/BACKUP/knusbert/zroot/var/mail
zroot/BACKUP/knusbert/zroot/var/tmp 140K 192G 140K /mnt/BACKUP/knusbert/zroot/var/tmp
</source>
== new flatbert ==
knusbert hardware:
sda - IDE (200 GB)
sdb - SATA (1 TB)
sdc - SATA (1 TB)
sdd - SATA (2 TB)
The flatbert image was pushed onto /dev/sdb with netcat; all old kernels were removed (in a chroot environment from a live USB stick), kernel 3.16 (jessie) was installed, and software RAID1 for md0/md1 was created across /dev/sdb and /dev/sdc.
The single-disk ZFS pool partition currently lives on /dev/sdb.
zfsonlinux version 0.6.3-26 (spl 17 / zfs 26) was compiled from source.
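A minimal sketch of how the netcat transfer and the RAID1 creation can look (port, hostname and partition numbers are illustrative assumptions, not the exact invocations used):
<source lang="bash">
# receiver (new flatbert): write the incoming stream to the target disk
nc -l -p 1234 | dd of=/dev/sdb bs=1M
# sender (zaubert): stream the rescued image out
dd if=/flatbert_backup_16.02.2015.img bs=1M | nc flatbert 1234

# RAID1 for boot (md0) and root (md1); partition numbers assumed
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2
</source>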
=== restore phase 1 ===
The zpool appears to have broken:
<source lang="bash">
[root@flatbert:~]# zpool status -v
pool: singlepool
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: http://zfsonlinux.org/msg/ZFS-8000-8A
scan: none requested
config:
NAME STATE READ WRITE CKSUM
singlepool ONLINE 0 0 29
sdb4 ONLINE 0 0 116
errors: Permanent errors have been detected in the following files:
singlepool/rpool/disk6:/var/log/daemon.log
singlepool/rpool/disk6:/var/log/syslog.1
singlepool/rpool/disk12:/var/spool/exim4/input/1YMghG-0001t5-E8-H
singlepool/rpool/disk19:/home/admin/.ghc/x86_64-linux-7.8.4/package.conf.d/package.cache
singlepool/rpool/disk19:<0xf84c>
singlepool/rpool/disk19:<0xf84d>
singlepool/rpool/disk19:/home/admin/.cabal/logs
singlepool/rpool/disk19:/broken_dir
singlepool/rpool/disk24:<0x0>
[root@flatbert:~]#
</source>
Current datasets:
<source lang="bash">
[root@flatbert:~]# zfs list
cannot iterate filesystems: I/O error
NAME USED AVAIL REFER MOUNTPOINT
singlepool 77,4G 78,1G 25K /singlepool
singlepool/rpool 77,4G 78,1G 3,06G /rpool
singlepool/rpool/disk10 1,67G 78,1G 1,61G /rpool/disk10
singlepool/rpool/disk11 341M 78,1G 341M /rpool/disk11
singlepool/rpool/disk13 457M 78,1G 457M /rpool/disk13
singlepool/rpool/disk14 1005M 78,1G 960M /rpool/disk14
singlepool/rpool/disk17 422M 78,1G 422M /rpool/disk17
singlepool/rpool/disk19 1,70G 78,1G 1,16G /rpool/disk19
singlepool/rpool/disk2 1,72G 78,1G 1,67G /rpool/disk2
singlepool/rpool/disk21 1,23G 78,1G 1,23G /rpool/disk21
singlepool/rpool/disk22 148M 78,1G 148M /rpool/disk22
singlepool/rpool/disk23 685M 78,1G 664M /rpool/disk23
singlepool/rpool/disk25 3,20G 78,1G 3,20G /rpool/disk25
singlepool/rpool/disk26 402M 78,1G 331M /rpool/disk26
singlepool/rpool/disk28 57K 78,1G 25K /rpool/disk28
singlepool/rpool/disk29 57K 78,1G 25K /rpool/disk29
singlepool/rpool/disk3 624M 78,1G 368M /rpool/disk3
singlepool/rpool/disk30 57K 78,1G 25K /rpool/disk30
singlepool/rpool/disk4 761M 78,1G 761M /rpool/disk4
singlepool/rpool/disk7 398M 78,1G 364M /rpool/disk7
singlepool/rpool/disk8 1,44G 78,1G 1,18G /rpool/disk8
singlepool/rpool/disk9 1,06G 78,1G 1,05G /rpool/disk9
singlepool/rpool/gitlab 744M 78,1G 741M /rpool/gitlab
singlepool/rpool/gitolite 185M 78,1G 183M /rpool/gitolite
[root@flatbert:~]#
</source>
For now, a backup of the old snapshot state _BACKUP_24.12.2014 was pushed to freenas.
The actual LXC mount structure:
<source lang="bash">
[root@flatbert:/lxc-container]# ls -al
insgesamt 8
drwxr-xr-x 2 root root 4096 Okt 6 14:50 .
drwxr-xr-x 37 root root 4096 Feb 16 08:18 ..
lrwxrwxrwx 1 root root 12 Mär 31 2014 astrom -> /rpool/disk1
lrwxrwxrwx 1 root root 12 Mär 31 2014 astron -> /rpool/disk2
lrwxrwxrwx 1 root root 12 Mär 31 2014 blackhole -> /rpool/disk3
lrwxrwxrwx 1 root root 12 Mär 31 2014 cloudybay -> /rpool/disk4
lrwxrwxrwx 1 root root 12 Mär 31 2014 debcache -> /rpool/disk5
lrwxrwxrwx 1 root root 12 Mär 31 2014 dhcp -> /rpool/disk7
lrwxrwxrwx 1 root root 14 Jun 3 2014 distcc5 -> /rpool/distcc5
lrwxrwxrwx 1 root root 14 Jun 7 2014 distcc6 -> /rpool/distcc6
lrwxrwxrwx 1 root root 12 Mär 31 2014 dn42 -> /rpool/disk6
lrwxrwxrwx 1 root root 12 Mär 31 2014 drucker -> /rpool/disk8
lrwxrwxrwx 1 root root 12 Mär 31 2014 feile -> /rpool/disk9
lrwxrwxrwx 1 root root 13 Mär 31 2014 fernandopoo -> /rpool/disk10
lrwxrwxrwx 1 root root 13 Mär 31 2014 flatbert-extra-backups -> /rpool/disk11
lrwxrwxrwx 1 root root 13 Mär 31 2014 git -> /rpool/disk12
lrwxrwxrwx 1 root root 13 Okt 6 14:44 gitlab -> /rpool/gitlab
lrwxrwxrwx 1 root root 15 Okt 6 14:44 gitolite -> /rpool/gitolite
lrwxrwxrwx 1 root root 13 Mär 31 2014 global -> /rpool/disk13
lrwxrwxrwx 1 root root 13 Mär 31 2014 jabber1 -> /rpool/disk14
lrwxrwxrwx 1 root root 13 Mär 31 2014 jabber2 -> /rpool/disk15
lrwxrwxrwx 1 root root 13 Mär 31 2014 knot -> /rpool/disk16
lrwxrwxrwx 1 root root 13 Mär 31 2014 leviathan -> /rpool/disk17
lrwxrwxrwx 1 root root 13 Mär 31 2014 lxc-cache -> /rpool/disk18
lrwxrwxrwx 1 root root 13 Mär 31 2014 matemat -> /rpool/disk19
lrwxrwxrwx 1 root root 13 Mär 31 2014 semanta -> /rpool/disk20
lrwxrwxrwx 1 root root 14 Okt 1 00:18 sharing -> /rpool/sharing
lrwxrwxrwx 1 root root 13 Okt 1 00:18 sharing.old -> /rpool/disk21
lrwxrwxrwx 1 root root 13 Mär 31 2014 thron -> /rpool/disk23
lrwxrwxrwx 1 root root 13 Mär 31 2014 thron2 -> /rpool/disk22
lrwxrwxrwx 1 root root 13 Mär 31 2014 wiefelspuetz -> /rpool/disk24
lrwxrwxrwx 1 root root 13 Mär 31 2014 wolke7 -> /rpool/disk25
lrwxrwxrwx 1 root root 13 Mär 31 2014 wormhole -> /rpool/disk26
lrwxrwxrwx 1 root root 13 Mär 31 2014 www1 -> /rpool/disk27
</source>
=== restore phase 2 ===
The LXC symlink targets compared against the datasets that zfs list still shows (### FEHLT !!! ### marks missing datasets):
<source lang="bash">
astrom -> /rpool/disk1 ### FEHLT !!! ###
astron -> /rpool/disk2
blackhole -> /rpool/disk3
cloudybay -> /rpool/disk4
debcache -> /rpool/disk5 ### FEHLT !!! ###
dhcp -> /rpool/disk7
distcc5 -> /rpool/distcc5
distcc6 -> /rpool/distcc6
dn42 -> /rpool/disk6 ### FEHLT !!! ###
drucker -> /rpool/disk8
feile -> /rpool/disk9
fernandopoo -> /rpool/disk10
flatbert-extra-backups -> /rpool/disk11
git -> /rpool/disk12 ### FEHLT !!! ###
gitlab -> /rpool/gitlab
gitolite -> /rpool/gitolite
global -> /rpool/disk13
jabber1 -> /rpool/disk14
jabber2 -> /rpool/disk15 ### FEHLT !!! ###
knot -> /rpool/disk16 ### FEHLT !!! ###
leviathan -> /rpool/disk17
lxc-cache -> /rpool/disk18 ### FEHLT !!! ###
matemat -> /rpool/disk19
semanta -> /rpool/disk20 ### FEHLT !!! ###
sharing -> /rpool/sharing
sharing.old -> /rpool/disk21
thron -> /rpool/disk23
thron2 -> /rpool/disk22
wiefelspuetz -> /rpool/disk24 ### FEHLT !!! ###
wolke7 -> /rpool/disk25
wormhole -> /rpool/disk26
www1 -> /rpool/disk27 ### FEHLT !!! ###
</source>
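The list above can be reproduced mechanically; a sketch (assuming the disk1..disk30 naming seen in the zpool history):
<source lang="bash">
# print every expected dataset that zfs can no longer see
for n in $(seq 1 30); do
  zfs list "singlepool/rpool/disk$n" >/dev/null 2>&1 || echo "disk$n FEHLT"
done
</source>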
... work in progress ...
==== zfs debug ====
Oracle documentation on repairing damaged ZFS data: http://docs.oracle.com/cd/E18752_01/html/819-5461/gbbwl.html
<source lang="bash">
[root@flatbert:~]# zdb
singlepool:
version: 5000
name: 'singlepool'
state: 0
txg: 2268294
pool_guid: 11752217384613672320
errata: 0
hostid: 380373859
hostname: 'flatbert'
vdev_children: 1
vdev_tree:
type: 'root'
id: 0
guid: 11752217384613672320
children[0]:
type: 'disk'
id: 0
guid: 311963733056151185
path: '/dev/sdb4'
whole_disk: 0
metaslab_array: 34
metaslab_shift: 30
ashift: 9
asize: 169995141120
is_log: 0
create_txg: 4
features_for_read:
com.delphix:hole_birth
com.delphix:embedded_data
</source>
<source lang="bash">
[root@flatbert:~]# zpool history
History for 'singlepool':
2014-10-06.12:22:26 zpool create singlepool /dev/sda4
2014-10-06.12:23:15 zfs set checksum=fletcher4 singlepool
2014-10-06.12:23:24 zfs set compression=lz4 singlepool
2014-10-06.12:24:16 zfs create -o checksum=fletcher4 -o compression=lz4 -o mountpoint=/singlepool/rpool singlepool/rpool
2014-10-06.12:24:56 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk1
2014-10-06.12:24:57 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk2
2014-10-06.12:24:58 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk3
2014-10-06.12:24:59 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk4
2014-10-06.12:25:00 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk5
2014-10-06.12:25:04 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk6
2014-10-06.12:25:06 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk7
2014-10-06.12:25:07 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk8
2014-10-06.12:25:10 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk9
2014-10-06.12:25:11 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk10
2014-10-06.12:25:13 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk11
2014-10-06.12:25:14 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk12
2014-10-06.12:25:15 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk13
2014-10-06.12:25:17 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk14
2014-10-06.12:25:18 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk15
2014-10-06.12:25:19 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk16
2014-10-06.12:25:20 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk17
2014-10-06.12:25:22 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk18
2014-10-06.12:25:24 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk19
2014-10-06.12:25:25 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk20
2014-10-06.12:25:26 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk21
2014-10-06.12:25:27 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk22
2014-10-06.12:25:28 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk23
2014-10-06.12:25:30 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk24
2014-10-06.12:25:31 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk25
2014-10-06.12:25:32 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk26
2014-10-06.12:25:33 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk27
2014-10-06.12:25:35 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk28
2014-10-06.12:25:37 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk29
2014-10-06.12:25:43 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/disk30
2014-10-06.12:26:45 zfs snapshot -r singlepool@_0000_clean_06.10.2014
2014-10-06.14:34:26 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/gitlab
2014-10-06.14:34:48 zfs create -o checksum=fletcher4 -o compression=lz4 singlepool/rpool/gitolite
2014-10-06.14:47:10 zfs snapshot -r singlepool@_0001_RECOVERY_06.10.2014
2014-10-06.14:49:27 zfs set mountpoint=/rpool singlepool/rpool
2014-10-06.15:01:18 zpool export singlepool
2014-10-12.15:19:56 zfs snapshot -r singlepool@_0002_RUN_12.10.2014
2014-12-24.12:17:03 zfs snapshot -r singlepool@_BACKUP_24.12.2014
2014-12-26.04:26:21 zfs snapshot -r singlepool@_RUN_26.12.2014
</source>
A huge amount is missing from the zpool history; the last entry dates from 26.12.2014.
<source lang="bash">
[root@flatbert:~]# zpool status -v singlepool
pool: singlepool
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: http://zfsonlinux.org/msg/ZFS-8000-8A
scan: none requested
config:
NAME STATE READ WRITE CKSUM
singlepool ONLINE 0 0 40
sdb4 ONLINE 0 0 160
errors: Permanent errors have been detected in the following files:
singlepool/rpool/disk6:/var/log/daemon.log
singlepool/rpool/disk6:/var/log/syslog.1
singlepool/rpool/disk12:/var/spool/exim4/input/1YMghG-0001t5-E8-H
singlepool/rpool/disk19:/home/admin/.ghc/x86_64-linux-7.8.4/package.conf.d/package.cache
singlepool/rpool/disk19:<0xf84c>
singlepool/rpool/disk19:<0xf84d>
singlepool/rpool/disk19:/home/admin/.cabal/logs
singlepool/rpool/disk19:/broken_dir
singlepool/rpool/disk24:<0x0>
</source>
The pool metadata is not defective, and the corrupted datasets themselves are not listed !!!
=== Deleting a Corrupted File or Directory ===
zfs mount singlepool/rpool/disk6 ... worked!
<source lang="bash">
zpool scrub singlepool
[root@flatbert:~]# zpool status -v
pool: singlepool
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: http://zfsonlinux.org/msg/ZFS-8000-8A
scan: scrub repaired 16K in 0h25m with 42 errors on Mon Feb 16 20:05:10 2015
config:
NAME STATE READ WRITE CKSUM
singlepool ONLINE 0 0 204
sdb4 ONLINE 0 0 806
errors: Permanent errors have been detected in the following files:
singlepool/rpool/disk6:<0x7391>
/rpool/disk6/var/log/syslog.1
singlepool/rpool/disk12:/var/spool/exim4/msglog/1YMghG-0001t5-E8
singlepool/rpool/disk12:/var/spool/exim4/input/1YMghG-0001t5-E8-H
singlepool/rpool/disk19:<0x0>
singlepool/rpool/disk19:/home/admin/.ghc/x86_64-linux-7.8.4/package.conf.d/package.cache
singlepool/rpool/disk19:<0xe11f>
singlepool/rpool/disk19:<0xe135>
singlepool/rpool/disk19:<0xf84c>
singlepool/rpool/disk19:/home/admin/.cabal/logs
singlepool/rpool/disk19:<0xe1a7>
singlepool/rpool/disk19:<0xe1a8>
singlepool/rpool/disk19:/broken_dir
singlepool/rpool/disk24:<0x0>
[root@flatbert:~]#
</source>
For example:
<source lang="bash">
zfs mount singlepool/rpool/disk19    # mount the affected dataset
rm -rfv /rpool/disk19/broken_dir     # delete the corrupted directory
zfs umount -a
zpool scrub singlepool               # re-check the pool afterwards
</source>
=== manual remote snapshot backup of the working datasets ===
<source lang="bash">
# send every dataset's _BACKUP_24.12.2014 snapshot to freenas (172.22.99.10),
# stripping the snapshot suffix to build the target dataset name
for i in $(zfs list -t snapshot | grep "_BACKUP_24.12.2014" | awk '{print $1}'); do
    zfs send $i | ssh root@172.22.99.10 zfs recv zroot/BACKUP/flatbert-old/$(echo $i | sed 's/@_BACKUP_24.12.2014//g')
done
</source>
=== manual remote snapshot backup of the invisible datasets ===
<source lang="bash">
zfs send singlepool/rpool/disk1@_BACKUP_24.12.2014 | ssh root@172.22.99.10 zfs recv zroot/BACKUP/flatbert-old/singlepool/rpool/disk1
zfs send singlepool/rpool/disk5@_BACKUP_24.12.2014 | ssh root@172.22.99.10 zfs recv zroot/BACKUP/flatbert-old/singlepool/rpool/disk5
zfs send singlepool/rpool/disk6@_BACKUP_24.12.2014 | ssh root@172.22.99.10 zfs recv zroot/BACKUP/flatbert-old/singlepool/rpool/disk6
zfs send singlepool/rpool/disk12@_BACKUP_24.12.2014 | ssh root@172.22.99.10 zfs recv zroot/BACKUP/flatbert-old/singlepool/rpool/disk12
zfs send singlepool/rpool/disk15@_BACKUP_24.12.2014 | ssh root@172.22.99.10 zfs recv zroot/BACKUP/flatbert-old/singlepool/rpool/disk15
zfs send singlepool/rpool/disk16@_BACKUP_24.12.2014 | ssh root@172.22.99.10 zfs recv zroot/BACKUP/flatbert-old/singlepool/rpool/disk16
zfs send singlepool/rpool/disk18@_BACKUP_24.12.2014 | ssh root@172.22.99.10 zfs recv zroot/BACKUP/flatbert-old/singlepool/rpool/disk18
zfs send singlepool/rpool/disk20@_BACKUP_24.12.2014 | ssh root@172.22.99.10 zfs recv zroot/BACKUP/flatbert-old/singlepool/rpool/disk20
zfs send singlepool/rpool/disk24@_BACKUP_24.12.2014 | ssh root@172.22.99.10 zfs recv zroot/BACKUP/flatbert-old/singlepool/rpool/disk24
zfs send singlepool/rpool/disk27@_BACKUP_24.12.2014 | ssh root@172.22.99.10 zfs recv zroot/BACKUP/flatbert-old/singlepool/rpool/disk27
</source>
disk24 could not be sent:
<source lang="bash">
cannot open 'singlepool/rpool/disk24': I/O error
</source>
=== saving the current snapshots ===
<source lang="bash">
zpool clear -F singlepool
zfs snapshot singlepool/rpool/disk1@_BROKEN_16.02.2015
... up to ...
zfs snapshot singlepool/rpool/disk30@_BROKEN_16.02.2015
zfs send singlepool/rpool/disk1@_BROKEN_16.02.2015 | ssh root@172.22.99.10 zfs recv zroot/BACKUP/flatbert-broken/disk1
</source>
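The per-dataset snapshot and send commands can be wrapped in a loop; a sketch (assuming the disk1..disk30 naming, with gitlab/gitolite/sharing handled the same way):
<source lang="bash">
# snapshot the broken state of every numbered dataset and push it to freenas
for n in $(seq 1 30); do
  zfs snapshot singlepool/rpool/disk$n@_BROKEN_16.02.2015
  zfs send singlepool/rpool/disk$n@_BROKEN_16.02.2015 | \
      ssh root@172.22.99.10 zfs recv zroot/BACKUP/flatbert-broken/disk$n
done
</source>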
== ZFS BACKUP hard disk (sdd) ==
<source lang="bash">
parted /dev/sdd
# interactive input:
mklabel    # new partition table
gpt        #   label type: gpt
mkpart     # new partition
backup     #   name: backup
zfs        #   filesystem type: zfs
0%         #   start
50%        #   end (first half of the 2 TB disk)
</source>
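The same partitioning non-interactively (a sketch equivalent to the dialog above):
<source lang="bash">
parted -s /dev/sdd mklabel gpt
parted -s /dev/sdd mkpart backup zfs 0% 50%
</source>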