-
like the tips say:
Just set the canmount option to noauto for those datasets and reboot. Then they won't mount and it will be fine.
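Something along these lines should do it; I'm assuming the received datasets sit under a backup pool/path of your own, so the names below are only placeholders:
zfs set canmount=noauto backuppool/backups/somedataset
# canmount is not inherited, so for a whole received tree set it per dataset:
zfs list -r -H -o name backuppool/backups | xargs -n 1 zfs set canmount=noauto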
-
It's more about "general ZFS knowledge" (mostly regarding transferring datasets between systems), and the right choice of options greatly depends on the setup and what you want to achieve.
That being said, if you have explicit mountpoints (i.e. not inherited from the parent dataset) on the datasets to back up, you might want to clear the mountpoints on the receiving side. On the other hand, if you want to preserve the mountpoint property on the backed-up datasets, you should probably use other means to prevent the datasets from being mounted automatically on the target host, and the ideal way to do this again depends on your general setup. For example, if the target host does not use ZFS for anything but holding the backups, disabling ZFS automount globally on boot could be an option. Another option could be using the canmount property, which can be set on the received datasets.
Another example is the kind of setup that I have: none of the datasets to back up have explicit mountpoints, they all inherit the mountpoint from their respective parent dataset, and on the backup target host the parent dataset has no mountpoint, so the backed-up datasets inherit the "no mountpoint" and thus I don't need to clear the mountpoint or set the canmount property.
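To make that concrete, here is a rough sketch using the wiki-style example names (pve01 / offsite1 / data/backup/pve01); I'm assuming your zfs-autobackup version offers the --filter-properties and --clear-mountpoint options, so check --help before relying on this:
# drop the mountpoint property entirely when receiving:
zfs-autobackup -v --filter-properties mountpoint --ssh-source pve01 offsite1 data/backup/pve01
# or keep the mountpoint but stop the received datasets from mounting automatically:
zfs-autobackup -v --clear-mountpoint --ssh-source pve01 offsite1 data/backup/pve01
# and for my kind of setup: the parent dataset on the target has no mountpoint,
# so the received children inherit "none" and never get mounted:
zfs set mountpoint=none data/backup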
-
I seem to be having more trouble than the average bear getting the hang of how to use zfs_autobackup.
My test setup has once again become unusable. It consists of 2 VBox VMs and two real hardware hosts that act as the --ssh-source hosts.
So briefly, I followed the steps laid out in the wiki:
First, mark the ZFS filesystems to be backed up (replicated, I guess it's called).
In the wiki it's like this:
zfs set autobackup:offsite1=true rpool
In my test example it was a little more ambitious and named differently.
So on the ssh-source host:
zfs set autobackup:snr=true rpool p0 p1 bpool
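For what it's worth, a quick way to see which datasets actually picked up the property (child datasets inherit it) should be something like this on the source host:
zfs get -r autobackup:snr rpool p0 p1 bpool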
Followed by the step where you pull the backup.
From the wiki:
zfs-autobackup -v --ssh-source pve01 offsite1 data/backup/pve01
My test setup: the command below is being run from the VBox VM in the role of backup server.
(The OS on the backup server is OpenIndiana, a Solaris-like offshoot of OpenSolaris.)
zfs-autobackup -v --ssh-source 2x snr p2/42/za/2x
This all went off well and collected a thorough starter backup.
The trouble starts when, 2 days later, I get around to rebooting the backup server.
The reboot hits some snags and offers a root login to have a shot at it.
The major complaint seems to be:
"Mounting zfs filesystems: svc:/system/filesysystem/local:default: WARNING: /usr/sbin/zfs mount -a failed:
Exit status 1
You can see in the upper part a list of `cannot mount ... directory is not empty' errors. All the mountpoints are already taken somehow by the replicated ZFS filesystems. How do you forestall something like this happening? I don't recall seeing anything in the wiki docs about changing mountpoints, and of course all the replicated filesystems do have mountpoints.
I can still log in to the VM, so please let me know what to run to gather information you can use.
I've generated a listing with zfs list -r:
zfslist-r.txt
Everything under pool "p2" is a replicated FS, and you can see by the mountpoints that the replicated stuff is sitting where OS filesystems should be.
I can get it all back, I think, but I need to know where I went so badly wrong.
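From reading around, I'm guessing the repair and the prevention look roughly like this; I haven't tried it yet, and I'm not sure my zfs-autobackup version has --clear-mountpoint, so treat it as a sketch:
# see which received datasets are claiming mountpoints that belong to the OS:
zfs get -r -o name,property,value mountpoint,canmount p2/42/za/2x
# stop them from auto-mounting (canmount is not inherited, so set it per dataset):
zfs list -r -H -o name p2/42/za/2x | xargs -n 1 zfs set canmount=noauto
# and for future runs, ask zfs-autobackup not to let received datasets mount:
zfs-autobackup -v --clear-mountpoint --ssh-source 2x snr p2/42/za/2x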