Sorry about what appears to be a repost. I wrote a new post with a different subject and different text, but somehow the board submitted a previous message. Please ignore the repost... not what I tried to do.
-
I suspect much of the problem I'm seeing is my own doing, due to lack of due diligence (reading the docs).
BUT, I've completely trashed two VMs. I mean I took them to the curb while experimenting with zfs-autobackup.
Apparently zfs-autobackup scatters ZFS holds throughout a pull with nary a word. When the backup VM is rebooted for any reason, swaths of the ZFS filesystems apparently become unmountable, leaving the VM non-bootable. At first I had no idea what was going on, and my own pestilential fiddling broke the two VMs irrevocably.
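For anyone hitting the same wall: a quick way to see what has been pinned is to list the holds on every snapshot. A minimal sketch, assuming a pool named p0 (the name used later in this post) and standard illumos/OpenZFS `zfs` tooling:

```sh
# List the holds on every snapshot under the p0 pool.
# Snapshots with no holds produce no output rows.
zfs list -H -t snapshot -o name -r p0 | xargs zfs holds
```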
Currently I'm running two OpenIndiana VirtualBox VMs, on which I have installed pip and zfs-autobackup.
A side issue may be that the OpenIndiana crew have decided to pull away from Python and are phasing it out.
It ships an older version ... something before 2.7.
Even with that, I think pip would pull down a nicely up-to-date `zfs_autobackup`.
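For what it's worth, if pip itself works, upgrading should be a one-liner. A sketch, assuming the PyPI package name is `zfs-autobackup` (the module installs as `zfs_autobackup`):

```sh
# Install or upgrade zfs-autobackup from PyPI for the current user.
pip install --user --upgrade zfs-autobackup
```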
Back to the VMs. I see quite a list of holds. Maybe not as many as I thought, but after 2 runs of a pull backup on only a fewish number of ZFS filesystems, I find 12 holds in there.
I set up the pull so the incoming filesystems would land under the backup server's p0 pool, so p0/ssh-client-server/ and a few more layered on top in a few cases.
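For context, the pull looks roughly like the sketch below. It is not my exact command line: the backup-name `offsite1` and the source host `sourcehost` are placeholders, and only the p0/ssh-client-server target path comes from this post; check `zfs-autobackup --help` for your installed version.

```sh
# Run on the backup VM: pull datasets flagged with the
# autobackup:offsite1 property from sourcehost into p0/ssh-client-server.
zfs-autobackup --ssh-source sourcehost offsite1 p0/ssh-client-server
```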
I can find and remove the holds now that I see what is happening, but isn't there a nifty way to get them all at once, especially since zfs_autobackup so gallantly and liberally throws them in?
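In case it helps anyone else, here is the brute-force route I'd try: walk every snapshot and release every hold on it. A minimal sketch, assuming POSIX sh, the p0 pool from above, and `zfs holds` output of a header line followed by NAME/TAG/TIMESTAMP rows; test it on something disposable first, since releasing a hold lets that snapshot be destroyed.

```sh
#!/bin/sh
# Release every hold on every snapshot under the p0 pool.
zfs list -H -t snapshot -o name -r p0 | while read -r snap; do
    # Drop the header line, then read "NAME TAG TIMESTAMP" rows.
    zfs holds "$snap" | sed '1d' | while read -r name tag _; do
        echo "releasing hold '$tag' on $name"
        zfs release "$tag" "$name"
    done
done
```

If I'm reading the zfs-autobackup docs right, there is also a `--no-holds` option that tells it not to place holds in the first place, though that trades away the protection against a still-needed snapshot being destroyed mid-rotation; double-check the flag against your installed version.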