Fails to recv with ZoL version 6.5.4 when dataset doesn't already exist. #2

Closed

redog opened this issue Apr 27, 2017 · 8 comments

redog commented Apr 27, 2017

#zfs_autobackup --ssh-source kvm1 --clear-mountpoint kvm1 data --debug --strip-path 1

# ssh 'kvm1' 'zfs' 'get' '-t' 'volume,filesystem' '-o' 'name,value,source' '-s' 'local,inherited' '-H' 'autobackup:kvm1'
# ssh 'kvm1' 'zfs' 'snapshot' 'datastore/Spice_C@kvm1-20170426151419' 'datastore/Spice_ContentLibrary@kvm1-20170426151419' 'datastore/Spice_ProgSys@kvm1-20170426151419'
# ssh 'kvm1' 'zfs' 'list' '-d' '1' '-r' '-t' 'snapshot' '-H' '-o' 'name' 'datastore/Spice_C' 'datastore/Spice_ContentLibrary' 'datastore/Spice_ProgSys'
Source snapshots: {'datastore/Spice_C': ['kvm1-20170426151419'],
 'datastore/Spice_ContentLibrary': ['kvm1-20170426151419'],
 'datastore/Spice_ProgSys': ['kvm1-20170426151419']}
# zfs 'list' '-d' '1' '-r' '-t' 'snapshot' '-H' '-o' 'name' 'data/Spice_C' 'data/Spice_ContentLibrary' 'data/Spice_ProgSys'
cannot open 'data/Spice_C': dataset does not exist
cannot open 'data/Spice_ContentLibrary': dataset does not exist
cannot open 'data/Spice_ProgSys': dataset does not exist
Target snapshots: {}
# zfs 'create' '-p' 'data'
# ssh 'kvm1' 'zfs' 'send' '-p' '-v' 'datastore/Spice_C@kvm1-20170426151419' | zfs 'recv' '-u' '-v' 'data/Spice_C'
cannot receive: failed to read from stream
Verifying if snapshot exists on target
# zfs 'list' 'data/Spice_C@kvm1-20170426151419'
cannot open 'data/Spice_C@kvm1-20170426151419': dataset does not exist
Traceback (most recent call last):
  File "/opt/bin/zfs_autobackup", line 484, in <module>
    ssh_target=args.ssh_target, target_filesystem=target_filesystem)
  File "/opt/bin/zfs_autobackup", line 285, in zfs_transfer
    run(ssh_to=ssh_target, cmd=["zfs", "list", target_filesystem+"@"+second_snapshot ])
  File "/opt/bin/zfs_autobackup", line 70, in run
    raise(subprocess.CalledProcessError(p.returncode, encoded_cmd))
subprocess.CalledProcessError: Command '['zfs', 'list', 'data/Spice_C@kvm1-20170426151419']' returned non-zero exit status 1

I'm able to get around the error if I first manually run the same command without the "-p -v" and "-u -v" switches, i.e.:
# ssh kvm1 zfs send datastore/Spice_C@kvm1-20170426151419 | zfs recv data/Spice_C
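The other two selected datasets need the same treatment (dataset and snapshot names taken from the debug log above):
# ssh kvm1 zfs send datastore/Spice_ContentLibrary@kvm1-20170426151419 | zfs recv data/Spice_ContentLibrary
# ssh kvm1 zfs send datastore/Spice_ProgSys@kvm1-20170426151419 | zfs recv data/Spice_ProgSys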

Once the datasets marked with autobackup:kvm1=true exist on both ZFS hosts, the program completes without error.

I don't know if it's something specific to my configuration.

I'm using --strip-path 1 because the zpool names are different; without that option it failed to create data/datastore or data/datastore/Spice_C.
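To make the effect concrete, these are the target names from the two runs shown in this report:

without --strip-path:  datastore/Spice_C -> data/datastore/Spice_C  (creation failed)
with --strip-path 1:   datastore/Spice_C -> data/Spice_C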


#zfs_autobackup --ssh-source kvm1 --clear-mountpoint kvm1 data --verbose
Getting selected source filesystems for backup kvm1 on kvm1
Selected: datastore/Spice_C (direct selection)
Selected: datastore/Spice_ContentLibrary (direct selection)
Selected: datastore/Spice_ProgSys (direct selection)
Creating source snapshot kvm1-20170426141821 on kvm1 
Getting source snapshot-list from kvm1
Getting target snapshot-list from local
cannot open 'data/datastore/Spice_C': dataset does not exist
cannot open 'data/datastore/Spice_ContentLibrary': dataset does not exist
cannot open 'data/datastore/Spice_ProgSys': dataset does not exist
Tranferring datastore/Spice_C initial backup snapshot kvm1-20170426141235
cannot receive: failed to read from stream
cannot open 'data/datastore/Spice_C@kvm1-20170426141235': dataset does not exist
Traceback (most recent call last):
  File "/opt/bin/zfs_autobackup", line 484, in <module>
    ssh_target=args.ssh_target, target_filesystem=target_filesystem)
  File "/opt/bin/zfs_autobackup", line 285, in zfs_transfer
    run(ssh_to=ssh_target, cmd=["zfs", "list", target_filesystem+"@"+second_snapshot ])
  File "/opt/bin/zfs_autobackup", line 70, in run
    raise(subprocess.CalledProcessError(p.returncode, encoded_cmd))
subprocess.CalledProcessError: Command '['zfs', 'list', 'data/datastore/Spice_C@kvm1-20170426141235']' returned non-zero exit status 1

redog commented Apr 27, 2017

After working with the system some more, I think it's the ZFS version on the other machine (kvm1) that's causing it, not the version on the backup host. The backup machine is on 6.5.8. I have a third machine on 6.5.8 that doesn't give this error, unlike when backing up the datasets from the 6.5.4 one.

redog changed the title from "Fails to recv with ZoL version 6.5.8 when dataset doesn't already exist." to "Fails to recv with ZoL version 6.5.4 when dataset doesn't already exist." Apr 27, 2017
psy0rz (Owner) commented Jul 25, 2017

OK, then I'll close this one for now :)

psy0rz closed this as completed Jul 25, 2017

ghan1t commented Nov 2, 2023

@psy0rz I ran into the same error and thought maybe you could help me here instead of opening a new issue. I'm running on TrueNAS SCALE 22.12.4.
This is the first time I've run zfs-autobackup, with a single dataset inside a pool (both selected) and no snapshots yet existing on the local backup disk.
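For reference, the command line must have been roughly equivalent to this (reconstructed from the log below: backup name test, target path BackupExt8TB/Backup, and debug mode; treat the exact form as an assumption):

# zfs-autobackup --debug test BackupExt8TB/Backup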

  #### Selecting
# [Source] Getting selected datasets
# [Source] CMD    > (zfs get -t volume,filesystem -o name,value,source -H autobackup:test)
# [Source] Daten: Checking if dataset is changed
  [Source] Daten: Selected
# [Source] Daten/Dateien: Checking if dataset is changed
  [Source] Daten/Dateien: Selected
  
  #### Snapshotting
# [Source] Daten: Getting snapshots
# [Source] CMD    > (zfs list -d 1 -r -t snapshot -H -o name Daten)
# [Source] Daten: Getting bytes written since our last snapshot
# [Source] CMD    > (zfs get -H -ovalue -p written@Daten@test-20231102225200 Daten)
  [Source] Daten: No changes since test-20231102225200
# [Source] Daten/Dateien: Getting snapshots
# [Source] CMD    > (zfs list -d 1 -r -t snapshot -H -o name Daten/Dateien)
# [Source] Daten/Dateien: Getting bytes written since our last snapshot
# [Source] CMD    > (zfs get -H -ovalue -p written@Daten/Dateien@test-20231102222240 Daten/Dateien)
  [Source] Daten/Dateien: No changes since test-20231102222240
  [Source] No changes anywhere: not creating snapshots.
  
  #### Target settings
  [Target] Keep the last 20 snapshots.
  [Target] Keep every 1 minute, delete after 1 day.
  [Target] Keep every 1 day, delete after 1 week.
  [Target] Keep every 1 week, delete after 1 month.
  [Target] Keep every 1 month, delete after 1 year.
  [Target] Receive datasets under: BackupExt8TB/Backup
  
  #### Synchronising
# [Target] BackupExt8TB/Backup: Checking if dataset exists
# [Target] CMD    > (zfs list BackupExt8TB/Backup)
# Checking target names:
# [Source] Daten: -> BackupExt8TB/Backup/Daten
# [Source] Daten/Dateien: -> BackupExt8TB/Backup/Daten/Dateien
# [Source] zpool Daten: Getting zpool properties
# [Source] CMD    > (zpool get -H -p all Daten)
# [Target] zpool BackupExt8TB: Getting zpool properties
# [Target] CMD    > (zpool get -H -p all BackupExt8TB)
# [Source] Daten: Getting zfs properties
# [Source] CMD    > (zfs get -H -o property,value -p all Daten)
# [Target] BackupExt8TB/Backup/Daten: Determining start snapshot
# [Target] BackupExt8TB/Backup/Daten: Checking if dataset exists
# [Target] CMD    > (zfs list BackupExt8TB/Backup/Daten)
# [Target] STDERR > cannot open 'BackupExt8TB/Backup/Daten': dataset does not exist
# [Target] BackupExt8TB/Backup/Daten: Getting snapshots
# [Target] CMD    > (zfs list -d 1 -r -t snapshot -H -o name BackupExt8TB/Backup/Daten)
! [Target] STDERR > cannot open 'BackupExt8TB/Backup/Daten': dataset does not exist
! [Target] Command "zfs list -d 1 -r -t snapshot -H -o name BackupExt8TB/Backup/Daten" returned exit code 1 (valid codes: [0])
! [Source] Daten: FAILED: Last command returned error
  Debug mode, aborting on first error
! Exception: Last command returned error

I understand that it tries to find existing snapshots on the target, but why does the script fail if the dataset obviously does not exist yet?
I tried what redog wrote above:

  • --strip-path 1 failed with a different error message:
# [Target] BackupExt8TB/Backup: Determining start snapshot
# [Target] BackupExt8TB/Backup: Getting snapshots
# [Target] CMD    > (zfs list -d 1 -r -t snapshot -H -o name BackupExt8TB/Backup)
# [Target] BackupExt8TB/Backup: Getting zfs properties
# [Target] CMD    > (zfs get -H -o property,value -p all BackupExt8TB/Backup)
! [Source] Daten: FAILED: 'NoneType' object has no attribute 'filesystem_name'
  Debug mode, aborting on first error
! Exception: 'NoneType' object has no attribute 'filesystem_name'
Traceback (most recent call last):
  File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/mnt/AppPool/autobackup/zfs-autobackup/__main__.py", line 3, in <module>
  File "/mnt/AppPool/autobackup/zfs-autobackup/zfs_autobackup/ZfsAutobackup.py", line 574, in cli
  File "/mnt/AppPool/autobackup/zfs-autobackup/zfs_autobackup/ZfsAutobackup.py", line 529, in run
  File "/mnt/AppPool/autobackup/zfs-autobackup/zfs_autobackup/ZfsAutobackup.py", line 390, in sync_datasets
  File "/mnt/AppPool/autobackup/zfs-autobackup/zfs_autobackup/ZfsDataset.py", line 1195, in sync_snapshots
  File "/mnt/AppPool/autobackup/zfs-autobackup/zfs_autobackup/ZfsDataset.py", line 753, in transfer_snapshot
AttributeError: 'NoneType' object has no attribute 'filesystem_name'
  • when I run zfs send | zfs recv first, zfs-autobackup runs through, but once there are new changes and snapshots on the source, the error appears again:
  #### Synchronising
# [Target] BackupExt8TB/Backup: Checking if dataset exists
# [Target] CMD    > (zfs list BackupExt8TB/Backup)
# Checking target names:
# [Source] Daten: -> BackupExt8TB/Backup/Daten
# [Source] Daten/Dateien: -> BackupExt8TB/Backup/Daten/Dateien
# [Source] zpool Daten: Getting zpool properties
# [Source] CMD    > (zpool get -H -p all Daten)
# [Target] zpool BackupExt8TB: Getting zpool properties
# [Target] CMD    > (zpool get -H -p all BackupExt8TB)
# [Source] Daten: Getting zfs properties
# [Source] CMD    > (zfs get -H -o property,value -p all Daten)
# [Target] BackupExt8TB/Backup/Daten: Determining start snapshot
# [Target] BackupExt8TB/Backup/Daten: Checking if dataset exists
# [Target] CMD    > (zfs list BackupExt8TB/Backup/Daten)
# [Target] BackupExt8TB/Backup/Daten: Getting snapshots
# [Target] CMD    > (zfs list -d 1 -r -t snapshot -H -o name BackupExt8TB/Backup/Daten)
# [Source] Daten@test-20231102225200: Getting zfs properties
# [Source] CMD    > (zfs get -H -o property,value -p all Daten@test-20231102225200)
# [Target] BackupExt8TB/Backup/Daten@test-20231102225200: Getting zfs properties
# [Target] CMD    > (zfs get -H -o property,value -p all BackupExt8TB/Backup/Daten@test-20231102225200)
# [Target] BackupExt8TB/Backup/Daten@test-20231102225200: common snapshot
# [Target] BackupExt8TB/Backup/Daten: Getting zfs properties
# [Target] CMD    > (zfs get -H -o property,value -p all BackupExt8TB/Backup/Daten)
# [Source] Daten/Dateien: Getting zfs properties
# [Source] CMD    > (zfs get -H -o property,value -p all Daten/Dateien)
# [Target] BackupExt8TB/Backup/Daten/Dateien: Determining start snapshot
# [Target] BackupExt8TB/Backup/Daten/Dateien: Checking if dataset exists
# [Target] CMD    > (zfs list BackupExt8TB/Backup/Daten/Dateien)
# [Target] BackupExt8TB/Backup/Daten/Dateien: Getting snapshots
# [Target] CMD    > (zfs list -d 1 -r -t snapshot -H -o name BackupExt8TB/Backup/Daten/Dateien)
# [Source] Daten/Dateien@test-20231102222240: Getting zfs properties
# [Source] CMD    > (zfs get -H -o property,value -p all Daten/Dateien@test-20231102222240)
# [Target] BackupExt8TB/Backup/Daten/Dateien@test-20231102222240: Getting zfs properties
# [Target] CMD    > (zfs get -H -o property,value -p all BackupExt8TB/Backup/Daten/Dateien@test-20231102222240)
# [Target] BackupExt8TB/Backup/Daten/Dateien@test-20231102222240: common snapshot
# [Target] BackupExt8TB/Backup/Daten/Dateien: Getting zfs properties
# [Target] CMD    > (zfs get -H -o property,value -p all BackupExt8TB/Backup/Daten/Dateien)
! [Source] Daten/Dateien: FAILED: 'NoneType' object has no attribute 'filesystem_name'
  Debug mode, aborting on first error
! Exception: 'NoneType' object has no attribute 'filesystem_name'
Traceback (most recent call last):
  File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/mnt/AppPool/autobackup/zfs-autobackup/__main__.py", line 3, in <module>
  File "/mnt/AppPool/autobackup/zfs-autobackup/zfs_autobackup/ZfsAutobackup.py", line 574, in cli
  File "/mnt/AppPool/autobackup/zfs-autobackup/zfs_autobackup/ZfsAutobackup.py", line 529, in run
  File "/mnt/AppPool/autobackup/zfs-autobackup/zfs_autobackup/ZfsAutobackup.py", line 390, in sync_datasets
  File "/mnt/AppPool/autobackup/zfs-autobackup/zfs_autobackup/ZfsDataset.py", line 1195, in sync_snapshots
  File "/mnt/AppPool/autobackup/zfs-autobackup/zfs_autobackup/ZfsDataset.py", line 753, in transfer_snapshot
AttributeError: 'NoneType' object has no attribute 'filesystem_name'

psy0rz (Owner) commented Nov 2, 2023

Which version of autobackup?

ghan1t commented Nov 2, 2023

Wow, that was fast :-)

zfs-autobackup v3.3-beta - (c)2022 E.H.Eefting ([email protected])

I should also note that I'm running it using these instructions, as there is no pip on TrueNAS to install it with: #83 (comment)

psy0rz (Owner) commented Nov 2, 2023

Yeah, master is currently broken because of work in progress.

Try v3.2.2, or try commit a62e793, which doesn't have the work-in-progress stuff (and it has a bunch of cool new stuff which might help you).
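If you're running from a git checkout, pinning that commit looks something like this (repository URL assumed):

# git clone https://github.com/psy0rz/zfs_autobackup.git
# cd zfs_autobackup
# git checkout a62e793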

Edwin

ghan1t commented Nov 5, 2023

I tried the commit you mentioned and the backup now works! Thanks a lot for your help. I should have realized that checking out master might not give me a stable release 🤦🏻‍♂️ I will try to write a script that takes care of that and mention it in the other issue.

One more (noob) question, if I may: after I run zfs-autobackup, I see the snapshots on the target and zfs list shows the datasets. However, I cannot cd into them, and the TrueNAS GUI shows an error when clicking on the dataset: "ZFSException: cannot get used/quota for : dataset is busy". I read on the TrueNAS forum that someone had that problem with a read-only dataset, which would make sense, as the backed-up snapshots are read-only.
If I run send | recv manually, though, I do not get that error and I can cd into the dataset.
Does zfs-autobackup only back up snapshots and not mount a writable dataset, while send | recv does?
Is there any way I can get rid of this error message?

psy0rz (Owner) commented Nov 16, 2023

Yes, after the initial sync, zfs-autobackup v3.2 does not mount the datasets. Use zfs mount -a for that.

However, this is fixed in the 3.3 beta.
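For the target in this thread that would be, for example (dataset names taken from the logs above; zfs mount mounts one filesystem at a time, so -a is the easy option):

# zfs mount -a

or per dataset:

# zfs mount BackupExt8TB/Backup/Daten
# zfs mount BackupExt8TB/Backup/Daten/Dateien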
