Need to move a ZFS pool from a Proxmox host to a TrueNAS VM. I have a 12-disk raidz2 pool on my Proxmox host (an R510) that I need to pass to my TrueNAS VM. My thought is to pass the 12 drives through to the TrueNAS VM, but I'm not sure it will be able to recognize the zpool. I can't pass the LSI card to the VM, as my boot drive mirror is on it as well.

ZFS is probably the most advanced storage type regarding snapshots and cloning. The backend uses ZFS datasets for both VM images (format raw) and container data (format subvol). ZFS properties are inherited from the parent dataset, so you can simply set defaults on the parent dataset.

There are two regular ways for zpools to get imported on boot, and they are mutually exclusive:

- the zfs-import-cache systemd unit, which imports pools using a cache file (only pools listed in that cache file get imported, and no scanning of block devices is done);
- the zfs-import-scan systemd unit, which imports all pools found on block devices (a bit more "expensive", and only triggered if NO cache file exists).

In either case, PVE will import any configured pools on first use anyway if they are not yet imported - so if you only use your pools for guest storage, you don't actually need either of the systemd units. If you use them for other things as well, you can choose which of the units to use (or even write your own).

Maybe your cache file got corrupted? You can delete it and set cachefile=none on all your pools to not use it at all, or you can delete it and export/import your pools to regenerate it (check the cachefile setting).

Please don't confuse this issue further: you have an rpool, which needs partitions, otherwise it is not bootable. A non-rpool pool can use the full disk (and no, this should not be a problem, neither for regular disks nor for NVMe).
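Since properties inherit from the parent dataset, defaults only need to be set once at the top. A minimal sketch, assuming `rpool/data` is the dataset backing your PVE storage (adjust the name to your setup):

```shell
# Set defaults once on the parent dataset; all child datasets
# (VM images, container subvols) inherit them automatically.
zfs set compression=lz4 rpool/data
zfs set atime=off rpool/data

# Verify: children report SOURCE as "inherited from rpool/data".
zfs get -r compression rpool/data
```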
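For the passthrough question: TrueNAS can import the raidz2 pool as long as the VM sees the raw disks, so one approach is to export the pool on the host and attach each drive by its stable `/dev/disk/by-id/` path. A hedged sketch, where `tank` is a placeholder pool name, `100` a hypothetical VMID, and the disk ids are examples to be replaced with your own:

```shell
# Export the pool on the Proxmox host first so the VM can import it cleanly.
zpool export tank

# Attach each physical disk to the VM by stable by-id path.
# Replace the glob with your actual 12 drive ids.
i=1
for disk in /dev/disk/by-id/ata-EXAMPLE-SERIAL-*; do
  qm set 100 -scsi${i} "${disk}"
  i=$((i + 1))
done
```

Inside the TrueNAS VM you can then run a normal `zpool import tank`. Using by-id paths (rather than `/dev/sdX`) keeps the mapping stable across reboots.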
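The two cache-file remedies mentioned above can be sketched as follows, again with `tank` as a placeholder pool name (repeat for each pool; the cache file path is the OpenZFS default):

```shell
# Option A: stop using a cache file entirely.
rm /etc/zfs/zpool.cache
zpool set cachefile=none tank

# Option B: regenerate the cache file via export/import.
rm /etc/zfs/zpool.cache
zpool export tank
zpool import tank

# In either case, check the resulting setting.
zpool get cachefile tank
```

Note that with option A, boot-time import falls back to zfs-import-scan (or to PVE importing on first use).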