
# OpenZFS Yosemite archive
To install the package:

```
sudo installer -pkg /Users/eric/Desktop/OpenZFS\ on\ OS\ X\ 1.3.2-RC1\ Yosemite\ or\ higher.pkg -target / -verbose -dumplog
```

Installer log excerpt:

```
Jul 1 14:11:50 installer : Product archive /Users/eric/Desktop/OpenZFS on OS X 1.3.2-RC1 Yosemite or higher.pkg trustLevel=202
Installer: Package name is OpenZFS on OS X
Jul 1 14:11:50 installer : FIXME: IOUnserialize has detected a string that is not valid UTF-8, "(null)".
```

This time, we chose zstd compression instead of lz4. Once the data sets were created, we copied our data back. Once the data was restored, we noticed the compression stats on the volumes were much higher than before. Specifically, any type of DB file (MySQL, PGSQL) and other text-type files seemed to compress much better. In some cases, we saw a +30% reduction of "real" space used.
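As a rough illustration of what a "+30% reduction" means, here is a sketch with toy numbers (not taken from the post); the `zfs get` command in the comment is the standard way to read the real figures, and `tank/backup` is a hypothetical dataset name:

```shell
# Real vs. logical space and compression ratio for a dataset
# ("tank/backup" is a hypothetical name -- substitute your own):
#   zfs get used,logicalused,compressratio tank/backup

# Toy arithmetic: 10 TiB of logical data stored in 7 TiB of
# physical space is a 30% reduction of "real" space used.
logical=10
physical=7
reduction=$(awk -v l="$logical" -v p="$physical" \
    'BEGIN { printf "%.0f", (l - p) / l * 100 }')
echo "space reduction: ${reduction}%"   # prints "space reduction: 30%"
```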
# OpenZFS Yosemite upgrade
Last week we decided to upgrade one of our backup servers from OpenZFS 0.8.6 to OpenZFS 2.0.3. After the upgrade, we are noticing much higher compression ratios when switching from lz4 to zstd. Wondering if anyone else has noticed the same behavior. We have a Supermicro server with 8x 16TB drives running Debian 10 and OpenZFS 0.8.6. The server had 2x RAIDZ-1 pools - each with 4x 16TB drives (ashift=12). From there, we created a bunch of data sets - each with 1MB record size and lz4 compression. In order to recreate the same pool/volume layout, we dumped all the ZFS details to a text file prior to the upgrade. During the upgrade process, we copied all the data to another backup server, created a new, single RAIDZ-2 setup (8x 16TB drives - ashift=12), recreated the same data sets, and set 1MB record size for all data sets.
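The dump-and-recreate step described above can be sketched roughly as follows; the pool name `tank` and the dataset names are hypothetical, and the loop prints the create commands for review rather than executing them:

```shell
# Dump the existing layout and properties to a text file before the
# upgrade ("tank" is a hypothetical pool name -- substitute your own):
#   zpool status -v tank  > zfs-layout.txt
#   zfs get -r all tank  >> zfs-layout.txt

# Recreate each dataset on the new pool with a 1MB record size and
# zstd compression. Printed, not executed, so it can be reviewed first.
for ds in mysql pgsql files; do    # hypothetical dataset names
    echo zfs create -o recordsize=1M -o compression=zstd "tank/$ds"
done
```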
# OpenZFS Yosemite full
In January, I enabled the zfs-testing repository on a RHEL 8 system and upgraded OpenZFS from it to get version 2.0.0. I had created a fresh pool with a special vdev. This worked fine for some months and there were no issues. I was away for a couple of weeks and a colleague had put kernel updates on the machine. When I got back I had to reinstall the zfs modules, this time getting 2.0.4 from the zfs-testing yum repository, but when I come to import the pool, I get:

```
 state: UNAVAIL
status: The pool can only be accessed in read-only mode on this system. It
        cannot be accessed in read-write mode because it uses the following
        feature(s):
        com.delphix:log_spacemap (Log metaslab changes on a single spacemap and flush them periodically.)
action: The pool cannot be imported in read-write mode. Import the pool with
        "-o readonly=on", access the pool on a system that supports the
        required feature(s), or recreate the pool from backup.
```

I've uninstalled and reinstalled multiple times to make sure it really is running zfs 2.0.4 and not 0.8.x or something. Did this log_spacemap feature get backed out, perhaps? Do I need to pick between a full backup and recreation of the pool or manually building zfs from git to get the feature? Or can I get the old 2.0.0 packages somehow?

Lightning hit the power lines behind our house, and the power went out. All the stuff is hooked up to a surge protector. I tried importing the pool and it gave an I/O error and told me to restore the pool from a backup. Tried "sudo zpool import -F mypool", and got the same error. Right now I'm running "sudo zpool import -nFX mypool". It's been running for 8 hours, and it's still running. The pool is 14TB x 8 drives set up as RAIDZ1. I have another machine with 8TB x 7 drives, and that pool is fine. The difference is the first pool was transferring a large number of files from one dataset to another. So how long should my command take to run? Is it going to go through all the data? I don't care about partial data loss for the files being transferred at that time, but I'm really hoping I can get all the older files that have been there for many weeks.

EDIT: Another question. What does the -X option do under the hood? Does it do a checksum scan on all the blocks for each of the txgs?
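For the log_spacemap error, a read-only import is one way to reach the data without destroying anything, and checking what the loaded kernel module reports can rule out a packaging mix-up after the kernel update; this is a sketch, with `mypool` taken from the post:

```shell
# Read-only import works even when a pool feature (like
# com.delphix:log_spacemap) is only unsupported for read-write access:
#   sudo zpool import -o readonly=on mypool

# Confirm which OpenZFS version the loaded kernel module reports;
# a kernel update can silently leave an older module in place:
ver=$(cat /sys/module/zfs/version 2>/dev/null || echo "zfs module not loaded")
echo "OpenZFS kernel module: $ver"
```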
