ODROID H4 Ultra NAS OS

I previously wrote about the hardware I bought for my new NAS. In this post, I’m going to go through the choices I made for the operating system and filesystem layout.

Operating System

Every great server starts with an operating system. For my NAS, I picked Ubuntu 24.04.

While there are a lot of great choices, this one seemed like a natural fit for me. I’ve been using Ubuntu for a long time, almost 20 years, so I’m familiar with it. It’s also an LTS release, so I’ll get plenty of life out of it before I even need to contemplate upgrading.

In terms of functionality, I wanted two things. First, ZFS out-of-the-box, and second, container support. Ubuntu checks both of those boxes as well, so this was an easy decision.

I downloaded the “server” install, wrote it to a USB drive, booted from it, and installed to my OS drive. Because the ODROID H4 Ultra is an Intel system, nothing special needed to be done and everything just worked.
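
For the curious, writing the image to the USB drive is just a matter of something like the command below. I won’t swear this is the exact invocation I used, and the ISO name and target device are placeholders, so double-check the device before running dd.

# write the installer image to the USB stick (replace /dev/sdX with the real device!)
sudo dd if=ubuntu-24.04-live-server-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync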

Package Management

With Ubuntu, package management is a bit of a challenge. You, of course, have apt, which is great if the software you want is in the base repos and up-to-date enough, but what do you do when it isn’t? The good news is that there are some options available.

This is where I landed:

  1. Using apt and the official repos for CLI tools, if the package is in the repos and up-to-date. This keeps what’s installed through apt to a minimum. Since I’m not adding additional repos or very many packages, upgrades should in theory go smoothly.
  2. If it’s not in apt, then I’m reaching for brew. You might not know this, but brew is in fact available on Linux! What’s nice is that it has a good selection of packages and they’re kept up-to-date. If a tool is written in Go or Rust, there’s a pretty good chance that there’s a brew package for it (see the sketch after this list).
  3. There were a few items I couldn’t get with the first two options, and unfortunately I just had to pull binaries directly from GitHub for those. It’s manual work to update them, which isn’t great, but it works. In the future, I may give stew or bin a try. For now, it’s such a small number that I can manage it manually.
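
Getting brew up and running on Linux boils down to the standard install one-liner plus putting it on your PATH. The packages at the end are just examples, not my exact list:

# install Homebrew using the official install script
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# add brew to the PATH (default Linux prefix)
eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"
# example installs; swap in whatever tools you actually need
brew install ripgrep fzf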

Beyond that, I’ve got services that I’ll be installing. Those will for the most part run in containers under Podman with Quadlets. More on that in a future post.

Filesystem

Because this is a NAS we’re talking about, probably the most important aspect is the filesystem. For that, I went with ZFS. There are other interesting filesystems on Linux, but I’m not looking for interesting. It’s undeniable that ZFS is the gold standard, and that’s what I want for my data. End of story.

Here’s how I set up the drives for storage:

  1. I have my system drive separate. I just happened to have a SATA spinning rust drive lying around that I could use. I didn’t put ZFS on that drive, because it’s the root drive and I didn’t want that complication, so it’s just plain ol’ EXT4.

  2. My two main SATA SSD drives are free to be used as my ZFS pool. They’re 1TB each, so that gives me a 1TB SSD mirror.

  3. I have two USB drives for backup. They’re 2TB each, so that gives me a 2TB USB mirror.

In the future, I plan to move the system drive to an NVMe drive, and then throw in two more SATA SSD drives. Who knows when that’ll happen, though, now that drive prices have gone through the roof.

For the pools, here’s my setup.

  1. I followed some instructions to create the pool, specifically using a mirror for redundancy. Getting the block size right was confusing, and I’m honestly not sure I got it right.

    I tried but wasn’t able to find any docs from Inland about what their SSD drives support. The drives reported a 512-byte block size, but what I read online seemed to indicate that most drives lie about this for backwards compatibility, so I followed these instructions and went with 4K blocks (ashift=12).

    I also turned off atime by setting atime=off, and set xattr to sa (storing extended attributes in the inodes). I don’t think either will make a huge impact, but they shouldn’t hurt anything either. Lastly, I enabled compression and set the default record size to 128k. A rough sketch of the pool and data set commands follows this list.

  2. I then created a bunch of data sets, turning off compression and going with a 1M record size for the main/media data set. Everything on there will already be compressed and will consist of larger files, so that seemed to make sense.

    • main/media
    • main/media/music
    • main/media/movies
    • main/media/photos
    • main/files
    • main/backups

    I created the same data sets with the same settings on my USB pool.
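
Roughly speaking, the commands looked something like the following. The device paths are placeholders (use the /dev/disk/by-id paths for your own drives), and lz4 is just an example of enabling compression:

# create a mirrored pool with 4K blocks, atime off, xattr=sa, compression, and a 128k record size
sudo zpool create -o ashift=12 \
    -O atime=off -O xattr=sa -O compression=lz4 -O recordsize=128k \
    main mirror /dev/disk/by-id/ata-SSD-ONE /dev/disk/by-id/ata-SSD-TWO

# the media data set and its children get no compression and a 1M record size
sudo zfs create -o compression=off -o recordsize=1M main/media
sudo zfs create main/media/music
sudo zfs create main/media/movies
sudo zfs create main/media/photos
sudo zfs create main/files
sudo zfs create main/backups

The child data sets under main/media inherit the compression and record size settings from their parent, so they only need to be set once.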

Data Migration

With pools and data sets created, it was time to move data off my old NAS. That was also using ZFS, so my plan was to use ZFS send/receive to push the data across my LAN.

This required:

  1. Taking snapshots on the old system. I just created some snapshots based on the current date.
  2. On the new system, I ran nc -w 120 -l -p 8023 -v | sudo zfs receive <data set>.
  3. On the old system, I ran zfs send <old snapshot> | nc -w 20 192.168.2.16 8023.

I went with nc because this was all over my LAN, so I didn’t really need encryption. Plus, the overhead of encryption really slowed down the old system and brought the data transfer to a crawl.

The initial transfer took a while, and when that was done, I took new snapshots and repeated the send/receive, this time with the -i and -R flags to do an incremental transfer. That way, anything that changed while the original transfer was running still made it over.
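
As a rough sketch, the incremental pass looked something like this. The pool, data set, and snapshot names are placeholders, not what I actually used:

# new system: listen for the stream as before; -F may be needed if the destination changed since the first pass
nc -w 120 -l -p 8023 -v | sudo zfs receive -F main/media
# old system: send only what changed between the two snapshots
zfs send -R -i oldpool/media@2024-01-01 oldpool/media@2024-01-08 | nc -w 20 192.168.2.16 8023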

And with that, I was able to power down the old system.

That was not quite the end of the data migration, but it was the interesting part. The data sets from my old system were structured a bit differently, so I did some cleanup and moved files around into the new data set structure.

ZFS Sync & Snapshots

I mentioned previously having a pair of USB drives as part of my backup strategy. This represents my local, on-site backup.

This is accomplished using Sanoid and Syncoid. The former is a snapshot management tool, and the latter performs synchronization of snapshots between different pools. The combination of the two gives you a nice way to back up your system using ZFS.

Both Syncoid and Sanoid are configured to run through systemd timers. I have Sanoid running every 15 minutes and Syncoid running daily. Taking the snapshots is lightweight, while the sync is a bit heavier, which is why it runs less frequently. Your mileage may vary.
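
To give an idea of the wiring, here’s a minimal sketch of the daily Syncoid run as a systemd service and timer. Treat the unit names and contents as an approximation rather than a verbatim copy of my setup:

# /etc/systemd/system/syncoid.service
[Unit]
Description=Replicate ZFS snapshots from main to usb

[Service]
Type=oneshot
ExecStart=/usr/sbin/syncoid --recursive --skip-parent main usb

# /etc/systemd/system/syncoid.timer
[Unit]
Description=Run syncoid daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

Enabling it is then just systemctl enable --now syncoid.timer.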

Syncoid

This was the easy part. On the server, Syncoid is configured to replicate from the main pool to the USB pool. This happens daily, and it copies all of the data sets from one pool to the other recursively, including snapshots.

/usr/sbin/syncoid --recursive --skip-parent main usb

As the Syncoid documentation notes:

By default, snapshots created on the host are not destroyed on the destination when they are removed from the host.

For me, this means that snapshots are replicated from the main pool to the USB pool, but Syncoid never removes snapshots from the USB pool. This is important because it allows me to have separate policies for how long I keep snapshots on each pool.

At some point I do need to clean up snapshots, though, or my disks will fill up. That’s where Sanoid comes into play.

Sanoid

Sanoid was a little trickier because I had to decide on the policies that I wanted in terms of snapshot retention.

This is what I ended up using:

[main]
        use_template = main
        recursive = yes
        process_children_only = yes

[usb]
        use_template = backup
        recursive = yes
        process_children_only = yes


#############################
# templates below this line #
#############################

[template_main]
        frequently = 0
        hourly = 24
        daily = 7
        monthly = 3
        yearly = 0
        autosnap = yes
        autoprune = yes

[template_backup]
        # remove outdated snapshots
        autoprune = yes
        # do not take new snapshots though, snapshots come from syncoid
        autosnap = no
        # here's what we keep for backup
        frequently = 0
        hourly = 24
        daily = 14
        monthly = 12
        yearly = 1

Here are the highlights:

  • There are different snapshot policies for each pool. On the main pool, I don’t keep snapshots for as long. On the USB pool, I keep them longer because the drives are larger and the intent is for them to serve as backups.
  • On main, I’m using the autosnap and autoprune settings. This just means Sanoid is taking the snapshots and cleaning them up.
  • On the USB pool, I’m using autoprune to remove old snapshots, but it’s configured to not take snapshots. This may be surprising, but it’s because Sanoid is working with Syncoid here: snapshots are taken on the main pool and copied over by Syncoid, so there’s never a need to create snapshots directly on the USB pool.

Bad RAM

While most of this process went smoothly, there was one wrinkle. The original stick of RAM I got was bad. It’s my fault for not using MemTest86 before installing anything.

I first noticed issues during the initial data transfer from the old NAS: the send/receive process kept failing. While I’ve used ZFS before, it was through TrueNAS, so I wasn’t as hands-on with it. Because of my lack of familiarity, I misinterpreted these failures as network issues. Plus, the send/receive eventually completed successfully, so I moved on.

Where I really started to realize something was up was after letting the system run for about a week. At that point, I ran zpool status -xv and started seeing file errors. I thought maybe one of my disks was bad, but I was getting errors on both of my brand-new disks, which seemed less likely. I was also getting errors on my USB pool, and those disks had been attached to my previous NAS, so I knew they were OK.

That’s what prompted me to finally run MemTest86, and wouldn’t you know it, the RAM stick was bad. After an RMA and a two-week wait, the new stick came in. I naturally ran MemTest86 on it, and it was fine.

The only task left now was recovering the corrupted files. Fortunately, zpool status -xv gives you a list of the corrupted files, and I was able to restore those from the old NAS.
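
For reference, the cleanup looked roughly like this. The pool name matches my setup, but treat it as a sketch of the general process rather than my exact history:

# list unhealthy pools along with the affected files
sudo zpool status -xv
# after restoring or deleting those files, re-verify the pool and reset the error counters
sudo zpool scrub main
sudo zpool clear main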