QNAP NAS

I replaced my ancient (but working) D-Link NAS with a much newer and faster QNAP NAS (Model TS-464-8G). The QNAP hardware is nice: a compact package supporting 4 SATA drives in a variety of RAID configurations, 2x NVMe drives, 2x 2.5GbE ports with the option to add a 10GbE card, a slick web-based user interface, and relatively low power consumption. It runs a custom Linux on a Celeron N5095.
I don’t like the custom Linux.

UPS Support

Naturally, I want my data storage to be protected by a UPS and to automatically and safely shut down before the UPS battery is exhausted if there is an extended power outage. I use a CyberPower CP1500PFCLCD UPS (which I am very happy with so far) to protect several NUC servers, an L2 switch, and the NAS. The UPS is connected to one of the NUC Proxmox servers via USB. I run NUT on that server, including the nut-server that allows other machines (such as the NAS) to access the UPS over the network as NUT clients. The problem is that QNAP makes this more difficult than it has to be. They support NUT (which is nice), but they seem to have done so mainly to allow one QNAP NAS to access another QNAP NAS connected to the UPS.
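For reference, here is roughly what the server side looks like. The nut-server must run in netserver mode, listen on the network, and define a user that slaves can log in with; a minimal sketch (the user name, password, and listen address below are placeholders, not what I actually use):

# /etc/nut/nut.conf -- run NUT as a network server
MODE=netserver

# /etc/nut/upsd.conf -- listen on the LAN (NUT's default port is 3493)
LISTEN 0.0.0.0 3493

# /etc/nut/upsd.users -- the account NUT clients (slaves) log in with
[nasmon]
    password = changeMe
    upsmon slave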

This is what I had to do to get the QNAP NAS to run as a generic NUT client:

  • Control Panel -> External Device -> UPS Tab
  • Select Network UPS Slave
  • Enter the IP address of your nut server
  • Apply changes
  • Restart the NAS to start upsutil (the NUT client daemon) running

How did the NAS get the NUT UPS name, user name, and password used on the nut-server? It didn’t; the NUT support from QNAP hard-coded them as ‘qnapups’, ‘admin’, ‘123456’. And folks wonder why QNAP has had security issues.

You can change the user name and password by enabling the admin user, logging into the NAS via ssh as the admin user, and editing /etc/config/ups/upsmon.conf (make a .orig copy first). Find the line that reads:
    MONITOR qnapups@myNutServerIp 1 admin 123456 slave
and replace ‘admin’ and ‘123456’ with the user name and password you have assigned for slave devices on your NUT server in /etc/nut/upsd.users.
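For example, with a slave user like the one sketched earlier, the edited line would read (user name and password are placeholders):

    MONITOR qnapups@myNutServerIp 1 nasmon changeMe slave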

Unfortunately, QNAP doesn’t let you change the UPS name; it *must* be qnapups.
Fortunately, NUT provides a workaround for this that doesn’t require changing all the other NUT clients. On your NUT server, edit your /etc/nut/ups.conf file and add a new dummy UPS named qnapups that points back to your real UPS. For example, my ups.conf ends with:

[cp1500]
    driver = usbhid-ups
    port = auto
    desc = "CyberPower CP1500PFCLCDa"
    vendorid = 0704
    productid = 0601

[qnapups]
    driver = dummy-ups
    port = cp1500@myNutServerIpAddress
    desc = "Proxy UPS for QNAP NAS"

Restart the nut-server (sudo service nut-server restart) and voilà, your QNAP can then see the UPS.
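You can sanity-check the proxy from any machine with the NUT client tools installed; querying the dummy UPS with upsc should return the same data as the real one:

upsc qnapups@myNutServerIpAddress
# battery.charge: 100
# ups.status: OL
# ... (values mirror those of cp1500)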

Proxmox VMs

It takes a lot of time to set up a server, and it then has to be maintained, including regular backups. Virtualization can help with a lot of this. Modern computers have lots of cores, memory, and disk space, so it is now possible to run multiple servers as virtual machines within a single physical server. This arrangement offers lots of advantages, including:

  • Use resources efficiently – many servers use only a small fraction of the physical machine’s capability, so you can easily run quite a few virtual servers on one physical machine.
  • Keep servers and their environments separate to avoid conflicts.
  • Easily perform “bare metal” backups of virtual servers and restore them to the same or a new physical server for quick disaster recovery.
  • Easily allocate and expand resources (within the limits of the physical server).

I generally run my home servers on Intel NUC platforms because they offer a nice balance of computing power and power efficiency. A basic NUC 12 Pro with an i5-1240P or higher processor has at least 12 cores, up to 64GB of RAM, a fast Gen 4 NVMe drive, 2.5GbE, and a TDP of only 28W. For bulk storage, you can use a NAS or connect a DAS via USB 3.2 for very high speed. They’re small, quiet, and stackable, and their low power consumption means a typical UPS will carry them through most outages. In short, they’re great little servers.

For virtualization, I like Proxmox. Proxmox is Debian based; it installs quickly from a USB flash drive and provides a friendly web-based management interface that is exactly what’s needed. It allows you to see the status and manage both the physical server and the VMs. It has a tightly integrated KVM hypervisor so you can access the console of each VM and the physical server remotely via the web interface.

Proxmox also makes it easy to take “bare-metal” backups: a snapshot of the entire VM that can easily be restored in case of disaster, either on the same physical server or on a new one. The backup files are sparse and compressed; a VM with 64GB of allocated storage that is using 24GB yields a snapshot file of ~12.5GB. You can download the snapshots and store them on bulk storage and off-site. Because Proxmox is so easy to install and entire VM snapshots restore quickly, even if the physical server and/or storage fails completely, you can be back up and running on a new machine in less than 30 minutes. (I have done this.)
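The same backups can be driven from the Proxmox shell with vzdump, which is handy for scripting; a minimal sketch (the VM IDs are examples):

# snapshot-mode backup of VM 100, zstd-compressed, into the default dump directory
vzdump 100 --mode snapshot --compress zstd --dumpdir /var/lib/vz/dump

# restore it (on the same or a different Proxmox host) as VM 101
qmrestore /var/lib/vz/dump/vzdump-qemu-100-*.vma.zst 101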

Many servers don’t need a lot of compute power; most of mine do just fine with 4-8GB of RAM, 2-4 cores, and 32-64GB of storage. This means I can host several servers on a NUC 12 without it breaking a sweat. Keeping servers separate (e.g. database, middleware, web applications, etc.) makes it easy to scale and to upgrade individual servers without software or hardware conflicts with the others.

Update Sept 2024:
I have been using Proxmox for a few months now and still like it a lot; it simply works. The web-based management is perfect – exactly what’s needed for managing VMs with excellent status-at-a-glance and detailed configuration pages. I have resized VM hardware allocations on the fly several times and the interface is fast and intuitive. The bare-virtual-metal backups are easy to do and give me confidence about disaster recovery.

The only thing I think Proxmox should change is their lowest-tier (community) pricing model:

  • I hate renting software; if they simply charged a fixed price with optional annual maintenance, most serious users (including me) would likely end up buying the maintenance anyway since nobody wants to run a server without the latest security fixes. However, I don’t want that decision forced on me; I like to own not rent.
  • The per-socket pricing effectively forces certain hardware choices: in particular, a single high-power server over a few re-purposed low-power machines (like NUCs). This removes a certain level of flexibility (the opposite of everything else about Proxmox). For the community model, a site license that covered a small number of machines or some aggregate level of computing power might make more sense.
  • Finally, $110/year/socket is well above the no-brainer cost for most users. I’d bet Proxmox would net significantly more revenue if they lowered their community pricing to the ubiquitous $99 purchase + $49/year optional maintenance.

Network Attached Storage (NAS)

Everyone uses cloud storage these days, but I still find local storage useful, especially for large files like photos, videos, music and such. For local storage, I use network attached storage (NAS): a black box with redundant hard drives that is connected to my network. Anyone on the network can access the storage (assuming they have the appropriate permissions).

The NAS box should have at least 2 drives configured as RAID1, or 3 or more drives configured as RAID5, so that there is redundancy: if a hard drive fails (and it will), the data can be rebuilt from the remaining drive(s), allowing you to replace the failed drive with no loss of data. The NAS is always online, making it a convenient place to back up the drives of desktops/laptops.

Note: it’s important to use hard drives designed for NAS storage (always on) such as the Western Digital Red NAS series or Seagate Exos series.

Although you can make any Linux computer a NAS, I’ve found dedicated NAS boxes to be very useful; they typically use little power, take little space, are quiet, and are meant to operate continuously for years. I’ve had quite a few NAS boxes made by D-Link starting with their DNS-321 and have now moved to a QNAP NAS.

QNAP TS-464

As of November 2024, I moved to a QNAP TS-464-8G (see: datasheet). There was nothing wrong with the old DNS-320L, which is still an amazing NAS, but I wanted something faster. The TS-464 is 10 years newer and is simply better in every dimension:

  • Celeron N5095 quad-core up to 2.9GHz
  • 8GB DDR4 RAM (expandable to 16-64GB)
  • Dual 2.5GbE Ethernet (can support 10GbE via expansion card)
  • Dual USB 3.2 Gen 2 (10Gbps)
  • 4x HDD bays
  • 2x NVMe slots
  • Advanced software including automatic snapshots
  • NASCompares Reviews: 2022, 2024
  • Should reach 280MB/s+ using a single 2.5GbE port
    (with a 2.5GbE switch)

I’ve outfitted the NAS with:

  • 16GB of DDR4 RAM (it came with 8GB and I added a spare 8GB SO-DIMM)
  • 3x Seagate Exos X16 ST14000NM005G 14TB hard disk drives
    (configured as RAID5, which reserves one drive’s worth of capacity for parity: 2x14TB = 28TB raw, which the NAS reports as roughly 25TiB of net storage)
  • A spare Samsung Evo 970 Plus 1TB NVMe drive

One complaint: poor support for SFTP. Although QNAP runs a custom Linux variant, it is somewhat locked down and is missing some surprising things. For example, I use Proxmox VMs for most of my server functions (and even some virtual workstations). Proxmox is awesome. Among other things, it makes it easy to take backups of any VM that you can easily restore, even to another machine (backups live in /var/lib/vz/dump). With my old NAS, I would create the VM backup(s) and have the NAS (which is in my secure network) sftp to the Proxmox VM (which might be outside my secure network if it runs servers exposed to the internet) and download the backup file(s).

Bizarrely, the QNAP NAS doesn’t come with an sftp client; instead, they want you to use a package called QuFTP, which only supports FTPS (less secure than sftp). The NAS does support an sftp daemon (you can sftp into the NAS) – you’ll need to enable ssh access via the GUI and add your user account to the AllowUsers line in /etc/ssh/sshd_config. However, I don’t want the insecure machine to have ssh access to the secure machine, and the sftp access seems problematic (it often drops the connection mid-transfer).
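For comparison, this is roughly what the old pull looked like from a NAS with a normal Linux userland (the host name, user, and paths are illustrative):

# run on the NAS (secure side): pull the latest VM dumps from the Proxmox host
sftp -b - backupuser@proxmox-vm <<'EOF'
get /var/lib/vz/dump/vzdump-qemu-100-*.vma.zst /share/Backups/proxmox/
EOF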

Drives

For the first time ever, I bought used (factory re-certified Seagate Exos) drives because they were considerably less expensive, and RAID5 should provide protection against any one drive failing. They are warrantied for 5 years by the vendor, and I plan to rotate them out for use as permanent backup drives before the end of their warranty. I bought the drives from goharddrive.com for $130 each (total: $390 including tax, shipping, and the 5-year warranty). The drives arrived well packed; I ran the full SMART test (which took 12+ hours) and they checked out OK; their date of manufacture was mid-2021 (about 3 years old). We’ll see.
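If you want to run the same check from a Linux box before installing the drives, smartmontools can do it; something like this (the device name will vary):

# start the drive's long (extended) self-test; it runs inside the drive firmware
smartctl -t long /dev/sdb

# many hours later, review the test result and health attributes
smartctl -a /dev/sdb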

The NAS (without drives) cost $469+tax from Amazon on Black Friday. The NVMe drive and extra 8GB of RAM were spare hardware I already had. The HDDs cost $390, so, in total, I spent $859 for this 25TB of RAID5 storage. If the drives hold up well, I’ll order a few more for use as cold spares and for backups.

DNS-320L
In 2024, I retired my beloved DNS-320L, which was released in 2012 but is still perfectly usable (thanks to Alt-F firmware…see below). I’d installed two WD Red 4TB drives and still have loads of storage left over. It’s getting a little long in the tooth, and the performance is a bit lacking (36MB/s read vs. the theoretical 100MB/s maximum on GbE), but for most of my purposes it was still fine.

Alt-F Firmware
The DNS-320L is ancient, and the software it comes with is hopelessly out of date for a range of reasons. Fortunately, you can replace the stock firmware with the open-source, Linux-based Alt-F firmware. This completely replaces the D-Link firmware and provides the core functionality you need (web interface, modern Samba file shares, etc.). The project is available from SourceForge and offers good performance on a variety of older D-Link NAS platforms (see performance comparison).

SFTP throughput
When copying files to the NAS via SFTP from my server nodes, I only get about 8MB/s of throughput.

This is odd because when using SMB to transfer files from the NAS, I get a sustained 36-40 MB/s, so it appears to be a problem with the SFTP implementation.
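An easy way to reproduce the measurement, assuming a test account and share on the NAS (names and paths are illustrative):

# create a 1GiB test file, then time an SFTP upload to the NAS
dd if=/dev/urandom of=/tmp/test.bin bs=1M count=1024
time sftp -b - user@nas <<'EOF'
put /tmp/test.bin /mnt/md0/share/test.bin
EOF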

Other NAS Resources

  • https://www.pcmag.com/picks/the-best-nas-network-attached-storage-devices
  • https://nascompares.com/
  • https://www.smallnetbuilder.com/tools/finders/nas/view/