Subversion Security for Teams

I’ve already ranted about why I still use SVN instead of Git. That said, it isn’t always obvious how to set up an SVN server securely, especially if you want fine-grained access control so certain users only get access to certain repositories or projects within repositories. It’s actually pretty easy with a few clever tricks.

My Requirements

  • Lightweight server – you shouldn’t need a lot of resources. I run Subversion in a Proxmox VM with 2GB of RAM and 1 processor core allocated, with minimal Ubuntu Server as the OS.
  • Server access must be via SSH only. These days, nothing should be exposed to the internet that doesn’t use SSH with public key authentication.
  • The server should support multiple repositories.
  • You should be able to restrict user access to certain repositories and also specific projects (folders) within a given repository.
  • Disaster recovery should be easy.

How To Do It

1. Set up the server VM

  • Download the Ubuntu Server ISO and store it in the /var/lib/vz/template/iso folder of your Proxmox host machine (if you don’t use Proxmox, you probably should).
  • Create a Proxmox VM with 2GB RAM, 32GB disk space, and 1 CPU core (which should be sufficient), and select the Ubuntu ISO.
  • During installation, select the minimal install; it has nearly everything you’ll want and installing more things is easy using apt.
  • During installation, enable openssh access and create your superuser account.
  • Once installed, update and upgrade as usual (sudo apt update && sudo apt upgrade).
  • Install your public key in your /home/myUser/.ssh/authorized_keys file. If you’re not familiar with how to do this, see here.
  • Once you have confirmed that your public key login is working, disable root login and password access in /etc/ssh/sshd_config.
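
For reference, the relevant sshd_config directives on a stock Ubuntu install are:

    PermitRootLogin no
    PasswordAuthentication no

Then restart the SSH service (sudo systemctl restart ssh) for the changes to take effect.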

2. Install Subversion and tools

  • Install Subversion from the Ubuntu repo:
    sudo apt-get install subversion
  • Install your favorite editor and any other tools you might want (vim, iputils-ping, etc.)

3. Create Subversion User and Repository

  • Create a user named “svn”
    sudo adduser svn
  • Create a folder to hold your SVN repositories:
    sudo mkdir /srv/svn
  • Assign ownership of that folder to the svn user:
    sudo chown svn:svn /srv/svn
  • Create an SVN repository:
    sudo svnadmin create /srv/svn/myRepo
  • Make sure the entire new repo is owned by the svn user:
    sudo chown -R svn:svn /srv/svn/myRepo
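
If you need more than one repository (a second repo, myRepo2, is used as an example later), just repeat the last two steps with the new name:

    sudo svnadmin create /srv/svn/myRepo2
    sudo chown -R svn:svn /srv/svn/myRepo2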

4. Set up SSH access

  • Create the folder /home/svn/.ssh (permissions 700)
  • Create the file /home/svn/.ssh/authorized_keys (permissions 600); example commands for both steps appear after this list.
  • Add the SSH public key(s) for your workstation(s) to the authorized_keys file.
  • In front of each key, add the following (so that this and the key are all on one line):
    command="svnserve -r /srv/svn/myRepo -t --tunnel-user=myUserName"
    An example of two lines in the file might be:
    command="svnserve -r /srv/svn/ -t --tunnel-user=bob" ecdsa-sha2-nistp521 <Bob's ECC public key>== BobLaptop1
    command="svnserve -r /srv/svn/myRepo -t --tunnel-user=alice" ssh-rsa <Alice's public RSA key>== AliceContractorPC
  • You will probably want other options in addition to the command option to keep things locked down. All options should be comma-separated with no spaces unless within double-quotes; options you might want after the command option might include:
    ,no-port-forwarding,no-agent-forwarding,no-pty,no-X11-forwarding
    So a full line for alice might look like:
command="svnserve -r /srv/svn/myRepo -t --tunnel-user=alice",no-port-forwarding,no-agent-forwarding,no-pty,no-X11-forwarding <Alice's public RSA key>== AliceContractorPC

5. Set up SVN fine-grained user authorization

  • Edit /srv/svn/myRepo/conf/svnserve.conf and, in the [general] section, set:
    anon-access = none
    auth-access = write
    authz-db = authz
  • Edit /srv/svn/myRepo/conf/authz:
    [/]
    unrestrictedUserName = rw
    [/projectA]
    restrictedUserName = rw
  • This sets up an unrestricted user with complete access to all projects in myRepo
    and a restricted user who only has access to projectA within myRepo.
    SVN authz supports groups for very flexible permissions:
    [groups]
    myGroup = userA, userB
    [/projectB]
    @myGroup = rw
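
Putting it all together, a complete authz file for myRepo might look like this (bob, alice, and the project folders are just placeholders):

    [groups]
    developers = bob, alice

    [/]
    bob = rw

    [/projectA]
    alice = rw

    [/projectB]
    @developers = rw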

Users will always access the server as the svn user. The trick is in the authorized_keys file, which chooses the right SVN username based on the public key that matched the login. When Bob logs in, his private key matches his public key in the authorized_keys file, which assigns his --tunnel-user name “bob”. When Alice logs in, her private key matches her public key, which assigns her --tunnel-user name “alice”. SVN then further restricts their access within each repository according to their tunnel-user names in the authz file(s).

Some nifty things to note:

  • You don’t have to create Linux login users for bob and alice. In fact, the only users who should have login shell access are you (your super-user account) and svn, and they should only have public key access (no password login should be allowed).
  • When bob and/or alice log in as the svn user, they only get access to SVN
    (the only command they can run is svnserve, per the authorized_keys file).
  • You can restrict bob or alice to a specific repo in the authorized_keys file by setting their svn “root” using the -r parameter. So, for example, for alice:
    command="svnserve -r /srv/svn/myRepo -t --tunnel-user=alice"
    Alice is then restricted to the myRepo repository and may be further restricted by the authz file for that repository, based on her tunnel-user name, to specific projects. She will not even know that there are other repos; all references to projects will be relative to the root, so for example she might check out:
    svn+ssh://svn@myhost.mydomain.com/myProject
    where myProject is in myRepo; Alice would not know about myRepo2.
  • You can implement fine-grained restrictions within each repository using the authz mechanism. You can read about that in detail here.
  • To access the server (e.g. from TortoiseSVN or your favorite SVN client), use the svn+ssh protocol with a URL like this:
    svn+ssh://svn@myhost.mydomain.com
    Your private key will grant you access as the svn user and determine which user SVN treats you as for security purposes.
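
For example, a full command-line checkout of the myProject example above would look like this (the host and project names are placeholders):

    svn checkout svn+ssh://svn@myhost.mydomain.com/myProject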

Backups

Although you can always back up your repo(s) using the svnadmin dump facility:
svnadmin dump /srv/svn/myRepo | bzip2 > myRepo_dump-$(date +%Y%m%d).svn.bz2
that only backs up the repository itself; it doesn’t back up all the work you did above to set up logins and permissions.
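
Restoring such a dump into a freshly created repository is the reverse operation (the date in the file name is illustrative):
bzcat myRepo_dump-20240901.svn.bz2 | svnadmin load /srv/svn/myRepo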

IMO, it’s much better for disaster recovery to simply back up your entire SVN server using Proxmox. That way, you can stand up a new Proxmox machine, restore the backup, and be up and running in a few minutes!

QNAP NAS

I replaced my ancient (but working) D-Link NAS with a much newer and faster QNAP NAS (model TS-464-8G). The QNAP hardware is nice: a compact package that supports 4 SATA drives in a variety of RAID configurations, 2x NVMe drives, and 2x 2.5GbE ports (with the option to add a 10GbE card); it has a slick web-based user interface and consumes relatively little power. It runs a custom Linux on a Celeron N5095.
I don’t like the custom Linux.

UPS Support

Naturally, I want my data storage to be protected by a UPS and to automatically and safely shut down before the UPS battery is exhausted during an extended power outage. I use a CyberPower CP1500PFCLCD UPS (which I am very happy with so far) to protect several NUC servers, an L2 switch, and the NAS. The UPS is connected to one of the NUC Proxmox servers via USB. I run NUT on that server, including the nut-server that allows other machines (such as the NAS) to access the UPS over the network as NUT clients. The problem is that QNAP makes this more difficult than it has to be. They support NUT (which is nice), but they seem to have done so mainly to allow one QNAP NAS to access another QNAP NAS that is connected to the UPS.

This is what I had to do to get the QNAP NAS to run as a generic nut client:

  • Control Panel -> External Device -> UPS Tab
  • Select Network UPS Slave
  • Enter the IP address of your nut server
  • Apply changes
  • Restart the NAS to get upsutil (the NUT client daemon) running

How did the NAS get the NUT UPS name, user name, and password used on the nut-server? It didn’t; QNAP’s NUT support hard-codes them as ‘qnapups’, ‘admin’, and ‘123456’. And folks wonder why QNAP has had security issues.

You can change the user name and password by enabling the admin user, logging into the NAS via ssh as the admin user, and editing /etc/config/ups/upsmon.conf (make a .orig copy first). Find the line that reads:
MONITOR qnapups@myNutServerIp 1 admin 123456 slave
and replace ‘admin’ and ‘123456’ with the user name and password you have assigned for slave devices on your NUT server in /etc/nut/upsd.users.
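
On the NUT server side, the matching slave entry in /etc/nut/upsd.users would look something like this (the user name and password here are examples; use your own):

[qnapmon]
    password = SomeStrongPassword
    upsmon slave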

Unfortunately, QNAP doesn’t let you change the UPS name; it *must* be qnapups.
Fortunately, NUT provides a workaround that doesn’t require changing all the other NUT clients. On your NUT server, edit your /etc/nut/ups.conf file and add a new dummy UPS named qnapups that points back to your real UPS. For example, my ups.conf ends with:

[cp1500]
    driver = usbhid-ups
    port = auto
    desc = "CyberPower CP1500PFCLCDa"
    vendorid = 0704
    productid = 0601

[qnapups]
    driver = dummy-ups
    port = cp1500@myNutServerIpAddress
    desc = "Proxy UPS for QNAP NAS"

Restart the nut-server (sudo service nut-server restart) and voilà, your QNAP can then see the UPS.

Proxmox VMs

It takes a lot of time to set up a server, and it must then be maintained, including regular backups. Virtualization can help with a lot of this. Modern computers have lots of cores, memory, and disk space, so it is now possible to run multiple servers as virtual machines within a single physical server. This arrangement offers lots of advantages, including:

  • Use resources efficiently – many servers use only a small fraction of the physical machine’s capability, so you can easily run quite a few virtual servers on one physical machine.
  • Keeping servers and their environments separate helps avoid conflicts
  • Easily perform “bare metal” backups of virtual servers and restore them to the same or a new physical server for quick disaster recovery.
  • Easily allocate and expand resources (within the limits of the physical server)

I generally run my home servers on Intel NUC platforms because they offer a nice balance of computing power and power efficiency. A basic NUC 12 Pro with an i5-1240P or higher processor has at least 12 cores, up to 64GB of RAM, a fast NVMe gen 4 drive, 2.5GbE, and a TDP of only 28W. For bulk storage, you can use a NAS or connect a DAS via USB3.2 for very high speed. They stack, they’re small, quiet, and the low power consumption means a typical UPS will carry them through most outages. In short, they’re great little servers.

For virtualization, I like Proxmox. Proxmox is Debian based; it installs quickly from a USB flash drive and provides a friendly web-based management interface that is exactly what’s needed. It allows you to see the status and manage both the physical server and the VMs. It has a tightly integrated KVM hypervisor so you can access the console of each VM and the physical server remotely via the web interface.

Proxmox also makes it easy to make “bare-metal” backups, which take a snapshot of the entire VM that can be easily restored in case of disaster, either on the same physical server or a new one. The backup files are sparse and compressed; a machine with 64GB of storage that is using 24GB will yield a snapshot file of roughly 12.5GB. You can download the snapshots and store them on bulk storage and off-site. The fact that Proxmox is so easy to install, and that you can then restore entire VM snapshots quickly, means that even if the physical server and/or storage failed completely, you can be back up and running on a new machine in less than 30 minutes. (I have done this.)
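
The same kind of backup can also be scripted from the Proxmox shell with the vzdump tool; the VM ID and storage name below are examples:

vzdump 101 --mode snapshot --compress zstd --storage local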

Many servers don’t need a lot of compute power; most of mine do just fine with 4-8GB of RAM, 2-4 cores, and 32-64GB of storage. This means I can host several servers on a NUC12 without it breaking a sweat. Keeping servers separate (e.g. database, middleware, web applications, etc.) makes it easy to scale and to upgrade individual servers without software or hardware conflicts with others.

Update Sept 2024:
I have been using Proxmox for a few months now and still like it a lot; it simply works. The web-based management is perfect – exactly what’s needed for managing VMs with excellent status-at-a-glance and detailed configuration pages. I have resized VM hardware allocations on the fly several times and the interface is fast and intuitive. The bare-virtual-metal backups are easy to do and give me confidence about disaster recovery.

The only thing I think Proxmox should change is their lowest-tier (community) pricing model:

  • I hate renting software; if they simply charged a fixed price with optional annual maintenance, most serious users (including me) would likely end up buying the maintenance anyway since nobody wants to run a server without the latest security fixes. However, I don’t want that decision forced on me; I like to own not rent.
  • The per-socket pricing effectively forces certain hardware choices: in particular, a single high-power server over a few low-power machines (like NUCs) that have been re-purposed. This removes a certain level of flexibility (the opposite of everything else about Proxmox). For the community model, a site license that covered a small number of machines or some aggregate level of computing power might make more sense.
  • Finally, $110/year/socket is well above the no-brainer cost for most users. I’d bet Proxmox would net significantly more revenue if they lowered their community pricing to the ubiquitous $99 purchase + $49/year optional maintenance.

Network Attached Storage (NAS)

Everyone uses cloud storage these days, but I still find local storage useful, especially for large files like photos, videos, music and such. For local storage, I use network attached storage (NAS): a black box with redundant hard drives that is connected to my network. Anyone on the network can access the storage (assuming they have the appropriate permissions).

The NAS box should have at least 2 drives configured as RAID1, or 3 or more drives configured as RAID5, so that there is redundancy: if a hard drive fails (and it will), the data survives on the remaining drive(s) (mirrored for RAID1, reconstructed from parity for RAID5), so you can replace the failed drive with no loss of data. The NAS is always online, making it a convenient place to back up the drives of desktops/laptops.

Note: it’s important to use hard drives designed for NAS storage (always on) such as the Western Digital Red NAS series or Seagate Exos series.

Although you can make any Linux computer a NAS, I’ve found dedicated NAS boxes to be very useful; they typically use little power, take little space, are quiet, and are meant to operate continuously for years. I’ve had quite a few NAS boxes made by D-Link starting with their DNS-321 and have now moved to a QNAP NAS.

QNAP TS-464

As of November 2024, I moved to a QNAP TS-464-8G (see: datasheet). There was nothing wrong with the old DNS-320L, which is still an amazing NAS, but I wanted something faster. The TS-464 is 10 years newer and simply better in every dimension:

  • Celeron N5095 quad-core up to 2.9GHz
  • 8GB DDR4 RAM (expandable to 16-64GB)
  • Dual 2.5GbE Ethernet (can support 10GbE via expansion card)
  • Dual USB 3.2 Gen 2 (10Gbps)
  • 4x HDD bays
  • 2x NVMe slots
  • Advanced software including automatic snapshots
  • NASCompares Reviews: 2022, 2024
  • Should reach 280MB/s+ using a single 2.5GbE port
    (with a 2.5GbE switch)

I’ve outfitted the NAS with:

  • 16GB of DDR4 RAM (it came with 8GB and I added a spare 8GB SO-DIMM)
  • 3x Seagate Exos X16 ST14000NM005G 14TB hard disk drives
    (configured as RAID5 yielding roughly 25TB of net storage)
  • A spare Samsung Evo 970 Plus 1TB NVMe drive

One gripe: poor support for SFTP. Although QNAP runs a custom Linux variant, it is somewhat locked down and is missing some surprising things. For example, I use Proxmox VMs for most of my server functions (and even some virtual workstations). Proxmox is awesome. Among other things, it makes it easy to create backups of any VM that you can easily restore, even to another machine (backups live in /var/lib/vz/dump). With my old NAS, I would create the VM backup(s) and have the NAS (which is in my secure network) sftp to the Proxmox VM (which might be outside my secure network if it runs servers exposed to the internet) and download the backup file(s). Bizarrely, the QNAP NAS doesn’t come with an sftp client; instead, they want you to use a package called QuFTP, which only supports FTPS (less secure than sftp). The NAS does support an sftp daemon (you can sftp into the NAS); you’ll need to enable ssh access via the GUI and add your user account to the AllowUsers line in /etc/ssh/sshd_config. However, I don’t want the insecure machine to have ssh access to the secure machine, and the sftp access seems problematic (it often drops the connection mid-transfer).

Drives

For the first time ever, I bought used (factory re-certified Seagate Exos) drives because they were considerably less expensive, and RAID5 should provide protection against any one drive failing. They are warrantied for 5 years by the vendor, and I plan to rotate them out for use as permanent backup drives before the end of their warranty. I bought the drives from goharddrive.com for $130 each (total: $390 including tax, shipping, and the 5-year warranty). The drives arrived well packed; I ran the full SMART test (which took 12+ hours) and they checked out OK; their date of manufacture was mid-2021 (about 3 years old). We’ll see.

The NAS (without drives) cost $469+tax from Amazon on Black Friday. The NVMe drive and extra 8GB were spare hardware I already had. The HDDs cost $390, so, in total, I spent $859 for this 25TB of RAID5 storage. If the drives hold up well, I’ll order a few more for use as cold spares and for backups.

DNS-320L
In 2024, I retired my beloved DNS-320L, which was released in 2012 but is still perfectly usable (thanks to Alt-F firmware; see below). I’d installed two WD Red 4TB drives and still have loads of storage left over. It’s getting a little long in the tooth, and the performance is a bit lacking (36MB/s read vs. the theoretical 100MB/s maximum on GbE), but for most of my purposes it was still fine.

Alt-F Firmware
The DNS-320L is ancient, and the software it comes with is hopelessly out of date for a range of reasons. Fortunately, you can replace the stock firmware with the open-source, Linux-based Alt-F firmware. This completely replaces the D-Link firmware and provides the core functionality you need (web interface, modern Samba file shares, etc.). The project is available from SourceForge and offers good performance on a variety of older D-Link NAS platforms (see performance comparison).

SFTP throughput
When copying files to the NAS via SFTP from my server nodes, I only get about 8MB/s of throughput.

This is odd because when using SMB to transfer files from the NAS, I get a sustained 36-40 MB/s, so it appears to be a problem with the SFTP implementation.

Other NAS Resources

  • https://www.pcmag.com/picks/the-best-nas-network-attached-storage-devices
  • https://nascompares.com/
  • https://www.smallnetbuilder.com/tools/finders/nas/view/