I learned recently that DHCP lets you reserve specific IPs for certain MAC addresses.

Previously I had always used static IPs for devices, but the UI on some of them is kind of cumbersome. Setting addresses with IR remotes or arrow keys on the side of a device is no fun when you have a lot of them to do. Static IPs also need to be configured again if you ever have to reset the device for whatever reason.

Setup

To set up a reservation, just log into whatever device hosts your DHCP server, usually a router or a managed switch, and tell it to always hand a specific IP to your device's MAC address.

On most of my devices the change took effect almost instantly. On some others I had to ask them to renew their DHCP lease manually, and if the device was less important I just waited for the lease to expire so it would pick up the new address.
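
If a device doesn't pick up the new address on its own, you can usually force a lease renewal from the command line. On Linux machines that use dhclient, it's something like this (eth0 is just an example interface name, substitute your own):

sudo dhclient -r eth0
sudo dhclient eth0

And on Windows:

ipconfig /release
ipconfig /renew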

Notes

Obviously, this is only useful if you use the exact same hardware regularly. If you have to swap a device you will have to update the DHCP server again.

It also assumes you have a DHCP server. At some live events you end up with a cheap switch with no DHCP server on it. I suppose you could technically run a DHCP server on a random machine, but I can't think of a situation where that would be faster than just programming static IPs.
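
For what it's worth, if you ever did need an ad-hoc DHCP server on a laptop, dnsmasq can do it with a few lines in /etc/dnsmasq.conf. Everything below (interface, range, MAC, IP) is a placeholder, so treat it as a rough sketch:

interface=eth0
dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-host=aa:bb:cc:dd:ee:ff,192.168.1.50

The dhcp-host line is the equivalent of the reservation described above: that MAC always gets that IP.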

Another disadvantage involves redundancy. For whatever reason my router can't export its list of reserved IPs, so if the router randomly explodes I don't have an easy way to rebuild the setup. If I plug in a fresh router that isn't configured yet, every device set to get its address from DHCP will get a NEW address, unless the second router was preconfigured with the same reservations.

EDIT: I forgot my router can export all of its settings, but that doesn't help much if your backup router is a different model. It would still be really nice to be able to export a table of just the DHCP reservations.

If you’re curious about how to make the system more robust, I’d ask the internet for advice on how to make redundant DHCP servers. It’s probably not going to be cheap though.

Pi-based file server

I needed a small file server with minimal power draw and data redundancy. Went with a Raspberry Pi 4 and a couple of small SSDs. The Pi 4 can only supply about 1.2 amps to its USB ports, and the drives draw 2 amps total, so I got USB-to-SATA adapters with a USB power splitter. All of the devices receive their power directly from an Anker 6-port USB power station.

OS and SSH

Went with Ubuntu Server for the OS. It's proven reliable in the past and is well documented for what I'm using the Pi for. SSH also comes pre-configured, so that's one less step than setting up Raspbian (the Pi's default OS).

For reference, the default username and password in Ubuntu Server are both:

ubuntu

It will prompt you to change the password when you first log in.

Been using PuTTY on Windows to connect to it, and it has been working great.
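
If you're connecting from macOS or Linux instead of Windows, the built-in ssh client does the same job (the address below is a placeholder for whatever IP your DHCP reservation hands the Pi):

ssh ubuntu@192.168.1.50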

ufw firewall

Ubuntu comes with the ufw firewall pre-installed, and it's nice and easy to configure. You do have to enable it manually though. Before you do, allow SSH through the firewall or you will lose your SSH connection:

sudo ufw allow ssh

Then enable the firewall with:

sudo ufw enable

Check firewall status with:

sudo ufw status

Keep in mind that this will allow SSH from anywhere. It would be more secure to restrict it to machines that you’re actually connecting from. Check this out for how to do that and more on firewall rules:

https://www.digitalocean.com/community/tutorials/ufw-essentials-common-firewall-rules-and-commands#service-ssh
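
For example, to only allow SSH from one trusted machine, you could replace the broad rule with something like this (the address is a placeholder for your own workstation's IP):

sudo ufw delete allow ssh
sudo ufw allow from 192.168.1.20 to any port 22 proto tcp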

Data Redundancy

There are a lot of solutions for mirroring data between two drives. The one I went with is ZFS. It's super easy to set up and I've been impressed with how robust it is.

I’ve been torturing ZFS by randomly unplugging drives, modifying data on the remaining drive, and then plugging the other drive back in to make sure it can bring both back online correctly. So far it has figured out everything automatically and quickly brings both drives back in sync.

I’m confident that it will withstand a disk failure.

Ideally there would be a second identical server set up for additional redundancy, but it's not currently in the budget. With only one machine I'm a bit screwed if the power supply goes bad or the Pi crashes.

For setting up ZFS check out this guide:

https://ubuntu.com/tutorials/setup-zfs-storage-pool#1-overview
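
For reference, a two-drive mirror is roughly a one-liner once ZFS is installed. The pool name (tank) and device paths below are placeholders; since these drives hang off USB adapters, the stable /dev/disk/by-id/ paths are safer than /dev/sda-style names, which can shuffle between boots:

sudo apt install zfsutils-linux
sudo zpool create tank mirror /dev/disk/by-id/usb-DRIVE_A /dev/disk/by-id/usb-DRIVE_B
sudo zpool status tank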

Basic File Sharing

If you just need a basic network folder on your home network, it's super easy to set up a Samba server on Ubuntu. This will let Windows machines easily connect and access files:

https://ubuntu.com/tutorials/install-and-configure-samba#1-overview
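
As a rough sketch of what that guide walks you through (the share name and path are placeholders, pointing at wherever your ZFS pool is mounted), the block added to the end of /etc/samba/smb.conf looks something like:

[share]
    comment = Pi file server
    path = /tank/share
    read only = no
    browsable = yes

Then give your user a Samba password and restart the service:

sudo smbpasswd -a ubuntu
sudo systemctl restart smbd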

Connection Speed

You might be wondering how fast you can access data on this setup. I used the Blackmagic Disk Speed Test to check it:

[Blackmagic Disk Speed Test screenshot]

EDIT: my original test showed 5.8 MB/s write and 1.8 MB/s read. After some troubleshooting, it turns out my Wi-Fi was the bottleneck somehow (despite it usually being near 100 MB/s). On Ethernet it performs at 85 MB/s write and 95 MB/s read, which is considerably closer to the local speeds of the disks, which run at 240 MB/s write and 844 MB/s read on ZFS.
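
If you want to sanity-check whether the network or the disks are the bottleneck, a rough local throughput test with dd on the Pi itself is good enough (the path assumes the ZFS pool is mounted at /tank, adjust to yours, and note that ZFS caching can inflate the read number):

dd if=/dev/zero of=/tank/ddtest bs=1M count=1024 conv=fdatasync
dd if=/tank/ddtest of=/dev/null bs=1M
rm /tank/ddtest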

Advanced File Sharing

This brings me to what I'm actually using the Pi for. I'm developing a project that requires moving audio and video files over the internet. Ideally this would live entirely in the cloud, but to save a little money I'm self-hosting an S3 bucket on the Pi.

Why an S3 bucket? Because most cloud providers support the S3 API for reading and writing data. By making the local server speak S3, the code I'm writing can transition to the cloud with almost no changes.

Since Amazon S3 is a bit expensive, I’m likely to use Backblaze when I fully transition to the cloud.

For local hosting I'm using a service called MinIO. You can have it running in minutes and start playing around with the S3 API on your local network. It's really cool.

If you're just playing around at home, follow this guide:

https://docs.min.io/docs/minio-quickstart-guide.html#gnulinux
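
The gist of that quickstart is downloading the server binary and pointing it at a data directory. The folder below is just an example and the exact download path may have changed, so defer to the guide:

wget https://dl.min.io/server/minio/release/linux-arm64/minio
chmod +x minio
./minio server /tank/minio-data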

If you plan on having it visible to the internet, this guide will help you be far more secure:

https://www.digitalocean.com/community/tutorials/how-to-set-up-an-object-storage-server-using-minio-on-ubuntu-18-04

Regardless, the default username and password for MinIO are both:

minioadmin

Be sure to change them.
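
Once it's running, anything that speaks S3 can talk to it. With the AWS CLI, for example, run aws configure and enter your MinIO access key and secret, then point the endpoint at the Pi (the IP, bucket name, and file below are placeholders). The same commands later work against Amazon S3 or Backblaze just by swapping the endpoint and credentials:

aws --endpoint-url http://192.168.1.50:9000 s3 mb s3://media-test
aws --endpoint-url http://192.168.1.50:9000 s3 cp video.mp4 s3://media-test/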

If you turned on the firewall, you will have to open port 9000 (the default MinIO port) in ufw:
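
sudo ufw allow 9000/tcp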