I learned recently that DHCP lets you assign specific IPs to certain MAC addresses (usually called a DHCP reservation or static lease).

Previously I had always used static IPs for devices, but the UI on certain devices makes that cumbersome. Punching in addresses with an IR remote or the arrow keys on the side of a device is no fun when you have a lot of them to do. Static IPs also need to be configured all over again if you ever have to reset the device for whatever reason.

Set up

To set up a reservation, just log into whatever device hosts your DHCP server, usually a router or a switch, and tell it to always hand a specific IP to your device's MAC address.

On most of my devices the change propagated almost instantly. On some others I had to renew the DHCP lease manually. And if a device was less important, I just waited for its lease to expire so it would pick up the new address on its own.
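
For reference, manually renewing a lease on a Linux machine usually looks something like this (the interface name here is just an example):

sudo dhclient -r eth0    # release the current lease
sudo dhclient eth0       # request a fresh one from the DHCP server

On Windows the equivalent is ipconfig /release followed by ipconfig /renew.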

Notes

Obviously, this is only useful if you use the exact same hardware regularly. If you have to swap a device you will have to update the DHCP server again.

It also assumes you have a DHCP server at all. On some live events you end up with a cheap switch and no DHCP server anywhere on the network. I suppose you could technically run a DHCP server on a random machine, but I can't think of a situation where that would be faster than just programming static IPs.
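
For what it's worth, a throwaway DHCP server with a reservation can be spun up with dnsmasq; this is only a sketch, and the interface, address range, and MAC address are placeholders:

sudo dnsmasq --no-daemon --interface=eth0 \
  --dhcp-range=192.168.10.100,192.168.10.200,12h \
  --dhcp-host=aa:bb:cc:dd:ee:ff,192.168.10.50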

Another disadvantage involves redundancy. For whatever reason my router does not have the ability to export the pre-assigned IPs. So if the router randomly explodes, I don't have an easy way to make the system redundant. Basically, if I plug in a fresh router that isn't configured yet, all the devices set to get addresses from DHCP will get NEW addresses, unless I had preconfigured the second router with the same DHCP assignments.

EDIT: I forgot my router has the ability to export all settings, but it doesn’t help much if your backup router is a different model. Still would be really nice to be able to export a table of the DHCP settings alone.

If you’re curious about how to make the system more robust, I’d ask the internet for advice on how to make redundant DHCP servers. It’s probably not going to be cheap though.

Pi-based file server

I needed a small file server with minimal power draw and data redundancy, so I went with a Raspberry Pi 4 and a couple of small SSDs. The Pi 4 can only output 1.2 amps of power from its USB ports, and the drives draw 2 amps total, so I got USB-to-SATA adapters with a USB power splitter. All the devices receive their power directly from an Anker 6-port USB power station.

OS and SSH

Went with Ubuntu Server for the OS. It's proven reliable in the past and is well documented for what we're using the Pi for. SSH also comes pre-configured, so that's one less step than setting up Raspbian (the Pi's default OS).

For reference, the default username and password in Ubuntu Server are both:

ubuntu

It will prompt you to change the password when you first log in.

Been using PuTTY on Windows to connect to it and it has been working great.
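
If you'd rather skip PuTTY, the plain OpenSSH client (built into macOS, Linux, and recent Windows) works the same way; the address below is a placeholder for the Pi's IP:

ssh ubuntu@192.168.1.50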

ufw firewall

Ubuntu comes with the ufw firewall pre-installed, and it's nice and easy to configure. You do have to enable the firewall manually though. Before you do, allow SSH through the firewall or you will lose your SSH connection:

sudo ufw allow ssh

Then enable the firewall with:

sudo ufw enable

Check firewall status with:

sudo ufw status

Keep in mind that this will allow SSH from anywhere. It would be more secure to restrict it to machines that you’re actually connecting from. Check this out for how to do that and more on firewall rules:

https://www.digitalocean.com/community/tutorials/ufw-essentials-common-firewall-rules-and-commands#service-ssh
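
For example, assuming your LAN is 192.168.1.0/24 (adjust for your own subnet), something like this swaps the wide-open rule for a LAN-only one:

sudo ufw delete allow ssh
sudo ufw allow from 192.168.1.0/24 to any port 22 proto tcp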

Data Redundancy

There are a lot of solutions for mirroring data between two drives. The one I went with is ZFS. It's super easy to set up and I've been impressed with how robust it is.

I’ve been torturing ZFS by randomly unplugging drives, modifying data on the remaining drive, and then plugging the other drive back in to make sure it can bring both back online correctly. So far it has figured out everything automatically and quickly brings both drives back in sync.

I’m confident that it will withstand a disk failure.

Ideally there would be a second identical server set up for additional redundancy, but it's not currently in the budget. With only one machine I'm currently a bit screwed if the power supply goes bad or if the Pi crashes.

For setting up ZFS check out this guide:

https://ubuntu.com/tutorials/setup-zfs-storage-pool#1-overview
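
As a rough sketch of what that guide covers (the pool name and device paths are placeholders, so check yours with lsblk first), a two-disk mirror boils down to:

sudo apt install zfsutils-linux
sudo zpool create tank mirror /dev/sda /dev/sdb
sudo zpool status tank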

Basic File Sharing

If you just need a basic networked folder on your home network, it's super easy to set up a Samba server on Ubuntu. This allows Windows machines to easily connect and access files:

https://ubuntu.com/tutorials/install-and-configure-samba#1-overview
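
Roughly, the steps in that guide look like this (the share path is just an example from my setup):

sudo apt install samba
sudo nano /etc/samba/smb.conf    # add a [share] section, e.g. path = /tank/share and read only = no
sudo systemctl restart smbd
sudo smbpasswd -a ubuntu         # give the ubuntu user a Samba password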

Connection Speed

You might be wondering how fast you can access data on this server. I used Blackmagic Disk Speed Test to check:

speed test

EDIT: my original test showed 5.8 MB/s write and 1.8 MB/s read. After some troubleshooting, it turns out my WiFi was somehow the bottleneck (despite it usually being near 100 MB/s). On Ethernet it performs at 85 MB/s write and 95 MB/s read, which is considerably closer to the local access speeds of the disks: 240 MB/s write and 844 MB/s read on ZFS.

Advanced File Sharing

This brings me to what I'm actually using the Pi for. I'm developing a project that requires moving audio and video files over the internet. Ideally this would all live in the cloud, but to save a little money I'm self-hosting an S3 bucket on the Pi.

Why an S3 bucket? Because most cloud providers support the S3 API for reading and writing data. By making the local server speak S3, the code I'm writing can transition to the cloud with minimal changes.
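
As a concrete example using the AWS CLI (the endpoint address and bucket name are placeholders, and credentials are assumed to be set up with aws configure), moving between the Pi and real S3 is mostly just an endpoint override:

aws --endpoint-url http://192.168.1.50:9000 s3 ls s3://media    # the self-hosted server on the Pi
aws s3 ls s3://media                                            # the same command against actual S3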

Since Amazon S3 is a bit expensive, I’m likely to use Backblaze when I fully transition to the cloud.

For local hosting I'm using a service called MinIO since it's easy to set up. You can have it running in minutes and start playing around with the S3 API on your local network. It's really cool.

If you're just playing around at home, follow this guide:

https://docs.min.io/docs/minio-quickstart-guide.html#gnulinux
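
The gist of it, assuming the binary has already been downloaded per that guide and the data lives on the ZFS pool (the path is just my layout):

chmod +x minio
./minio server /tank/minio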

If you plan on having it visible to the internet, this guide will help you be far more secure:

https://www.digitalocean.com/community/tutorials/how-to-set-up-an-object-storage-server-using-minio-on-ubuntu-18-04

Regardless, the default username and password for MinIO are both:

minioadmin

Be sure to change them.

If you turned on the firewall, you will have to add port 9000 (the default MinIO port) to ufw.
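
That's just one more rule:

sudo ufw allow 9000/tcp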

We're starting to roll out some music videos, the first of which came out yesterday (Jan 17, 2021): https://youtu.be/UJk4JJz5OXI. The next two videos will roll out weekly on Sundays at 1200 CST (Jan 24th and Jan 31st, 2021).

During this release process we ran into some issues with YouTube’s video compression that are relevant to other creators.

The problem and solution

1080p h.264 12Mbps before upload
1080p h.264 12Mbps upload → YouTube AVC out

I always knew this segment at the end would cause a problem for compression. Moving particles and confetti-like things are notorious for wrecking temporal video compression like H.264, but I didn't anticipate that YouTube would handle it so poorly.

After some research I discovered that YouTube uses AVC compression for 1080p videos. If YouTube thinks your channel is important, you get upgraded to VP9 compression on all videos. Otherwise, the only current way to get upgraded to VP9 is to upload at least 2K video. I tested it with 1440p and 4K uploads and was upgraded (as of January 2021). There seem to have been alternate ways to do this in the past, but none of them still work to my knowledge.

More on AVC vs VP9

AVC is the same thing as H.264, but YouTube must use a low bitrate because the output is garbage. VP9, on the other hand, is an open compression format developed by Google to compete with H.265 (HEVC). It's likely YouTube defaults to AVC because it produces a similar file size to VP9 at 1080p and below while encoding faster. However, VP9 offers superior visual quality, as shown below.

1440p h.264 60Mbps upload → YouTube VP9 out

That's the same file scaled up to 1440p (60 Mbps), which triggers the VP9 encoding. It's not perfect, but it is FAR better.

Overshoot recommended bitrate to further improve quality

It also seems that you should overshoot YouTube’s recommended bitrate as seen here: https://support.google.com/youtube/answer/1722171?hl=en#zippy=%2Cbitrate

My only real-world test of this was a 1080p h.264 60 Mbps upload, which still only gets you the AVC codec. But you can see below that it looks even better than the original 12 Mbps version I uploaded at the top of this post.

1080p h.264 60Mbps upload → YouTube AVC out

Checking YouTube encoding settings

If you right-click the YouTube video and click "Stats for nerds", there is a section that tells you the codec used. If it says "VP09" next to "Codecs", your video got upgraded.

stats youtube shows you

Upscaling Video

Since I wanted to actually increase the rendered pixels (instead of just interpolating pixels or using nearest neighbor), I tried AI upscaling.

Found a cool thing on GitHub that lets you use Waifu2x (for real, that's what it's called) or some other algorithms for upscaling video. It's called Video 2X: https://github.com/k4yt3x/video2x

Unfortunately for this particular video it did a terrible job, but it usually works well on normal animation (waifu2x algorithm) or live action (SRMD or RealSR algorithms). Keep in mind that its encoding process TAKES FOREVER. Also note that Video 2X appears to be doing absolutely nothing for quite a while as it transcodes your movie file to a PNG sequence for upscaling each frame.

Waifu 2x butchering my child in UHD

The depth map imagery confused Waifu2x, so I had to go back to the source files and re-render the depth maps at the highest resolution possible (which wasn't very high, since I quickly ran into VRAM limitations with only 6GB of VRAM right now).

2d to depth map

For those curious, the depth maps used in this video series are generated from normal camera footage using this machine learning model: http://stereo.jpn.org/jpn/stphmkr/google/indexe.html

Earlier this week we began posting Jam Sessions to our YouTube channel. It's something Nessa and I wanted to start doing to get better at live improv and performance. It's also a learning effort to understand some of our hardware a bit better.

Nessa is more familiar with the hardware than I am though. Personally, I’m much more comfortable working in Ableton or another DAW, but this has been a fun experience so far.

One of the jams:

Setup Difficulties:

Currently, it is kind of a pain in the ass to sequence things. Right now the DrumBrute acts as the leader clock for the Moog and the Juno, except the sequencers on both the Moog and the Juno are a bit annoying to work with.

Yesterday I rewired it so the BeatStep Pro is the leader clock for everything, and I'm going to try using its sequencer instead of the Juno's or the Moog's.

We also hijacked a table from another part of the house to allow us to have a proper keyboard for the Juno. The ideal setup would probably be the KeyStep Pro instead of the BeatStep (sequencing melodies on the drum pads is a little annoying), but that didn't exist when we got the BeatStep.