We’re using a custom version of the Knive Streaming Server (“It cuts, live”) to stream video off a Blackmagic ATEM TV Studio video switcher (using atemclient) to iOS and Mac clients via HTTP Live Streaming. Other clients might work as well, but HLS support varies by client version and is generally a major pain.
In the past, we ran Knive and its ffmpeg processes in virtual machines, together with the web server and parts of the CDN, on one or two physical machines. That proved somewhat unpredictable and unreliable once the live audience at the venue effectively launched a DDoS on that machine.
Portability of the streaming server was problematic as well, as it requires installing a bunch of obscure Python libraries and hand-tuning an ffmpeg build from source. Authentication to the stream and serving of the m3u8 playlists run on Node.js, and the media segments are served to the CDN via Apache. In front of these web services sits an instance of Varnish that handles everything on port 80 and caches certain parts.
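The routing idea can be sketched in VCL roughly like this — note this is a hedged sketch assuming Varnish 3 syntax; the backend names, ports and URL patterns here are assumptions, not our exact config:

```vcl
# Sketch only: route playlist requests to Node.js, everything else to Apache.
backend node   { .host = "127.0.0.1"; .port = "8000"; }   # auth + m3u8 playlists
backend apache { .host = "127.0.0.1"; .port = "8080"; }   # media segments

sub vcl_recv {
    if (req.url ~ "\.m3u8$") {
        set req.backend = node;
        return (pass);            # playlists change every few seconds, don't cache
    }
    set req.backend = apache;     # finished segments never change, let them cache
}
```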
So, Docker comes to the rescue:
On our regular server, these four applications already run dockerized. The streaming server lives in a hand-tuned image that can’t be built from a Dockerfile just yet, but it can easily be ‘docker save’d out to an archive and ‘docker load’ed on the new machine. It’s pretty large (2-3 GB), though a lot of that is temporary cruft that could be removed. The Varnish, Apache and Node.js images build nicely from standard packages with Dockerfiles.
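The round trip for the hand-tuned image looks roughly like this — image and host names are placeholders, and the `run` wrapper only prints each command here so the sequence reads in one place; swap it for direct execution on a real box:

```shell
# Sketch of moving the hand-tuned image to the new machine; "knive" and
# "new-droplet" are placeholder names. run() prints instead of executing.
run() { echo "$@"; }

run docker save -o knive-image.tar knive                  # export image to a tarball
run scp knive-image.tar root@new-droplet:                 # copy it to the new machine
run ssh root@new-droplet docker load -i knive-image.tar   # import it on the other side
```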
Most of the configuration files for these images live in a git repository that can be cloned onto the new machine. Its subdirectories can be mounted as Docker volumes; e.g. /etc/apache2/ inside the container comes from the corresponding folder in the config repository, ~/docker/apache2/etc/apache2.
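As a sketch, starting the Apache container with its config mounted from the cloned repo could look like the following — the image name is an assumption, and again the `run` wrapper only prints the command instead of executing it:

```shell
# Hypothetical docker run for the Apache container; "apache-img" is a
# placeholder image name. run() prints instead of executing in this sketch.
run() { echo "$@"; }

run docker run -d --name apache \
    -v ~/docker/apache2/etc/apache2:/etc/apache2 \
    apache-img
```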
The new machine: I chose DigitalOcean to host the stream, and it performed beautifully. Setting up the system only requires creating a new droplet (a.k.a. a VM) from DO’s Docker on Ubuntu image. All the setup can be done on the smallest instance type ($5/month) and scaled up later just for the event.
Scaling droplets up and down at DigitalOcean comes with a couple of caveats. There are two ways of scaling up: fast resize and migrate resize. To do a fast resize, halt the system and choose a new size in the web interface. If the machine that hosts your droplet has free capacity, you’ll get a fast resize that increases the CPU and RAM available to the droplet, while disk space stays at the smaller initial size.
To migrate resize, you also halt the system first, then snapshot it in the web interface, then destroy the droplet and create a new one from that snapshot. This increases CPU, RAM and disk space. It also makes it impossible to scale the droplet back down later, as DigitalOcean requires the destination droplet’s disk to be the same size as the snapshot’s or larger.
To scale down, you again halt the droplet, snapshot it, destroy it and restore the snapshot to a new droplet. If the disk size in the snapshot is too large for the new droplet because of an earlier migrate resize, this won’t work, even if the additional capacity was never actually used.
Here’s the graph from DigitalOcean, on the 8-CPU box, encoding 4 quality levels (720p, 360p, 2xx and audio-only) in real time. The first CPU peak is a test run; the second, bigger peak is the actual show, with a little more incoming bandwidth and actual users on the stream.
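For reference, one of those real-time encodes could be sketched like this — the input address, bitrates, resolution and segment length here are all placeholders rather than our hand-tuned build’s actual flags, and one such ffmpeg process runs per quality level:

```shell
# Hedged sketch of a single 720p HLS encode; input address, bitrates and
# segment length are assumptions, not our exact invocation.
FFMPEG_720P="ffmpeg -i tcp://127.0.0.1:9000 \
  -c:v libx264 -preset veryfast -s 1280x720 -b:v 2000k \
  -c:a aac -b:a 128k \
  -f hls -hls_time 10 720p.m3u8"
echo "$FFMPEG_720P"   # printed here; the container runs it for real
```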
Since I had done the migrate resize earlier, I had to move the Docker images to a smaller droplet once more after the live event to scale back down, but this also worked beautifully.
If you want to play with DigitalOcean, use this link to sign up and receive $10 free credit, enough to run the smallest instance for two months.