The Use Case
As part of the pandemic, many of us found ourselves needing to stream the output of a monitor to a client with low latency and at manageable cost.
There are a few use models for live streaming that need to be separated:
The 1-to-many stream, where scale is of the utmost importance but latency is acceptable because the viewers do not directly interact with the presenter and the time delay is not noticeable to them. By scale we mean hundreds to thousands of viewers around the world.
Then there is the 1-to-few stream, where the viewers do have a timing reference and latency does matter. Common use cases are conference calls on Zoom, Skype, etc. Because the viewers react directly and have two-way audio communication, either as part of the stream or in a parallel channel, any noticeable delay impacts the experience. Many consumer-grade solutions compensate with low-resolution, low-frame-rate streams that don't burden encoders and network channels. It's acceptable to drop frames, and for the video and audio to flex within reason.
The more extreme case is the 1-to-1 (or 1-to-small number) stream where resolution, quality, and latency do matter. These exist in post production where the remote person observes color or edit changes and gives real-time feedback.
There are industry specific solutions, such as StreamBox and SohoNet ClearView that cater to this last use case. Sometimes they are cloud only solutions, or they come with dedicated hardware to handle the processing and encryption that may be required based on the use case. But they're extremely pricey, and not always in budget as this use case is now used on much smaller everyday projects, not only by higher budget studio productions.
Thus the quest to find a solution that is practical, good enough for most use cases, and budget friendly.
The basic setup consists of a source signal, such as the loop-back output of a color reference monitor; an encoding application that can create a stream; and a player that the client(s) can view the stream on.
One issue is color space management. Especially if the receiving end is not a calibrated monitor. While the industry specific applications employ hardware decoders or custom apps that can manage color spaces, the simpler solutions will mostly rely on browser based playback. Thus the selection of browser and playback device has to be considered. It's generally accepted that a newer model iPad when configured right (disable TrueTone, set brightness to 50%) is 90% accurate in terms of color and a good approximation. The gap can be narrowed with a custom LUT that is inserted into the signal path via a LUT Box.
For the encoding application, there are many to choose from: either a hardware appliance like the new ATEM Mini Pro, or any laptop that can run OBS, VMix, or Wirecast along with an SDI input interface such as a BMD MiniRecorder, DeckLink card, or AJA I/O interface, depending on the software. The end result is that the encoding appliance creates an RTMP stream that can be sent to a cloud relay.
For the player, there is a large number of players that can be embedded into a web page to receive a stream and play it back. I prefer to take an embedded player and host it on a client page on my website to take advantage of branding and the customer experience. I can add my own access control via client-specific passwords to provide at least some amount of security (not the end-to-end encryption that would be expected on high-end projects), so a random person cannot stumble across the stream, as can happen with streaming setups that utilize YouTube (and the stream is left listed by accident). Self-hosting the player and stream also gets around the typical challenges of copyright scanners on the major video platforms.
Lastly there is what I call the Cloud Relay. Technically, if we are building a 1:1 stream, we could establish a peer-to-peer connection from the viewer to the system that does the encoding. In reality though, the encoding system will sit on a local network behind a firewall and router, and the internal IP address and port the encoder operates on will not be accessible to the outside world, for good reason. There are ways to open select ports in the firewall to facilitate communication, and that is regularly done for virtual call solutions in Wirecast (Rendezvous) and VMix (VMix Call). But that requires coordination, and in larger studios it isn't always possible without involving the IT department.
The alternative is to employ a small cloud based server that the up-link streams to, which can optionally perform some protocol transcoding and then serve as the distribution point for the down-link protocol, living on a public facing IP address that is readily available to a browser hosting the player.
There are a number of such solutions available: from Wowza as part of its Streaming Cloud, from AntMedia.IO, Web Call Server 5, etc. They support different protocol combinations and pricing models, so the choice will depend on the specific use case.
Before we get to the actual solution, we do need to understand some more details about protocols. There are multiple protocols to choose from, and they have various pros and cons. And there are different protocols for the up-link and down-link at play.
The most well known up-link protocol is RTMP. It goes way back to the days of Macromedia (pre-Adobe) and Flash. It's old and not the fastest, but it is what the vast majority of YouTube, Facebook, and other live streams rely on for the up-link. It can also be used on the down-link if you simply need to ingest the feed into something else. It's not really usable as a down-link to a browser based player though, as it requires Flash for the player, which is essentially extinct these days; even if you do find it, it definitely should not be used.
An alternative to RTMP that is newer and better suited is SRT. It's an open source video transport protocol specifically designed to run over unreliable networks across long distances. SRT is becoming more widely available, but support and transcoding options are still hit or miss. OBS and VMix can stream in SRT; Wirecast cannot yet.
The most common down-link protocol used now is HLS (HTTP Live Streaming) made by Apple. It has the advantage of scale and that it can adapt to bandwidth and network conditions. Unfortunately it also comes with significant latency. That makes it a good choice for the 1-to-many distribution use case, but not for the use case we're solving for here.
Lastly there is WebRTC, another newer open source protocol that provides web applications with real-time communication channels. WebRTC is quickly becoming the standard for low latency video applications.
Given this landscape, the ideal setup would be an SRT up-link to the Cloud Relay that then transcodes to WebRTC for the down-link. Unfortunately that is a rather elusive combo at the moment. Wowza Streaming Cloud supports SRT up-links, but only streams HLS, not WebRTC. Other solutions from AntMedia.IO (based on Red5 Pro) and Flashphoner's Web Call Server 5 can stream in WebRTC, but only take an RTMP input. That is the lesser evil, and the choice employed here.
For two months I used a hosted instance of the AntMedia.IO Enterprise server. It's a small cloud based server that can relay RTMP:RTMP or RTMP:WebRTC, including the ability to create lower-res streams to adapt to bandwidth. It was offered for $100/mo with unlimited bandwidth, and it worked quite well. But the cost is still on par with StreamBox, except that there is no bandwidth limitation. I found that I wasn't using the solution enough to warrant the cost, as I only needed it on an ad-hoc basis for specific clients. So I continued the chase for something more flexible.
More flexible means running my own instance in AWS. There are several solutions available (including AntMedia.IO) in the AWS Marketplace, where the pricing for the software is $0.10 to $0.37/hr of uptime on your instance, plus the AWS infrastructure cost. As it's straightforward to spin up an instance within a few minutes, the cost can be managed for less regular use.
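To put those rates in perspective, here is a back-of-the-envelope cost sketch. The software rate range comes from the Marketplace listings above; the instance rate (~$0.09/hr for a t2.large) is my assumption and varies by region.

```python
# Rough session cost estimate for a self-hosted relay on AWS.
# software_rate: Marketplace software charge per hour of uptime ($0.10-$0.37/hr).
# instance_rate: assumed t2.large on-demand rate (~$0.09/hr, region-dependent).

def session_cost(hours, software_rate=0.37, instance_rate=0.09):
    """Return the approximate cost of keeping the relay up for `hours` hours."""
    return round(hours * (software_rate + instance_rate), 2)

# A full day of review sessions at the high end of the software pricing:
print(session_cost(8))   # ~ $3.68
```

Even a full day at the high end of the pricing stays in single-digit dollars, which is what makes ad-hoc use viable.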
When setting up my own AntMedia.IO instance, though, I ran into some issues getting the SSL certificates set up, which are required for WebRTC.
An alternative package I came across on the AWS Marketplace works just as well, takes literally a few clicks to start up, and is ready to use in less than 5 minutes: Web Call Server 5. The WCS5 package has some more interesting functions and makes it easier to deal with the SSL setup (see update below).
The process is quite straightforward (assuming you have some familiarity with AWS and EC2):
- Go to the AWS Marketplace and subscribe to the software
- (one-time step): Set up a security group for WCS5 that opens a number of ports as described in the documentation
- Proceed to launch an EC2 instance (in my case I use a t2.large or t2.xlarge) and use the security group from the previous step
- After the instance has fully initialized, simply go to the admin panel at https://<public ip>:8888 (the IP address is in the EC2 dashboard).
- Log into the admin panel with the 'admin' user and the instance ID (from the EC2 dashboard) as the password
- Do a quick test that everything is working with the two-way-streaming demo
- Now configure the streaming application with the RTMP up-link, which should be rtmp://<public ip>:1935/live, and a 4-character stream key of your choice (the admin panel will suggest some).
- Embed the player into your website (sample code below) and set the src: attribute to the public ip address of your instance.
- Send the client the link to your embedded page (plus any passwords & access instructions). They need to hit play to start the stream. The embedded player comes with a full-screen option. Ideally the client does this on an iPad Pro, goes full-screen and has full access to the stream there in reasonably accurate color.
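As an illustration of the addresses involved in the steps above, here is a small helper that generates a 4-character stream key and assembles the up-link URL. The rtmp:// URL layout and the example IP are assumptions for illustration; use the exact values your admin panel shows.

```python
import secrets
import string

def make_stream_key(length=4):
    """Generate a random alphanumeric stream key, similar to what the admin panel suggests."""
    alphabet = string.ascii_lowercase + string.digits
    return ''.join(secrets.choice(alphabet) for _ in range(length))

def uplink_url(public_ip, stream_key):
    """Assemble the RTMP up-link address for the encoder (OBS, VMix, etc.)."""
    return f"rtmp://{public_ip}:1935/live/{stream_key}"

key = make_stream_key()
print(uplink_url("203.0.113.10", key))  # 203.0.113.10 is a documentation IP
```

Using `secrets` rather than `random` keeps the key unguessable, which matters since the key is the only thing gating the up-link.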
In my experiments I've seen latency in the 1s range. It may vary based on network conditions, though.
Once you're finished, don't forget to shut down the EC2 instance.
Here's the embed code I use for the player. There are also non-iframe options in the documentation.
<iframe src='[your embed player URL]'
    marginwidth='0' marginheight='0' frameborder='0' width='1280' height='720'
    scrolling='no' allowfullscreen='allowfullscreen'></iframe>
Update on SSL Needs [6/25]
After further testing it turns out WCS5 also requires an SSL certificate to be installed to work properly. My initial testing was on a system where I had logged into the admin panel and accepted a self-signed cert exception, thus allowing it to run without a proper certificate. In some cases that may be a viable work-around with a more tech savvy viewer.
But the proper procedure is to add the SSL certificate. It's an extra set of steps, but once established just adds a few more minutes to the setup process.
I setup a subdomain (live.janklier.com) that I purchased an SSL cert for from a provider that allows unlimited re-keying/host changes. That's important, as the EC2 hosts will lose their SSL cert when being terminated. Having the SSL attached and validated on the subdomain allows later instances to be handled with no extra cost.
The process for attaching the SSL cert for WCS5 goes as follows:
- Log into your DNS editor and modify the zone file to point the live. subdomain at the public IP address of the new instance
- Log into the EC2 instance via ssh (using Putty and the established EC2 key pair).
- run the command 'sudo openssl genrsa -out live.pk' to generate a new private key, and copy the contents of the file into a local file for later (or sftp it to your local system)
- run the command 'sudo openssl req -new -key live.pk -out csr.pem' to create a new CSR (signing request). Copy the contents of the file into the clipboard.
- Go to your SSL provider (I used ssl.com) and submit the CSR content (a one-time step; it may require domain validation).
- Download the new certificate to your local system.
- Go to the WCS5 dashboard, there is a menu for certificates. Upload the certificate file and the private key file.
And of course, in the code samples above and the stream URL, replace the public IP with the signed subdomain (one-time step).
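The key and CSR steps can also be scripted. The sketch below mirrors the manual openssl commands from the list above, run non-interactively (on the instance you may need to prefix sudo); live.example.com is a stand-in for your own subdomain.

```python
import subprocess

# Hypothetical subdomain; substitute your own.
DOMAIN = "live.example.com"

# Generate the private key (same as the manual genrsa step)
subprocess.run(["openssl", "genrsa", "-out", "live.pk", "2048"], check=True)

# Create the CSR non-interactively by passing the subject on the command line
subprocess.run(
    ["openssl", "req", "-new", "-key", "live.pk", "-out", "csr.pem",
     "-subj", f"/CN={DOMAIN}"],
    check=True,
)

# Sanity-check the CSR before submitting it to the CA
subprocess.run(["openssl", "req", "-in", "csr.pem", "-noout", "-verify"], check=True)
```

Passing `-subj` avoids the interactive prompts, which makes the step repeatable when you re-key after terminating an instance.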
In my case my domain is hosted on my own VPS, which runs its own name servers, so zone file edits propagate pretty quickly. If the subdomain IP change takes longer to propagate in your DNS setup, you may have to spin up the instance with a bit more lead time.
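Rather than guessing when propagation has finished, a small poll can confirm the subdomain resolves to the new instance before you send out the client link. This is a sketch; the host and IP in the comment are placeholders.

```python
import socket
import time

def wait_for_dns(host, expected_ip, timeout=600, interval=15):
    """Poll until `host` resolves to `expected_ip`; give up after `timeout` seconds."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            if socket.gethostbyname(host) == expected_ip:
                return True
        except socket.gaierror:
            pass  # subdomain not resolvable yet
        time.sleep(interval)
    return False

# e.g. wait_for_dns("live.janklier.com", "203.0.113.10") before sending the link
```

Note that `gethostbyname` uses the local resolver cache, so a stale answer here means the client is likely to see the same stale answer.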
One aspect I'm not clear on yet is whether an SSL certificate will survive an instance stoppage (as opposed to termination). That may be a way of reducing cost without having to redo the SSL every time. I will experiment and update the notes.
Lastly, some people attach their SSL to an AWS VIP (load balancer) rather than the EC2 instance. You can leave the load balancer up and running at a lower cost than the EC2 instance.
After a few days of playing with instances on and off, my total cost in the AWS dashboard has been $4.28, though that does include some unrelated data transfer costs I have from S3.
Update on Instance Management [6/27]
With the added overhead of the SSL certificate, I was curious whether there's a balance between cost and overhead in stopping rather than terminating the instance. I stopped the instance late at night and restarted it the following morning. Upon restart of an EC2 instance the public IP changes, but everything else remains. It turns out the SSL certificate survives and continues to work. After restarting, the subdomain A record has to be updated to reflect the new public IP, and there may be a small propagation delay.
The IP address change may be solved by employing an Elastic IP address, which remains assigned while the instance is stopped, at a very nominal charge of $0.005/hr for the convenience of avoiding the propagation delay after a restart. I will continue to play with that aspect.
That is very manageable. In the stopped state the instance does not incur any CPU or software cost; only a much smaller cost goes to maintaining the EBS volume where the files are stored.
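To put a number on that stopped-state cost, here is a rough sketch. The gp3 EBS rate (~$0.08/GB-month) and the 8 GB root volume are my assumptions and vary by region and image.

```python
# Approximate monthly cost of keeping a stopped instance around, vs. terminating it.
# While stopped, only the EBS volume bills; CPU and Marketplace software charges pause.

def stopped_monthly_cost(volume_gb=8, ebs_rate=0.08):
    """Monthly EBS carrying cost (GB times assumed $/GB-month rate)."""
    return round(volume_gb * ebs_rate, 2)

print(stopped_monthly_cost())  # ~ $0.64/month
```

Under a dollar a month to keep the configured instance (and its certificate) on standby is a fraction of a single hour of runtime cost.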
So it seems the right path is to launch a new instance ahead of a set of review sessions, temporarily stop it when not in use, and fully terminate it once the set of sessions is complete, until the next project comes around.
Update from a few weeks later [9/4]
I've used this solution repeatedly now with good success. I've ended up switching back to AntMedia.IO because its user interface is a lot more concise. Once past the details of how to set up the SSL certificate, it's straightforward.
I've found that the instance can be started and stopped at random even with long breaks. Keeping it configured but stopped only incurs minimal storage cost for the main disk storage and eliminates the need to re-key the certificate.
I also set up a separate, fun but unbranded custom domain for when I'm working on productions for other studios, so I can temporarily brand the page with the client's logo.