For the last several months, we have been working on a major upgrade of our streaming media platform. While our legacy platform was very advanced, it has been two years since we last refreshed our system. As it always does, technology marches on, and we like to stay at the leading edge of the industry.
This upgrade is a top-to-bottom rework of our video infrastructure and feature set:
Our objectives were to build a system that is better, faster, and less expensive to operate:
We set up a cloud-based storage system that is highly redundant and geographically distributed for fail-over protection. The system is highly survivable, protecting against the loss of any stored file. It is also highly secure: only our administrators and the content delivery network (CDN) can access it. The storage system holds all of our master files as well as the encoded distribution files.
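The redundancy model above can be sketched in a few lines: every file is written to multiple regions, and a read succeeds as long as any one replica survives. The region names and the in-memory dictionaries below are illustrative stand-ins, not our actual storage backend.

```python
REGIONS = ["us-east", "eu-west", "ap-south"]  # hypothetical region names

class GeoRedundantStore:
    def __init__(self, regions):
        # one key -> bytes map per region stands in for a real object store
        self.replicas = {r: {} for r in regions}

    def put(self, key, data):
        # fan the write out to every region for fail-over protection
        for store in self.replicas.values():
            store[key] = data

    def get(self, key):
        # any surviving replica can serve the read
        for store in self.replicas.values():
            if key in store:
                return store[key]
        raise KeyError(key)

    def fail_region(self, region):
        # simulate total loss of one region's data
        self.replicas[region].clear()

store = GeoRedundantStore(REGIONS)
store.put("master/festival-opener.mov", b"...")
store.fail_region("us-east")
# the file is still readable from the surviving replicas
assert store.get("master/festival-opener.mov") == b"..."
```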
When we finish editing a video, we make a high-bit-rate master file in an intermediate master format and upload it to our cloud-based storage system. These masters are very large files, so we use gigabit-per-second fiber connections for the transfer. From the master repository, the master file is available to our cloud-based transcoder, which is composed of thousands of individual encoder instances working in concert. The transcoder farm uses our master files as source material and encodes our adaptive-streaming distribution and manifest files. The farm is controlled by a set of highly technical encoding profiles that we custom-developed to meet our needs. Jobs are loaded into a production pipeline and queued for processing. As each job finishes, its encoded files and manifest files are written back to their assigned location in our storage cloud.
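The pipeline described above can be sketched as follows: each job pairs a master file with a set of encoding profiles, jobs are queued and processed in order, and every finished job writes its renditions and a manifest back to storage. The profile names and bitrates are illustrative examples, not our production profiles.

```python
from collections import deque

# Hypothetical encoding profiles; real profiles carry far more parameters.
PROFILES = {
    "1080p": {"height": 1080, "bitrate_kbps": 6000},
    "720p":  {"height": 720,  "bitrate_kbps": 3000},
    "480p":  {"height": 480,  "bitrate_kbps": 1200},
}

def run_pipeline(masters, output_store):
    queue = deque(masters)  # the production pipeline of queued jobs
    while queue:
        master = queue.popleft()
        renditions = []
        for name, profile in PROFILES.items():
            # a real encoder instance would transcode the master here
            rendition = f"{master}/{name}.m4s"
            renditions.append(rendition)
            output_store[rendition] = profile
        # the manifest lists every rendition for adaptive streaming
        output_store[f"{master}/manifest.m3u8"] = renditions

store = {}
run_pipeline(["video-001", "video-002"], store)
```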
When a user requests a video, the content delivery network determines which of the CDN's node locations would best serve that user. Currently, the CDN can choose from any of our 53 node locations around the world. Once the serving node is selected, the CDN downloads the video from our storage cloud to that node (at multi-gigabit-per-second core network speeds), caches it there, and plays it out to the requesting user. The video files live in the cache for 24 hours, serving any replay by the original requester and any other nearby users who arrive during that 24-hour period. This feature allows us to serve a large portion of the planet's population as if they were all local.
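The edge-caching behavior above amounts to a 24-hour time-to-live cache in front of the storage cloud. This is a minimal sketch of that idea; the node name, origin dictionary, and timestamps are all illustrative.

```python
import time

CACHE_TTL = 24 * 3600  # files live in the node cache for 24 hours

class EdgeNode:
    def __init__(self, name, origin):
        self.name = name
        self.origin = origin  # stands in for the storage cloud
        self.cache = {}       # key -> (expiry_timestamp, data)

    def serve(self, key, now=None):
        now = time.time() if now is None else now
        entry = self.cache.get(key)
        if entry and entry[0] > now:
            return entry[1], "cache-hit"   # replay within the 24-hour window
        data = self.origin[key]            # pull from the storage cloud
        self.cache[key] = (now + CACHE_TTL, data)
        return data, "cache-miss"

origin = {"video-001": b"segments..."}
node = EdgeNode("fra-1", origin)
assert node.serve("video-001", now=0.0)[1] == "cache-miss"
assert node.serve("video-001", now=3600.0)[1] == "cache-hit"    # same day
assert node.serve("video-001", now=90000.0)[1] == "cache-miss"  # expired
```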
The locations of our CDN nodes are:
The CDN not only delivers the best possible performance, it also provides extreme availability. In the unlikely event that one or more of the 53 data centers were to fail completely, the remaining nodes simply absorb the load.
The resulting design also provides extreme scalability. Since the 53 data centers are distributed close to population centers and serve as forward-deployed caches, we can serve large numbers of users from each location. If one data center becomes too heavily loaded to provide optimal performance, the excess load is shifted to another data center. The total capacity of the system is enormous.
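The overflow behavior just described can be sketched as a simple preference-ordered selection: requests go to the nearest node until it reaches capacity, and the excess spills to the next-best node. The node names and capacity figures are hypothetical.

```python
def pick_node(nodes_by_preference, load, capacity):
    """Return the first node in preference order with spare capacity."""
    for node in nodes_by_preference:
        if load[node] < capacity[node]:
            load[node] += 1
            return node
    raise RuntimeError("all nodes saturated")

# illustrative capacities: the nearest node handles 2 concurrent streams
capacity = {"nyc": 2, "chi": 3}
load = {"nyc": 0, "chi": 0}
served_by = [pick_node(["nyc", "chi"], load, capacity) for _ in range(4)]
# the first two requests stay local, the excess spills to the next node
assert served_by == ["nyc", "nyc", "chi", "chi"]
```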
Without exaggeration, the CDN can serve millions of simultaneous users with a high degree of reliability and performance.
In addition to its core features, the CDN lets us deliver streams over both RTMP and HLS. The CDN also supports AES encryption and key management for all of our video streams.
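For HLS, AES-128 encryption is signaled in the media playlist itself via the `#EXT-X-KEY` tag defined by the HLS specification. The sketch below generates such a playlist; the key URI and segment names are hypothetical placeholders, not our real key-management endpoints.

```python
def build_playlist(segments, key_uri, segment_seconds=6):
    """Build a minimal HLS media playlist declaring AES-128 encryption."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{segment_seconds}",
        # players fetch the decryption key from this URI before playback
        f'#EXT-X-KEY:METHOD=AES-128,URI="{key_uri}"',
    ]
    for seg in segments:
        lines.append(f"#EXTINF:{segment_seconds:.1f},")
        lines.append(seg)
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines)

playlist = build_playlist(["seg0.ts", "seg1.ts"],
                          "https://keys.example.com/video-001.key")
```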
Each video or playlist is assigned to an advanced media player that is embedded into every web page containing a video. The media player code and all of its supporting files are served from the CDN just like the media itself, so the page loads with the best possible speed and scalability.
The embedded player contains the user interface and the programming to exploit all of the features and functions of our infrastructure. Each user interacts with the player through simple, intuitive controls, and the player in turn interacts with the underlying infrastructure to deliver and control the video and its features and functions.
We have used earlier-generation players from this same vendor, but in this upgrade we migrated to the latest-generation player and enabled the full set of features it provides.
In this stage of the upgrade, we will implement most of the new features, saving only a few for later in the year.
We performed this upgrade in three planned phases, beginning in the last week of November 2014 and completing the first two phases by the middle of January 2015. The first two phases dealt almost exclusively with infrastructure, and both are in full production now: all video traffic for this website is served from the new infrastructure. Phase 3 began during the first week of February, was completed within a couple of weeks, and is in final cross-device testing now. Phase 3 dealt with our new embedded media player and includes all of the new features and functions that will be available to our end users. Because it provides the user interface and the visible face of the platform, Phase 3 is far more visible to end users and ties everything together into a fully integrated system.
With the system completed and tested, it is time to unleash it for the world to see and use. Our film festival will be the public debut of the full platform and its first large-scale usage load.