For the last several months, we have been working on upgrading the media player used by our streaming media platform. While our legacy media player was very advanced, it has been 18 months since we last refreshed our players. As it always does, technology marches on, and we like to stay at the leading edge of the industry.
Our entire media distribution system is state of the art in every way, with an advanced video infrastructure and feature set.
Our objectives were to make the system better, faster, and less expensive to operate.
We will be using the film festival to debut our new media player. A media player is the software code, embedded in our web pages, that controls and displays our videos. It is a critical piece of technology in our efforts to make our media universally accessible across a wide range of devices (desktops, notebooks, tablets, smartphones and smart TVs) and across a wide range of connections (from 3G phones to gigabit fiber-to-the-home connections).
We use the licensed version of the JWPlayer Ad Player for our media player. We have used some version of the JWPlayer since we launched this website, and we have been using Version 6 very successfully for the last 18 months. We have been testing the new Version 7 in our labs since its release. We waited until our Winter/Spring term was over before installing the new player so as not to disrupt students who were in courses. For the film festival, we have installed the production Version 7.4.4 and will be using it for all media shown in the film festival.
JWPlayer Version 7 is a major upgrade from Version 6 and represents a total re-write of its software code. It builds upon the rich feature set and reliability of its predecessor but provides additional speed (it loads 35% faster), even more stability, a cleaner architecture, additional device compatibility, additional format support (including MPEG-DASH and improved HTML5 experiences) and important new features (including CSS skinning). The code re-write focused on fluidity and speed and resulted in a code base that is two-thirds the size of its predecessor's.
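To make the CSS-skinning change concrete, here is a minimal sketch of what a Version 7 embed configuration can look like. The element id, media URLs, and skin name below are illustrative placeholders, not our production values, and the final `jwplayer(...)` call assumes the player library is loaded on the page.

```javascript
// Minimal JW Player 7 setup sketch. All URLs and names here are
// placeholders for illustration, not our production configuration.
var playerConfig = {
  file: "https://cdn.example.com/festival/documentary.m3u8", // HLS manifest
  image: "https://cdn.example.com/festival/poster.jpg",      // poster frame
  aspectratio: "16:9",
  width: "100%",
  // Version 7 skins are CSS-based: a custom skin is just a stylesheet.
  skin: {
    name: "festival",
    url: "https://cdn.example.com/skins/festival.css"
  }
};

// In the page itself this would be applied with:
// jwplayer("playerDiv").setup(playerConfig);
```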
We will collect data from the players' usage of Version 7 during the festival and analyze the results. We are expecting 3,000 to 5,000 visitors to view the long-form documentary (3 hours and 27 minutes runtime) during the festival. The resulting usage data will give us a substantial experience base to consider before we fully deploy. We will use the results of the festival to tweak our implementation before our full rollout of the new player. In August, before the Fall/Winter Term begins, we will upgrade all players used throughout the site to Version 7.
We set up a cloud-based storage system that is highly redundant and geographically distributed for fail-over protection. The system is highly survivable for protection against loss of a stored file. The storage system is also highly secure and can only be accessed by our administrators and the content delivery network (CDN). The storage system stores all of our master files as well as encoded distribution files.
When we finish editing a video, we make a high-bit-rate master file using an intermediate master file format. We upload the master file to our cloud-based storage system. These masters are very large files, and we use gigabit-per-second fiber connections to make the transfer. From the master repository, the master file is available to our cloud-based transcoder, which is composed of thousands of individual encoder instances working in concert. The transcoder farm uses our master files as source material and encodes our adaptive-streaming distribution and manifest files. The transcoder farm is controlled by a set of highly technical encoding profiles that have been custom developed to meet our needs. Jobs are loaded into a production pipeline and queued for processing. As each job finishes, the encoded files and manifest files are written back to their assigned storage location in our storage cloud.
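The profile-driven pipeline above can be sketched as follows. The rendition ladder, bitrates, and field names are assumptions for illustration only; our actual encoding profiles are more detailed.

```javascript
// Illustrative encoding profile: one entry per adaptive-streaming rendition.
// These numbers are placeholders, not our production profiles.
var encodingProfile = [
  { name: "1080p", width: 1920, height: 1080, videoKbps: 5000 },
  { name: "720p",  width: 1280, height: 720,  videoKbps: 2500 },
  { name: "480p",  width: 854,  height: 480,  videoKbps: 1200 },
  { name: "240p",  width: 426,  height: 240,  videoKbps: 400  }
];

// Expand one master file into one queued encode job per rendition,
// mirroring how jobs enter the production pipeline.
function buildJobs(masterPath, profile) {
  return profile.map(function (r) {
    return {
      source: masterPath,
      output: masterPath.replace(/\.[^.]+$/, "") + "_" + r.name + ".mp4",
      settings: r,
      status: "queued"
    };
  });
}

var jobs = buildJobs("masters/documentary.mov", encodingProfile);
```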
When an individual user requests to view a video, the content delivery network determines which of the CDN's node locations would best serve the requesting user. Currently, the CDN can choose any of our 53 node locations around the world. Once the serving node is selected, the CDN downloads the video to that node (at multi-gigabit-per-second core network speeds). The video files are cached in the node from our storage cloud and played out to the end user who requested the video. The video files live in the cache for 24 hours to serve any replay by the original requesting user and any other nearby users who come during the 24-hour period. This feature allows us to serve a large portion of the population of the planet as if they were all local.
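The node-selection and 24-hour caching behavior just described can be sketched like this. A real CDN weighs latency, load, and routing data; here the "distance" is a simplified numeric stand-in, and the node objects are hypothetical.

```javascript
// Sketch of the CDN behavior: pick the node nearest the viewer, then
// serve from a 24-hour cache. Distances and node shapes are simplified
// assumptions for illustration.
var CACHE_TTL_MS = 24 * 60 * 60 * 1000; // cached copies live for 24 hours

function nearestNode(nodes, userLocation) {
  // Real CDNs use latency, load, and routing data; we just minimize
  // a supplied numeric distance.
  return nodes.reduce(function (best, node) {
    return node.distanceTo(userLocation) < best.distanceTo(userLocation)
      ? node
      : best;
  });
}

function serveVideo(node, videoId, nowMs) {
  var entry = node.cache[videoId];
  if (entry && nowMs - entry.fetchedAt < CACHE_TTL_MS) {
    return "cache-hit";                       // replay or nearby user within 24h
  }
  node.cache[videoId] = { fetchedAt: nowMs }; // pull from the storage cloud
  return "cache-miss";
}
```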
The locations of our CDN nodes are:
The CDN not only allows the best possible performance, but it also allows extreme availability. In the unlikely event that any one or more of the 53 data centers were to fail completely, the remaining nodes simply absorb the load.
The resulting design also provides extreme scalability. Since the 53 data centers are distributed close to population centers and serve as forward-deployed caches, we can serve large numbers of users from each location. If one data center becomes too heavily loaded to provide optimal performance, the excess load is load-balanced to another data center. The total capacity of the system is extreme.
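The overflow behavior described above can be sketched as a simple spill-over rule. The utilization threshold and load figures are illustrative assumptions, not measurements from our CDN.

```javascript
// Sketch of load-balancing overflow: if the preferred data center is too
// heavily loaded, spill the request to the least-loaded remaining node.
// The threshold is an assumed value for illustration.
var LOAD_LIMIT = 0.85; // assumed utilization threshold

function assignNode(preferred, others) {
  if (preferred.load < LOAD_LIMIT) return preferred;
  // Excess load is balanced to the least-loaded remaining data center.
  return others.reduce(function (least, node) {
    return node.load < least.load ? node : least;
  });
}
```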
Without exaggeration, the CDN can serve millions of simultaneous users with a high degree of reliability and performance.
In addition to its core features, the CDN also allows us to stream both RTMP and HLS transport streams. Further, the CDN supports AES encryption technology and key management for all of our video streams.
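In an HLS stream, AES-128 encryption appears in the playlist as an EXT-X-KEY tag that tells the player where to fetch the decryption key. A fragment of an encrypted playlist might look like the following (the key URI and segment names are illustrative, not our actual endpoints):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-KEY:METHOD=AES-128,URI="https://keys.example.com/key/12345"
#EXTINF:10.0,
segment0.ts
#EXTINF:10.0,
segment1.ts
```

Key management then amounts to controlling who can fetch that key URI, which the CDN handles for all of our video streams.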