Imagine you run an online comedy platform, a destination for in-the-moment, live humor video streams. Your users are individuals who find humor in everyday situations out on the street, in the office, at a restaurant…you name it, they find a way to capture it.
When your users spot a laugh-worthy situation, they expect to start streaming live using their mobile devices within seconds of taking their phones out of their pockets, or risk losing the opportunity to stream the moment.
Now imagine your app takes 45 seconds on average to let a user start live streaming, because your transcoding provider needs that time to “boot up.” The startup delay causes your users to miss streaming humorous events, which in turn costs you users daily.
When it comes to time-sensitive live events, this transcoding session startup delay is a dealbreaker.
Many live streaming platforms need to get content out immediately to reduce video latency, but starting a new livestream can be a time-consuming process.
Though latency can accrue at every step of the live streaming pipeline, including video encoding, ingest, packaging, broadband transport, the content delivery network (CDN), and the video player, starting a video’s encoding session can be one of the biggest contributors to end-to-end latency.
Typically, an encoding or transcoding session begins with an API call to a transcoding provider of choice. Even if this API call is made programmatically, the instant the live video starts to transmit, it can take up to 40 seconds for the provider to assign the video to a working transcoder. The result: a video that starts 40 seconds late, or more once latency from other parts of the live streaming workflow is added.
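To see why encoder session startup dominates time-to-first-frame, a rough latency budget can be sketched. The stage names and every number below are illustrative assumptions, not measured values from any provider:

```python
# Rough startup latency budget for a live stream.
# All figures are illustrative assumptions, not measurements.
STAGE_LATENCY_SECONDS = {
    "encoder_session_startup": 40.0,  # provider "boot up" before transcoding begins
    "ingest": 1.0,
    "packaging": 2.0,
    "transport": 0.5,
    "cdn": 1.0,
    "player_buffer": 3.0,
}

def total_startup_latency(stages):
    """Sum per-stage latencies to estimate time until the stream is watchable."""
    return sum(stages.values())

def dominant_stage(stages):
    """Return the stage contributing the most latency."""
    return max(stages, key=stages.get)

if __name__ == "__main__":
    print(f"total: {total_startup_latency(STAGE_LATENCY_SECONDS):.1f}s")
    print(f"dominant: {dominant_stage(STAGE_LATENCY_SECONDS)}")
```

Even with generous guesses for every other stage, the encoder startup figure swamps the rest of the budget.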
The reason for this delay is largely transcoding server “boot up” time. Typically, transcoding providers run their encoding software on Google Cloud or AWS and pay for virtual server space only when necessary. This translates to an infrastructure that “boots up” only when a transcoding request comes in: internal servers are set up from a “server image,” a copy of the server with all the settings, data files, and application configurations needed to transcode your video.
An imperfect solution to this problem is to pay for and reserve as many servers as you think you might need, 24/7. This is not recommended: it is far more expensive and wastes computational resources.
Thanks to Livepeer’s global GPU infrastructure, reserving server space 24/7 for video transcoding is not a solution anyone needs to resort to anymore.
Livepeer leverages a readily available global network of always-on, incentivized transcoding providers, eliminating server boot-up latency from the video transcoding pipeline. Transcoding begins the moment a request hits our API, as there are always transcoders waiting and ready to take on transcoding “jobs.”
How are we able to provide instant, pay-as-you-go transcoding?
Livepeer has discovered a new, underutilized source of transcoding capacity: cryptocurrency mining equipment. There are thousands of “mining farms” distributed all around the world, each equipped with thousands, or hundreds of thousands, of GPUs typically used for cryptocurrency mining. Each of these GPUs contains a dedicated video encoder that sits idle during mining.
Livepeer makes it possible for these data center operators, more colloquially called miners, to put their idle video encoders to use and turn on a lucrative new revenue stream with no additional cost or overhead. This zero-cost revenue stream incentivizes miners to join the Livepeer transcoder network and provide excellent transcoding services to those who request them.
Thanks to this distributed network of always-on transcoders, Livepeer can transcode video segments much faster than real time, in just a few hundred milliseconds. Intelligent segment distribution and redundancy across the Livepeer network, in turn, ensure elegant fallback options in case of poor transcoder performance. In short, transcoding never stops.
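The fallback behavior described above can be sketched in miniature. The pool structure, transcoder names, and failure handling here are hypothetical, not Livepeer’s actual scheduling code:

```python
# Minimal sketch of segment dispatch with fallback across a transcoder pool.
# Transcoder names and the health flag are hypothetical illustrations.

class TranscoderUnavailable(Exception):
    """Raised when a transcoder cannot take the segment."""

def transcode_segment(segment, transcoder):
    """Stub: pretend to transcode unless the transcoder is marked unhealthy."""
    if transcoder["healthy"]:
        return f"{segment}:transcoded-by-{transcoder['name']}"
    raise TranscoderUnavailable(transcoder["name"])

def dispatch_with_fallback(segment, pool):
    """Try each transcoder in turn; on failure, route the segment elsewhere."""
    for transcoder in pool:
        try:
            return transcode_segment(segment, transcoder)
        except TranscoderUnavailable:
            continue  # fall back to the next available transcoder
    raise RuntimeError("no transcoder available for segment")

if __name__ == "__main__":
    pool = [
        {"name": "gpu-farm-a", "healthy": False},  # simulated poor performance
        {"name": "gpu-farm-b", "healthy": True},
    ]
    print(dispatch_with_fallback("seg-001", pool))
```

Because segments are small and independent, a failed assignment costs only one retry, not a restarted stream.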
Because Livepeer does not compete for the same cloud computing resources as Amazon and other transcoding providers, it can offer uniquely cost-effective and flexible pricing based on transcoding usage, not on reserved server space. Cloud provider costs just can’t compete.
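The difference between the two pricing models comes down to simple arithmetic. The hourly rates and server counts below are invented purely for illustration; they are not real prices from any provider:

```python
# Back-of-the-envelope comparison: reserved servers vs. usage-based pricing.
# All rates and counts are invented for illustration only.

HOURS_PER_MONTH = 730  # average hours in a month

def reserved_cost(num_servers, hourly_rate):
    """Pay for servers around the clock, whether or not they transcode."""
    return num_servers * hourly_rate * HOURS_PER_MONTH

def usage_cost(transcoding_hours, hourly_rate):
    """Pay only for the hours actually spent transcoding."""
    return transcoding_hours * hourly_rate

if __name__ == "__main__":
    # e.g. 10 servers reserved 24/7 at a hypothetical $0.50/hr,
    # versus 500 hours of actual transcoding at the same rate.
    print(reserved_cost(10, 0.50))
    print(usage_cost(500, 0.50))
```

The gap widens as traffic gets burstier: reserved cost is fixed by peak capacity, while usage cost tracks actual demand.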
For more information on always-on transcoder access, our unique transcoding services, or any other questions you might have, send us a note at [email protected].
Growing platforms for performers, influencers, gamers, and others building a brand on social media tend to support many live streams at a time, which often translates to unsustainable transcoding costs. They end up reliant on existing, cost-prohibitive streaming platforms to meet rising live streaming demand. Given that 80% of consumers prefer watching a video to reading a blog, and that 63% of 18-34 year olds watch live streaming content regularly, it is safe to assume that an investment in lower-cost live streaming infrastructure can pay off exponentially in the future.
The Livepeer streaming API is powered by a network with a uniquely distributed architecture. At its core is a live ingest and transcoding engine designed to maintain a robust live streaming workflow in the face of network issues or hardware failures. This helps the Livepeer infrastructure scale in unique ways.