On Monday morning, Lisa watched TikTok on the subway to pass time on the way to work. During her lunch break, she watched a few funny videos on YouTube. Later that night, she turned on the TV at home and searched for the latest releases from Netflix and Hulu.
It’s no secret that most of us spend a great deal of time enjoying audio and video applications daily. However, there is something about the consumption of this type of media that you might not know.
Why Is PPIO Choosing To Shake Up The Audio and Video Industry?
The 2018 “Global Internet Phenomena Report” by Sandvine revealed that traffic generated by video applications accounted for 58% of total Internet traffic. The report also noted that the role of video applications in driving global Internet traffic has grown at an unprecedented rate.
Our platform, PPIO, is a decentralized storage and delivery platform aimed at developers, giving them a cheaper, faster, and more private way to handle their data. I highly suggest checking our official website for a deeper look at what we do.
When designing PPIO, we regard the audio and video component as our top priority. This focus means that we not only want to support mainstream audio and video transmission protocols but also want to ensure the highest quality of service (QoS). To better understand our audio and video data delivery mechanisms, let’s briefly recap the commercial architecture of PPIO.
PPIO will provide 3 sets of APIs:
- IaaS layer-based storage space and bandwidth lease API
- PaaS-based POSS, PCDN, and PRoute API
- Application Services layer-based application APIs that support video-on-demand fog, live streaming fog, image fog, and more.
Developers can choose to develop at any level to make their own App or DApp.
Here are the key differences when comparing PPIO with AWS cloud computing services.
In the PPIO architecture, the API for audio and video streaming sits in the Application Services layer, which is specifically designed for application development (whereas PCDN is based on the PaaS layer). In the next section, I will discuss video streaming and the technology PPIO uses for it.
CDN and PCDN
CDN (Content Delivery Network) is a type of data delivery network and one of the core infrastructures of the modern Internet. A CDN node is the node closest to the user in the entire CDN architecture. CDNs were originally designed around storing data at the center, but centrally stored data is not necessarily close to each user. As a result, the CDN architecture deploys CDN nodes at the network edge in metropolitan areas. These nodes cache data; when users request data, the nodes can serve it quickly and directly to ensure an excellent user experience. The figure below shows the architecture of the CDN node.
CDN technology has been developed for many years, and many companies are engaged in the CDN business, so many large-scale commercial applications built on CDN have emerged. At the end of 2018, the global CDN market reached a total value of $20 billion, a huge market scale. For the PCDN design, PPIO does not propose a new data delivery scheme; instead, our architecture adds a supplementary P2P transmission layer on top of the existing CDN delivery scheme. This ensures that the data delivery service remains compatible with the existing scheme while becoming cheaper and faster. The figure below shows how PCDN is designed to be compatible with the existing CDN solution.
Fragmentation and Media
Fragmentation is the basis of P2P transmission technology. For P2P systems, fragmentation rules are critical. So, what is fragmentation? In short, files are broken up. More specifically, fragmentation is the division and numbering of a file, or a stream, according to a uniform rule. Each unit sliced out is called a Piece, which is the unit of P2P transmission. If two Pieces have the same number, they are considered to be the same Piece. In the traditional P2P protocol, fragmentation is done by centralized means; for example, with BitTorrent, the rules for fragmenting a file are decided by the first user. All subsequent users that join the P2P network must obey the fragmentation rule established by that first user.
However, when it comes to PCDN, fragmentation is different. PCDN uses P2SP technology, where S refers to Server. The original source of P2SP data is not a node but a server with standardized output, for example over HTTP, HTTP/2, QUIC+HTTP/3, or other protocols. Such servers are standardized under the CDN system, and they do not have the ability to slice. Therefore, when PPIO designed PCDN, the peer (the common P2P network node) had to complete the slicing by distributed means, which requires all nodes to fragment the same resource according to the same rule. This ensures that the fragments of Peer1 (Node 1) and Peer2 (Node 2) are consistent.
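As a rough illustration, a fixed, deterministic slicing rule lets independent peers compute identical piece boundaries without coordinating with each other. The piece size below is a hypothetical parameter for the sketch, not PPIO’s actual value:

```python
# Hypothetical piece size for illustration only (not PPIO's real parameter).
PIECE_SIZE = 1 << 20  # 1 MiB

def piece_ranges(resource_size: int, piece_size: int = PIECE_SIZE):
    """Return (index, start, end) for every piece of a resource.

    Because the rule depends only on the resource size and a fixed
    constant, every node computes exactly the same numbered pieces.
    """
    pieces = []
    start = 0
    index = 0
    while start < resource_size:
        end = min(start + piece_size, resource_size)  # last piece may be smaller
        pieces.append((index, start, end))
        index += 1
        start = end
    return pieces

# Two independent peers slicing the same 2.5 MiB resource agree exactly:
peer1 = piece_ranges(2621440)
peer2 = piece_ranges(2621440)
assert peer1 == peer2  # identical pieces, no central coordinator needed
```

Since the boundaries fall out of the rule alone, no “first user” has to dictate them, which is what makes the distributed slicing in PCDN possible.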
PPIO’s fragmentation method depends on the structure of the file and on the streaming protocol used. Let me introduce PPIO’s compatibility and specific solutions for two mainstream streaming media transmission methods. The first fragments the file into several parts and then transmits them; this includes HLS (HTTP Live Streaming) and DASH. The other continuously streams a media file over HTTP, e.g. HTTP+FLV. In addition, PPIO will support two video data distribution scenarios: live streaming and VOD (video-on-demand).
The following is a brief explanation of some of the concepts behind fragmentation.
Segment: A large-scale unit of files, on-demand streams, and live streams. Its length is not fixed. A file can be a single segment, while a live stream consists of multiple segments. An on-demand stream can be one segment or consist of multiple segments, depending on the situation.
Piece: The smallest unit of P2SP scheduling, represented by 1 bit in the P2P bitmap.
Sub Piece: The final P2P protocol transmission unit, which is smaller than UDP’s MTU (usually 1350 bytes). The PPIO bottom layer uses the UDP protocol extensively; if a UDP packet is larger than the MTU, the packet loss rate increases greatly.
Transport Stream / TS: The original slice of segmented streaming media. Since HLS is used as the example here, we label it TS; if DASH were the example, TS would be replaced by FMP4.
Transport Stream Piece / TSP: A piece divided from a TS.
Video Segment / VS: For HTTP video-on-demand, sliced by a fixed size; for live broadcast, sliced along I-frame boundaries with a minimum duration.
Video Segment Piece / VSP: A piece divided from a Video Segment.
The following section will discuss how we slice ordinary files. It should be noted that ordinary files here are not streamable video files and do not have the characteristics found in media that are streamed.
1. Fragmentation of Ordinary Files
FS: File Segment; the file is divided into FSs by fixed size.
FSP: Piece divided from an FS.
- The FSs of a file are equal in length, except the last FS, which may be smaller.
- The FSPs divided from an FS are equal in length; the last FSP may be smaller.
- One FS corresponds to a bitmap, and one bit in the bitmap corresponds to an FSP.
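The rules above can be sketched as follows. The FS and FSP sizes here are hypothetical values chosen for illustration, not PPIO’s actual parameters:

```python
# Hypothetical sizes for illustration only (not PPIO's real parameters).
FS_SIZE = 4 * 1024 * 1024   # 4 MiB per File Segment
FSP_SIZE = 256 * 1024       # 256 KiB per FS Piece

def layout(file_size: int):
    """Return a list of (fs_index, fs_length, fsp_count, bitmap) tuples.

    Every FS is FS_SIZE bytes except possibly the last; every FSP is
    FSP_SIZE bytes except possibly the last; each FS carries a bitmap
    with one entry per FSP marking which pieces have been downloaded.
    """
    segments = []
    fs_index = 0
    remaining = file_size
    while remaining > 0:
        fs_len = min(FS_SIZE, remaining)      # last FS may be smaller
        fsp_count = -(-fs_len // FSP_SIZE)    # ceiling division; last FSP may be smaller
        bitmap = [0] * fsp_count              # one bit per FSP, all missing at first
        segments.append((fs_index, fs_len, fsp_count, bitmap))
        fs_index += 1
        remaining -= fs_len
    return segments
```

A 10 MiB file, for instance, yields two full 4 MiB FSs plus a smaller 2 MiB final FS, each with its own bitmap.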
If it is a normal video file, seeking by dragging the scrub bar (also known as the progress bar) and playing while downloading are both supported. When the user drags the progress bar, the player specifies the range, calculates the FS index based on the range and the fixed size, then requests the needed FSPs in the relevant FS. After the relevant FSPs are downloaded, the stream is merged and passed to the media player.
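The seek calculation described above reduces to simple integer arithmetic. The segment and piece sizes here are again assumed values for illustration:

```python
# Hypothetical sizes; these must match the fixed slicing rule in use.
FS_SIZE = 4 * 1024 * 1024   # 4 MiB per File Segment
FSP_SIZE = 256 * 1024       # 256 KiB per FS Piece

def locate(byte_offset: int):
    """Map a seek position (byte offset) to the FS index and the
    FSP index inside that FS."""
    fs_index = byte_offset // FS_SIZE
    fsp_index = (byte_offset % FS_SIZE) // FSP_SIZE
    return fs_index, fsp_index
```

Because the sizes are fixed, the player can jump straight to the right FS and FSP without scanning any index structure.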
2. Video-On-Demand (VOD)
This section will focus on the fragmentation of the PPIO architecture for two streaming media modes.
2a. HTTP Segmented Streaming Video-On-Demand
HLS is used as an example here, though DASH and other methods are also similar.
The player gets the m3u8 playlist and the TS file list from the video server. In general, the TSs divided from the same HLS on-demand stream are not necessarily equal in length. The TSPs of each TS are equal in length, but the last TSP may be smaller. Each TS corresponds to one bitmap, and one bit in the bitmap corresponds to one TSP.
Seeking is also supported: when the user drags the progress bar, the player determines the TS index and downloads the relevant content according to it. After the relevant TS is downloaded, it is passed to the player to complete playback.
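As an illustration of how a TS’s bitmap drives the download, the sketch below (with a hypothetical TSP size, not PPIO’s actual parameter) lists the byte ranges of a TS that still need to be fetched:

```python
# Hypothetical TSP size for illustration only.
TSP_SIZE = 64 * 1024  # 64 KiB

def missing_tsps(ts_length: int, bitmap: list):
    """Given a TS file's length and its bitmap (one entry per TSP,
    1 = already downloaded), return the byte ranges still to fetch."""
    ranges = []
    for i, have in enumerate(bitmap):
        if not have:
            start = i * TSP_SIZE
            end = min(start + TSP_SIZE, ts_length)  # last TSP may be smaller
            ranges.append((start, end))
    return ranges
```

The scheduler can request these ranges from peers, from a SuperPeer, or fall back to the CDN server, since the boundaries are the same everywhere.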
2b. HTTP Continuous Streaming Video-On-Demand
To illustrate, HTTP+FLV is used as an example.
The tag in the figure refers to the original data characteristics found in streaming media.
The VSs from one FLV on-demand stream have the same length, except the last VS, which may be smaller. Likewise, the VSPs divided from each VS are equal in length, and the last VSP may be smaller than the rest. Each VS corresponds to a bitmap, and one bit in the bitmap corresponds to a VSP. The slicing method for FLV on-demand is the same as for file downloading.
Of course, this also supports playing a video while downloading and dragging the progress bar: when the user drags the bar, the player specifies the range, calculates the VS index according to the range and the fixed segment size, and requests the relevant VSs. After the download is completed, the stream is merged and passed to the player.
3. Live Stream
From a slicing perspective, live broadcasts are more complicated than video-on-demand: a live broadcast has no fixed starting or ending point, and each user starts watching from the middle of the stream while downloading. All users’ data must be fragmented according to the same rules, so the system must not only slice but also synchronize. In addition, live broadcasts generally also offer a playback function.
Next, I will focus on the PPIO architecture regarding the fragmentation of two streaming media transmission modes.
3a. HTTP Segmented Streaming Live
HLS is used as an example here, though DASH and other methods are also similar.
The slicing method for HLS live broadcast is the same as for HLS on-demand. Assume the m3u8 file for a live broadcast lists TS1, TS2, TS3, TS4, and TS5. Depending on the configured standard latency, playback starts from a certain TS. When playing from TS1, the live delay is the longest, so the player is more likely to get data from the P2P network and the P2P bandwidth ratio will be higher. If playing from TS5, the live delay is the shortest, so the chance of getting data from the P2P network is lower and the P2P bandwidth ratio will be lower. Playing from TS3 is a compromise between the two.
3b. HTTP Continuous Live Streaming
HTTP Continuous Live Streaming means there is no end to streaming. Like before, HTTP+FLV is used as an example here.
The VSs of an FLV live stream are not necessarily equal in length. Each VS starts at a keyframe boundary, and slicing is done over time with a minimum time unit. The slicing algorithm ensures that every frame within each VS is complete and that each VS contains a keyframe.
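A minimal sketch of this keyframe-boundary slicing rule (an illustrative algorithm, not PPIO’s exact implementation):

```python
def cut_segments(frames, min_duration):
    """Slice a live frame stream into Video Segments (VS).

    frames: iterable of (timestamp, is_keyframe, payload) tuples.
    A new VS starts at a keyframe once the current VS has lasted at
    least min_duration, so every VS begins with a keyframe and no
    frame is ever split across segments.
    """
    segments, current, seg_start = [], [], None
    for ts, is_key, payload in frames:
        if seg_start is None:
            seg_start = ts
        if is_key and current and ts - seg_start >= min_duration:
            segments.append(current)      # close the finished VS
            current, seg_start = [], ts   # open a new VS at this keyframe
        current.append((ts, is_key, payload))
    if current:
        segments.append(current)          # flush the trailing VS
    return segments
```

Because every node applies the same rule to the same frame stream, the resulting VS boundaries are identical everywhere, which keeps the fragmentation synchronized across users.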
Assume that the current live broadcast plays VS1, VS2, VS3, VS4, VS5. Depending on the configured standard live delay, if playing from VS1, the live delay is the longest and there are more opportunities to get data from the P2P network, so the P2P bandwidth ratio is the highest. Conversely, if playing from VS5, the live delay is the shortest, the chance of getting data from P2P is lower, and the P2P bandwidth ratio will be the lowest. Playing from VS3 offers a compromise between the two.
It is worth noting that apart from the two streaming media modes mentioned above, PPIO plans to support other media formats and protocols step by step.
Fragmentation only establishes the order of a P2SP download. The architecture behind efficient transmission is also just as important. In our next section, we will discuss how to use the P2SP network for efficient data transmission.
This is PPIO’s PCDN full-node architecture diagram. Here are the roles within the system.
1. CDN Node
A CDN node is the closest node to the user in the entire CDN architecture; users can get data directly from it. CDN nodes have been developed for many years and now support multiple transmission protocols, including HTTP, HTTP/2, QUIC+HTTP/3, etc.
2. Mapping Node
There is a unique resource ID in the CDN, and there is also a unique ID in P2P, and these resource IDs are not the same. We need to map between the different resource IDs when conducting P2SP cooperative transmission. This is the Mapping node’s function: it maps these two different IDs and provides query capabilities.
The Mapping node is a commercial node. Developers can build their own Mapping node for their application’s scenario, because only the developer knows whether the unique resource ID in the CDN and the unique P2P resource ID in the PPIO system are consistent.
If developers do not build a Mapping node themselves, they can also use the public Mapping node, which establishes the correspondence between the URL in the CDN and the RID in the PPIO system.
Is the Mapping node necessary?
No, because the Mapping node only maintains a correspondence. If developers can establish that correspondence directly offline with a simple algorithm, there is no need for a Mapping node.
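For example, one such simple algorithm (a hypothetical choice for illustration, not necessarily PPIO’s actual scheme) is to derive the RID deterministically from the CDN URL with a cryptographic hash:

```python
import hashlib

def rid_for_url(url: str) -> str:
    """Derive a P2P resource ID (RID) from a CDN URL.

    Hypothetical scheme: SHA-256 of the URL. Because the function is
    deterministic, every node computes the same RID for the same URL
    offline, with no Mapping node lookup required.
    """
    return hashlib.sha256(url.encode("utf-8")).hexdigest()
```

The trade-off is flexibility: a hash-derived RID cannot express that two different URLs point at the same underlying resource, which is exactly the case where a real Mapping node earns its keep.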
3. Peer Node
A peer is a node in a P2P network. In the PPIO network, a peer may be a storage node (a miner), a user, or both (that is, a node that both uploads and downloads data). When discussing the supply and demand of the PPIO network, or blockchain design, I typically describe storage nodes and users as performing different roles. For the P2P network, however, most of the functions and code are identical, so I refer to them simply as peer nodes. They are also identical in the transmission protocol.
4. Tracker Node
The positioning of the Tracker node in PPIO is similar to that of the Tracker in BitTorrent. It is mainly used to manage the relationship between RIDs (Resource IDs, used to mark a file or stream) and P2P nodes. The Tracker records, for each RID, which peer nodes own the resource. When a peer wants to acquire a resource, it queries the Tracker for a first batch of peers, then downloads the data from the peers that own the resource. Subsequently, starting from that first batch, a “flooding” mechanism can be used to query for more peers, gradually forming a larger local peer library until almost all peers are found.
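Conceptually, a Tracker maintains a mapping from each RID to the peers that have announced holding it. A toy sketch of that bookkeeping (illustrative only, not PPIO’s actual protocol):

```python
from collections import defaultdict

class Tracker:
    """Toy Tracker: maps each RID to the set of peers that announced
    they hold the resource, and serves an initial batch on query."""

    def __init__(self):
        self._peers_by_rid = defaultdict(set)

    def announce(self, rid: str, peer_addr: str) -> None:
        """A peer declares that it owns (part of) the resource."""
        self._peers_by_rid[rid].add(peer_addr)

    def query(self, rid: str, limit: int = 50) -> list:
        """Return an initial batch of peers for the RID; the requester
        can then 'flood' those peers to discover even more."""
        return sorted(self._peers_by_rid[rid])[:limit]
```

A downloading peer would call `query` once to bootstrap, after which peer discovery continues peer-to-peer without the Tracker in the loop.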
A few things to consider:
Does Tracker represent the “center” of the network?
If there is a problem with this “center”, will the network get in trouble in the future?
Does Tracker have to exist?
Of course not, because the Tracker only discovers the initial nodes. We have designed another mechanism for discovering initial peer nodes: the DHT, a distributed hash table. PPIO uses the Kademlia (KAD) algorithm to implement the DHT. However, using the DHT to find initial nodes is relatively inefficient, and it is not as fast or efficient as the Tracker.
When developers develop applications based on PPIO, they can choose to implement a Tracker or DHT according to their own requirements. If you are pursuing efficiency and QoS, the Tracker method is better; if you want complete decentralization, you can only use the DHT method.
5. SuperPeer Node
In the delivery network side of PPIO, there is a special peer node that we have labeled the SuperPeer. This node is automatically selected by our algorithm based on various technical conditions. There are many screening conditions for the SuperPeer, such as network conditions, storage conditions, long-term online status, collateral status, and historical defaults. When all technical requirements are met, a node is automatically upgraded to a SuperPeer.
As a high-quality node, the SuperPeer is given priority by our scheduling algorithms, and its returns and revenue are higher.
6. Push Node
The Push node is used for warm-up scheduling. In short, it forces a large number of peers to host certain content in advance. Although PPIO has a mechanism by which content naturally becomes popular through the Overlay network, that process is slow. Content can therefore be pre-positioned through Push node scheduling. When the time comes for a file to be delivered, a large number of peers in the network will already possess it and be ready to upload it, greatly enhancing the user experience for those who need quick access.
Of course, the full details of PPIO’s media streaming design cannot be fully articulated in a single blog post. In this article, I have mainly touched on the PCDN architecture of PPIO. In future articles, we will explore media streaming in more detail. Be sure to subscribe to us so you don’t miss the next article.