How Peer5’s Analytics Tools Provide Insight Into P2P Efficiency and UX, Both Past and Present

Learn how to use Peer5's range of analytics tools to keep tabs on your network and refine your corporate broadcasts.

Peer5 strives to provide its users with the highest quality video using the fewest network resources possible. Our advanced analytics tools deliver detailed insight into both of these metrics and offer powerful, easy-to-use ways to identify any issues with the network.

In this article we will give an overview of the four analytics sections available through the Admin Console:

  • Analytics
  • Realtime Analytics
  • Advanced Analytics
  • User Analytics

Analytics

The Analytics and Realtime Analytics sections both give us an overview of the network’s performance using the same graphical layout.

The first graphs show the ratio of HTTP to peer-to-peer (P2P) delivery and the number of concurrent viewers. The P2P network's performance is measured by how much of the traffic is offloaded to peering: the greater the ratio of P2P to HTTP delivery, the more bandwidth Peer5's technology has offloaded from the network.
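As a simple illustration (not Peer5's internal implementation), the offload ratio described above can be sketched as the share of bytes delivered via P2P out of all bytes delivered:

```python
def p2p_offload_percent(p2p_bytes: int, http_bytes: int) -> float:
    """Percentage of total delivery offloaded to peering."""
    total = p2p_bytes + http_bytes
    if total == 0:
        return 0.0  # no traffic yet
    return 100.0 * p2p_bytes / total

# Hypothetical stream: 80 MB served peer-to-peer, 20 MB from HTTP.
print(p2p_offload_percent(80_000_000, 20_000_000))  # 80.0
```

The function names and byte counts here are illustrative; the dashboard computes this ratio for you.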

Additionally, we can gauge the user experience (UX) by visualizing the rebuffering that was experienced globally, per group, or per individual viewer. Rebuffering is any time the video should be playing but isn't, for example when buffer starvation (a network slowdown) freezes the video. This happens because the device cannot download the stream fast enough to maintain continuous, fluid playback. So a rebuffering figure of 1% means the image was frozen 1% of the time.
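That percentage is simply stalled time divided by total session time. A minimal sketch (the function name and inputs are assumptions, not Peer5's API):

```python
def rebuffering_percent(stalled_seconds: float, session_seconds: float) -> float:
    """Share of the viewing session during which playback was frozen, as a percentage."""
    if session_seconds <= 0:
        raise ValueError("session length must be positive")
    return 100.0 * stalled_seconds / session_seconds

# A 60-minute session with 36 seconds of total stalling:
print(rebuffering_percent(36, 3600))  # 1.0
```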

Using the provided analytics tools, we can see, for example, if there was repeated rebuffering throughout the stream or if there was a terminal rebuffer that caused the user to stop participating. We can also identify whether the rebuffering is affecting just one user/geography or all users equally, which might suggest client/delivery issues vs origin/encoding issues.

We generally target an overall average of 0.5-1% rebuffering, and staying below 0.5% is entirely achievable. Rebuffering is shown in red, and we consider anything above 3-4% to be very disruptive.

You can select a timeframe by clicking and dragging within any of the graphs; all of the graphs will then update to reflect only that period. Use your browser's back button to undo the selection.

For more targeted statistics you can also filter devices by country, integration type, or specific device ID using the "Add a filter +" option in the upper left-hand corner.

By drilling down we can check whether individual users or specific groups of users are skewing the rebuffering results. For example, if 3 machines out of 100 have 50% rebuffering, that alone brings the overall average up to 1.5%, even though the incidents were actually very isolated.
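The arithmetic behind that example can be verified with a quick sketch (the device counts are the hypothetical ones from the text):

```python
# 3 devices stalled 50% of the time; the other 97 had no rebuffering at all.
rebuffering_by_device = [50.0] * 3 + [0.0] * 97

overall = sum(rebuffering_by_device) / len(rebuffering_by_device)
print(overall)  # 1.5 -- a seemingly network-wide problem that is really 3 machines
```

This is why the drill-down matters: an average alone cannot distinguish a mild global issue from a severe local one.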

The difference between Realtime Analytics and Analytics is that Realtime Analytics shows us no more than the last hour’s results, whereas Analytics includes all historical values.

These statistics are very effective for troubleshooting your network, because a user who experiences rebuffering during a test is very likely to experience it during a real event as well.

Advanced Analytics

Advanced Analytics breaks down the performance results by groups, be it by office or department (which Peer5 detects automatically based on IP address), by city, country, ISP, platform, browser, OS, or by using custom labels that the customer may wish to apply. This allows us to drill down and differentiate infrastructure or integration problems from individual device issues.

If, for example, we see an office in the Performance Breakdown with a higher-than-average rebuffering percentage, we can hover over that office with the mouse, causing two magnifying glasses to appear. Clicking the one with a plus sign adds a filter to the results, and all of the graphs on the page change to reflect the selected office, allowing us to break the results down further. (The same can be achieved by selecting the corresponding segment of the doughnut chart on the left.) We can then review which browsers the viewers in that office were using, whether Windows users fared differently from Mac users, and so on.

User Analytics

If we then select the User Analytics section from the sidebar on the left, the timeframe selection and any filters applied in the previous sections are preserved. Here we can see detailed results for individual users. If you select one specific user, the charts below show the different values for that viewer's playback session. You can even see a timeline of that user's experience.

The User IDs are customizable, so instead of a string of random characters we can assign a name or e-mail address in order to more easily identify the devices on the network.

The User Analytics section is where we granularly examine network performance as well as UX. Who had the problem? When did the problem arise? And how long did the problem persist?

These powerful analytics tools, paired with our easy-to-implement silent tester, allow us to detect and solve any network or device issues before they have the chance to disrupt an important event.

For more information on our silent tester, take a look at this short walkthrough.