
Why do you need to collect WebRTC statistics on the device side?

Discover why server-side monitoring isn't enough for WebRTC and how device-side statistics collection can transform your user experience.

You've got your dashboards glowing green, showing healthy servers and plenty of headroom for your applications, databases, signaling, TURN, and media. Yet, the complaints keep rolling in: users can't connect to their meetings or are experiencing terrible call quality. Even on slow weekends, when your APM tools report everything is nominal, the frustration mounts. This is the perplexing reality of WebRTC user experience and monitoring, a world where the health of your infrastructure doesn't reflect the user's perception of quality.

The Limitations of Traditional APM

Traditional Application Performance Monitoring (APM) solutions, while excellent at gauging the health of your servers and backend systems, fall short when it comes to WebRTC. They were never built for that - they tell you if your engine is running smoothly, but they can't tell you if the road ahead is full of potholes, if the driver is distracted, or if the tires are flat (well, they might know about the tires… but stay with me here for a second). With WebRTC, a significant portion of the user experience is beyond your direct control and not visible through server-side monitoring.


Two critical factors are often overlooked:

  1. Server health vs. user device experience: Your APM tracks the performance of your servers, not the individual user's device. A server can be operating perfectly, yet a user's device might be struggling with CPU overload, low battery, or an outdated browser. These client-side issues directly impact WebRTC performance but remain invisible to server-focused monitoring

  2. Limited control over the user's environment: The part of your WebRTC solution that you control is a small piece of a much larger puzzle - one that also includes the user themselves, their specific device, their geographical location, and, most importantly, the network they are using. This lack of visibility into the user's environment is a major challenge for WebRTC

I remember once being told by an IT person that 90% of the problems users complain about end up being their own network. It is likely more than 90% for those of us running a well-oiled and well-maintained infrastructure.

This highlights the fundamental gap between what your servers can tell you and what your users are actually experiencing. Relying solely on server-side monitoring for WebRTC is akin to navigating a complex obstacle course blindfolded, assuming everything is fine as long as your own movements are smooth.

Reactive Troubleshooting Without Any Data

So, what do you do when a user complains about poor quality or a dropped connection?

Your first instinct might be to consult your monitoring dashboard, looking for a server-side anomaly at the exact time of the complaint. But how would that even help? Your APM dashboard might show a pristine green, offering no clues as to the root cause of the user's frustration.

This often leads to a frustrating back-and-forth, a "ping-pong game" of questions and requests that quickly becomes unsustainable. You might find yourself barraging the user with intrusive questions:

  • "What exactly did you do?"
  • "What do you mean by 'bad quality'?"
  • "Can you try again, but this time, remember to save and send me a webrtc-internals dump file?"
  • "Why didn't you open the webrtc-internals tab before starting the call? Can you do it again please?"

While webrtc-internals can be a valuable debugging tool, it's an advanced concept for most users and an incredibly inefficient way to troubleshoot widespread issues…

This approach, while perhaps viable for a handful of users, simply doesn't scale. Imagine trying to support 10, 100, or even 1,000 users this way. Now, envision the monumental task of assisting 1,000,000 users with this reactive, manual process. The sheer volume of complaints would overwhelm any support team, making it impossible to provide timely and effective assistance. Each "ping-pong" interaction consumes valuable time and resources, diverting your team from proactive development and optimization.

Moreover, placing the burden of diagnosis on the user creates a negative experience, further escalating their frustration. They're looking for a solution, not a technical interrogation. Relying on users to provide technical data is inherently unreliable; they may forget, struggle with the instructions, or simply lack the technical aptitude to do so accurately. This reactive, manual troubleshooting method is a dead end for any serious WebRTC service aiming for significant user adoption.

Being Proactive About WebRTC Monitoring

If you are serious about providing a high-quality WebRTC service, then a fundamental shift in your monitoring strategy is essential. The answer isn't to harass your users for obscure technical logs or to endlessly scrutinize server dashboards that offer no real insights into the user's plight. Instead, the solution lies in proactively collecting comprehensive WebRTC statistics directly from the end-user devices themselves.

This means instrumenting your WebRTC application to gather critical data points such as the following (a short collection sketch appears right after this list):

  • Network conditions: Jitter, packet loss, round-trip time, bandwidth estimations, and changes in network type (Wi-Fi, cellular)
  • Device performance: CPU and memory usage, battery levels, and details about the user's hardware and operating system
  • Browser and WebRTC API metrics: Browser version, WebRTC API usage, and specific WebRTC statistics like video resolution, frame rates, and codec information for both sent and received streams
  • User environment details: Location data (if applicable and consented), and information about any firewalls or proxies affecting connectivity
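
As a rough illustration, collecting most of the network and media metrics above starts with the browser's standard RTCPeerConnection.getStats() API. The TypeScript sketch below is not a production collector: the stats field names come from the W3C WebRTC statistics spec but their availability varies by browser, navigator.connection is a Chromium-only API, and the helper names (StatsSample, sampleStats, startStatsCollection, reportStats) are made up for this example.

```typescript
// Minimal sketch: periodically sample getStats() on an existing RTCPeerConnection
// and keep only a handful of fields. Everything is optional because field
// availability differs between browsers.

interface StatsSample {
  timestamp: number;
  jitterSec?: number;                 // inbound-rtp jitter, in seconds
  packetsLost?: number;               // cumulative inbound packets lost
  packetsReceived?: number;           // cumulative inbound packets received
  framesPerSecond?: number;           // decoded video frame rate
  roundTripTimeSec?: number;          // selected candidate-pair RTT, in seconds
  availableOutgoingBitrate?: number;  // sender-side bandwidth estimate, in bps
  networkType?: string;               // effective connection type, e.g. "4g" (Chromium-only)
}

async function sampleStats(pc: RTCPeerConnection): Promise<StatsSample> {
  const sample: StatsSample = { timestamp: Date.now() };
  const report = await pc.getStats();

  report.forEach((stats) => {
    if (stats.type === "inbound-rtp" && stats.kind === "video") {
      sample.jitterSec = stats.jitter;
      sample.packetsLost = stats.packetsLost;
      sample.packetsReceived = stats.packetsReceived;
      sample.framesPerSecond = stats.framesPerSecond;
    }
    if (stats.type === "candidate-pair" && stats.nominated && stats.state === "succeeded") {
      sample.roundTripTimeSec = stats.currentRoundTripTime;
      sample.availableOutgoingBitrate = stats.availableOutgoingBitrate;
    }
  });

  // Network Information API is non-standard (Chromium only), hence the defensive cast.
  sample.networkType = (navigator as any).connection?.effectiveType;
  return sample;
}

// Ship a sample every few seconds; reportStats is a placeholder for whatever you
// use to send the data to your own collection backend (a batched POST, a beacon, etc.).
function startStatsCollection(
  pc: RTCPeerConnection,
  reportStats: (s: StatsSample) => void,
  intervalMs = 5000
): () => void {
  const id = setInterval(async () => {
    if (pc.connectionState === "closed") {
      clearInterval(id);
      return;
    }
    reportStats(await sampleStats(pc));
  }, intervalMs);
  return () => clearInterval(id);
}
```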

Collecting this data empowers you to:

  • Identify trends and patterns: Pinpoint common issues across your user base that might not be visible on a per-session basis. For example, consistently high packet loss from a specific region or a particular browser version causing performance degradation
  • Proactively detect and address issues: Instead of waiting for user complaints, you can set up alerts based on key performance indicators (KPIs) like elevated packet loss or low frame rates, allowing you to investigate and resolve problems before they impact a large number of users (a simple threshold check is sketched after this list)
  • Troubleshoot efficiently: When a user does complain, you'll have a rich dataset at your fingertips, allowing you to quickly diagnose the root cause of their issue without extensive back-and-forth. This data can pinpoint whether the problem lies with their network, their device, or a specific server-side component
  • Optimize user experience: By understanding the real-world performance metrics, you can make informed decisions about infrastructure improvements, client-side optimizations, and adaptive quality adjustments to ensure a consistently high-quality user experience
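
To make the alerting idea concrete, here is an equally small sketch of a client-side threshold check that derives a packet-loss percentage from the cumulative counters collected above. The threshold values and names (KpiSample, detectIssues, MAX_PACKET_LOSS_PCT, MIN_VIDEO_FPS) are illustrative assumptions, not recommendations, and in practice the flags would feed your own alerting pipeline rather than the console.

```typescript
// Minimal sketch of a KPI check between two consecutive stats samples.
// The sample shape mirrors the collection sketch above; thresholds are illustrative.

type KpiSample = {
  packetsLost?: number;      // cumulative inbound packets lost
  packetsReceived?: number;  // cumulative inbound packets received
  framesPerSecond?: number;  // decoded video frame rate
};

const MAX_PACKET_LOSS_PCT = 5; // flag loss above 5% over the sampling interval
const MIN_VIDEO_FPS = 15;      // flag decoded frame rates below 15 fps

function detectIssues(prev: KpiSample, curr: KpiSample): string[] {
  const issues: string[] = [];

  // Loss over the interval, derived from the cumulative counters.
  const lost = (curr.packetsLost ?? 0) - (prev.packetsLost ?? 0);
  const received = (curr.packetsReceived ?? 0) - (prev.packetsReceived ?? 0);
  const total = lost + received;
  if (total > 0) {
    const lossPct = (lost / total) * 100;
    if (lossPct > MAX_PACKET_LOSS_PCT) {
      issues.push(`high packet loss: ${lossPct.toFixed(1)}%`);
    }
  }

  if (curr.framesPerSecond !== undefined && curr.framesPerSecond < MIN_VIDEO_FPS) {
    issues.push(`low frame rate: ${curr.framesPerSecond} fps`);
  }

  return issues;
}

// Example usage: compare the latest two samples and report anything that crossed
// a threshold (here, just logged to the console).
const flagged = detectIssues(
  { packetsLost: 100, packetsReceived: 4000 },
  { packetsLost: 400, packetsReceived: 8000, framesPerSecond: 12 }
);
flagged.forEach((issue) => console.warn("WebRTC KPI alert:", issue));
```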

Conclusion

Failing to collect these vital WebRTC statistics directly from end-user devices is tantamount to running blind. You're operating in a reactive mode, constantly playing catch-up, and unable to truly understand or control the user experience. Investing in a robust WebRTC monitoring solution that gathers client-side data isn't just a "nice-to-have" - it's a critical component for the success and scalability of any serious WebRTC service. It transforms your approach from reactive guesswork to proactive, data-driven optimization, ensuring that your users consistently have a positive experience, not just a green dashboard.

Need help figuring out a solution? One that keeps that monitoring data in YOUR hands while giving you the best visibility into the potential issues users are having?

Give us a ping 😉
