98 points by synthetix 7 months ago | 18 comments
mralexhay 7 months ago next
Great article on implementing WebRTC for secure zero-client video conferencing! I've been looking into WebRTC for a project I'm working on, and this looks like it'll be really useful. Thanks for sharing!
vincentgarreau 7 months ago next
Quick question for the author: did you consider using STUN/TURN for handling NAT traversal in this project? I've heard they're great for this kind of use case, but I can't quite wrap my head around them.
mathiasbynens 7 months ago next
You're correct: STUN and TURN help with NAT traversal. STUN gives a client a way to learn its public IP address and port, which is often enough to set up a direct peer-to-peer connection.

TURN is the fallback for when NATs or firewalls are restrictive enough that STUN alone can't establish a connection: it relays the media between the two peers through an intermediary server. That makes it essential when you're dealing with symmetric NATs, locked-down corporate firewalls, and other tricky networking setups.
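If it helps, here's a minimal sketch of pointing an RTCPeerConnection at STUN/TURN servers. The server URLs and credentials are placeholders, not real endpoints:

    // Placeholder STUN/TURN endpoints: swap in your own servers and credentials.
    const pc = new RTCPeerConnection({
      iceServers: [
        { urls: 'stun:stun.example.com:3478' },      // STUN: learn public IP/port
        {
          urls: 'turn:turn.example.com:3478',        // TURN: relay when direct paths fail
          username: 'demo-user',
          credential: 'demo-pass'
        }
      ]
    });

    // Watch which candidate types get gathered (host / srflx / relay).
    pc.onicecandidate = (event) => {
      if (event.candidate) {
        console.log(event.candidate.type, event.candidate.candidate);
      }
    };

If you see 'relay' candidates showing up, traffic is going through the TURN server.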
greghouse 7 months ago prev next
This is fascinating. I'm curious whether any performance benchmarks have been done for WebRTC in a zero-client scenario. Would love to see comparisons with other video conferencing solutions.
paulirish 7 months ago next
There have been some performance benchmarks done by various folks over the years. A few resources worth looking into:

- WebRTC Performance Analysis: https://www.measurementlab.net/blog/webrtc-performance-analysis/
- How WebRTC Affects Browser Performance: https://www.takipi.com/blog/how-web-rtc-affects-browser-performance/
- WebRTC performance considerations: https://www.sitepoint.com/web-rtc-performance-considerations/
jakearchibald 7 months ago prev next
This is awesome! I might build this into https://offlinefirst.org as a video conferencing solution. Would love to chat with you further about the security implications of running this on a public server.
jaffathecake 7 months ago next
Speaking of Offline First, is it possible or advisable to run something like this in a PWA? If I understand correctly, WebRTC technically requires a live connection, but I'm wondering if there are some workarounds involving Service Workers and background sync.
getify 7 months ago next
To use WebRTC "offline", you'd need an interim step for caching media locally and then rehydrating the WebRTC session once the network comes back.

One approach: capture audio/video with the MediaRecorder API during the offline period, persist the chunks with IndexedDB, and then send or stream them to the peer when the user reconnects.
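A rough sketch of that idea. The upload endpoint is hypothetical, and a plain array stands in for where you'd really persist chunks to IndexedDB so they survive a reload:

    async function recordWhileOffline() {
      const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
      const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
      const chunks = [];

      recorder.ondataavailable = (event) => {
        if (event.data.size > 0) chunks.push(event.data);  // buffer while offline
      };
      recorder.start(1000);  // emit a chunk roughly every second

      window.addEventListener('online', () => {
        recorder.onstop = async () => {
          const recording = new Blob(chunks, { type: 'video/webm' });
          // '/api/upload-recording' is a made-up endpoint: replace with your own
          // storage path, or feed the blob back into the call once reconnected.
          await fetch('/api/upload-recording', { method: 'POST', body: recording });
        };
        recorder.stop();
      });
    }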
addyosmani 7 months ago prev next
What are the options for integrating audio processing/adaptive echo cancellation in these types of setups?
saulmadewesome 7 months ago next
WebRTC handles audio processing by default, with features such as AEC or Acoustic Echo Cancellation, lip synchronization, and adjustable jitter buffer. Web Audio API can also be used for more fine-tuned audio control: https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API#Introducing_the_Web_Audio_API
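For example (a sketch, not from the article): you can request the built-in processing explicitly in the getUserMedia constraints and then route the track through the Web Audio API for finer control before handing it to the peer connection:

    async function getProcessedAudio() {
      const stream = await navigator.mediaDevices.getUserMedia({
        audio: {
          echoCancellation: true,   // AEC
          noiseSuppression: true,
          autoGainControl: true
        }
      });

      const ctx = new AudioContext();
      const source = ctx.createMediaStreamSource(stream);
      const gain = ctx.createGain();
      const dest = ctx.createMediaStreamDestination();

      gain.gain.value = 0.8;        // trim the level before it hits the peer connection
      source.connect(gain);
      gain.connect(dest);

      return dest.stream;           // add this stream's audio track to the RTCPeerConnection
    }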
johnresig 7 months ago prev next
There's also the Opus codec for audio compression, plus a number of audio libraries you can layer on top, such as howler.js or wavesurfer.js.
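Opus is already the default audio codec in browser WebRTC, but if you want to be explicit about it, something like this reorders the codec list so Opus is tried first (a sketch; setCodecPreferences isn't available in every browser):

    const pc = new RTCPeerConnection();
    const transceiver = pc.addTransceiver('audio');
    const caps = RTCRtpSender.getCapabilities('audio');

    if (caps && typeof transceiver.setCodecPreferences === 'function') {
      const opus = caps.codecs.filter((c) => c.mimeType.toLowerCase() === 'audio/opus');
      const rest = caps.codecs.filter((c) => c.mimeType.toLowerCase() !== 'audio/opus');
      transceiver.setCodecPreferences([...opus, ...rest]);  // Opus first
    }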
davatron5000 7 months ago prev next
How do you manage latency and bandwidth concerns when multiple users are transmitting video simultaneously?
awscloud 7 months ago next
WebRTC's congestion control adapts the send bitrate automatically, adjusting video resolution and frame rate to the bandwidth actually available on each peer connection. You can also shape this yourself through the encoding parameters on each sender, and simulcast or SVC (Scalable Video Coding) build on it for multi-party calls.
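If you want to shape it yourself, the per-sender encoding parameters are the knob to reach for. Rough sketch (the parameter values are just examples):

    // Cap a video sender's bitrate and downscale its resolution.
    async function capVideoSender(pc, maxKbps) {
      const sender = pc.getSenders().find((s) => s.track && s.track.kind === 'video');
      if (!sender) return;

      const params = sender.getParameters();
      if (!params.encodings || params.encodings.length === 0) {
        params.encodings = [{}];
      }
      params.encodings[0].maxBitrate = maxKbps * 1000;    // bits per second
      params.encodings[0].scaleResolutionDownBy = 2;      // send at half resolution
      await sender.setParameters(params);
    }

    // e.g. capVideoSender(pc, 500) to keep the outgoing video around 500 kbps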
googlechrome 7 months ago prev next
Simulcast splits the outgoing video into multiple substreams at different resolutions and bitrates. An SFU can then forward whichever quality suits each receiving peer's network and device capabilities, which keeps congestion and latency down in multi-party calls.
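Enabling it at the sender is a matter of declaring multiple encodings when the track is added. Sketch (the layer names and bitrates are arbitrary):

    async function addSimulcastVideo(pc) {
      const stream = await navigator.mediaDevices.getUserMedia({ video: true });
      pc.addTransceiver(stream.getVideoTracks()[0], {
        direction: 'sendonly',
        streams: [stream],
        sendEncodings: [
          { rid: 'low',  scaleResolutionDownBy: 4, maxBitrate: 150000 },
          { rid: 'mid',  scaleResolutionDownBy: 2, maxBitrate: 500000 },
          { rid: 'high', scaleResolutionDownBy: 1, maxBitrate: 1500000 }
        ]
      });
    }

The SFU then decides which of the layers to forward to each receiver.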
slightlylate 7 months ago prev next
In our implementation, we started calls on the lower-resolution substreams and gradually shifted toward higher resolutions as both peers' network conditions stabilized. This made the initial handshake smoother and prevented bandwidth starvation later in the call.
mozhacks 7 months ago prev next
What are your thoughts on using a server-side SFU (Selective Forwarding Unit) for video transmissions in WebRTC? This approach is meant to provide more control over video conferencing.
justinfagnani 7 months ago next
I think this is a good idea; an SFU offers greater flexibility and allows for more complex architectures. There are several open-source SFUs like Jitsi, Janus, Kurento, and mediasoup that you can look into. Keep in mind they add server-side compute and potentially bandwidth costs.
vjeux 7 months ago prev next
Another option is a multi-party-call library on the client side, like EasyRTC. That avoids running your own media server and handles the scaling and SDP management for you, smoothing over the complexities of the raw WebRTC API.

The caveat is increased CPU utilization on the client end, so you need sufficient handling of device resources, especially on mobile.