IceLink and WebRTC

Anton Venema Jan 16, 2013 8:03:00 PM

IceLink has always been fundamentally compatible with WebRTC. Since its initial release, we have developed the peer connection negotiation algorithm to be completely interoperable with the open standard being developed by the Google Chrome team. By working from the same RFC specifications from day one, we have ensured that IceLink keeps as many options open as possible for you when it comes to interoperating with third-party libraries.

But getting a WebRTC session going involves a lot more than negotiation. In fact, you need twelve (!) additional components:

  • An audio capture engine that can read raw audio samples from the device microphone.
  • An audio render engine that can play back audio samples to the device speakers/headset.
  • A video capture engine that can grab raw images from the device camera.
  • A video render engine that can draw raw images to a visible on-screen container.
  • An audio encoding engine that can convert raw audio samples to compressed frames.
  • An audio decoding engine that can convert compressed frames back to raw audio samples.
  • A video encoding engine that can convert raw images to compressed frames.
  • A video decoding engine that can convert compressed frames back to raw images.
  • An audio packetizer that can convert compressed frames into a sequence of RTP packets.
  • An audio depacketizer that can convert a sequence of RTP packets back to compressed frames.
  • A video packetizer that can convert compressed frames into a sequence of RTP packets.
  • A video depacketizer that can convert a sequence of RTP packets back to compressed frames.

Basically, the sequence looks like this:

Audio:
Mic > Capture > Encode > Packetize > Network > Depacketize > Decode > Render > Speakers

Video:
Camera > Capture > Encode > Packetize > Network > Depacketize > Decode > Render > Screen
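
To make the division of labour concrete, here is a minimal sketch of the send side of the audio path. The interface names and the single-pass pump function are illustrative assumptions rather than IceLink types; they only show how the capture, encode, and packetize stages hand data to one another before it reaches the network. The receive side simply mirrors it: depacketize, decode, render.

    // Illustrative only: these interfaces mirror the component roles listed
    // above; they are not actual IceLink class names.
    interface AudioCapture { read(): Float32Array; }                          // Mic > Capture
    interface AudioEncoder { encode(samples: Float32Array): Uint8Array; }     // Capture > Encode
    interface AudioPacketizer { packetize(frame: Uint8Array): Uint8Array[]; } // Encode > Packetize
    interface Transport { send(packet: Uint8Array): void; }                   // Packetize > Network

    // One pass through the send side of the audio pipeline.
    function pumpAudioOnce(
      capture: AudioCapture,
      encoder: AudioEncoder,
      packetizer: AudioPacketizer,
      transport: Transport
    ): void {
      const samples = capture.read();        // raw PCM samples from the microphone
      const frame = encoder.encode(samples); // one compressed audio frame
      for (const packet of packetizer.packetize(frame)) {
        transport.send(packet);              // RTP packets onto the wire
      }
    }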

Phew! That's an exhaustive list, but it doesn't end there. The encoding, decoding, packetizing, and depacketizing have to line up exactly with other implementations if you expect them to talk to each other. Unless both sides construct and interpret the packet format and its contents in exactly the same way, there is no cross-communication between libraries.
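
As a concrete example of the kind of detail that has to match, every media packet starts with the fixed RTP header defined in RFC 3550, and both endpoints have to agree on how the payload type, sequence number, timestamp, and SSRC in that header are populated and interpreted. The parsing sketch below is generic RFC 3550 structure, not IceLink code.

    // Fixed 12-byte RTP header (RFC 3550). Interoperability also depends on the
    // payload type numbering, clock rates, and packetization rules layered on top.
    interface RtpHeader {
      version: number;        // always 2 for RTP
      padding: boolean;
      extension: boolean;
      csrcCount: number;
      marker: boolean;
      payloadType: number;    // maps to a negotiated codec
      sequenceNumber: number; // used to reorder packets and detect loss
      timestamp: number;      // media clock, at a codec-specific rate
      ssrc: number;           // identifies the sending source
    }

    function parseRtpHeader(packet: Uint8Array): RtpHeader {
      const view = new DataView(packet.buffer, packet.byteOffset, packet.byteLength);
      const b0 = view.getUint8(0);
      const b1 = view.getUint8(1);
      return {
        version: b0 >> 6,
        padding: (b0 & 0x20) !== 0,
        extension: (b0 & 0x10) !== 0,
        csrcCount: b0 & 0x0f,
        marker: (b1 & 0x80) !== 0,
        payloadType: b1 & 0x7f,
        sequenceNumber: view.getUint16(2), // big-endian (network byte order)
        timestamp: view.getUint32(4),
        ssrc: view.getUint32(8),
      };
    }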

The good news for you is that we take care of all of this. Completely. You don't have to touch a video codec or worry about RTP packet formats or how to access the webcam from .NET. You just create a PeerConnectionHub and let us take care of the rest. That's the idea behind the WebRTC extension for IceLink, which is available for .NET and JavaScript at the time of this writing.
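
To give a feel for the intended developer experience, here is a hypothetical sketch. The names below (createPeerConnectionHub, addLocalMedia, onRemoteMedia, connect) are placeholders for this example rather than the exact IceLink API; the point is that one hub object hides all of the capture, codec, and RTP machinery described above.

    // Illustrative pseudocode: the identifiers below are placeholders, not the
    // exact IceLink API surface.
    interface RemoteMedia { peerId: string; hasAudio: boolean; hasVideo: boolean; }

    interface PeerConnectionHub {
      addLocalMedia(options: { audio: boolean; video: boolean }): void;
      onRemoteMedia(handler: (media: RemoteMedia) => void): void;
      connect(signallingUrl: string, sessionId: string): Promise<void>;
    }

    declare function createPeerConnectionHub(): PeerConnectionHub; // assumed factory

    async function startCall(sessionId: string): Promise<void> {
      const hub = createPeerConnectionHub();
      hub.addLocalMedia({ audio: true, video: true }); // capture/encode/packetize handled internally
      hub.onRemoteMedia((media) => {
        console.log(`remote media from ${media.peerId}`); // depacketize/decode/render handled internally
      });
      await hub.connect("wss://signalling.example", sessionId); // placeholder signalling endpoint
    }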

Other platforms are coming soon (mobile devices are a priority), including a drop-in Java .jar file for web browsers that will let you use native WebRTC functionality when supported and otherwise gracefully fall back to the Java plugin.
