
Creating Reliable Telehealth Video Conferencing Connections: Best Practices

Jerod Venema Jun 7, 2024 2:00:55 PM

 

In a perfect world, every patient and healthcare provider would have access to a reliable, high-speed internet connection, on powerful devices, at all times. However, given the highly “mobile” nature of people’s lives today (pun intended) and the lack of broadband in many rural and underserved communities, poor or unreliable internet connections are common and can make telehealth video conferencing difficult or even impossible.

In this blog post, we will examine the conditions that can degrade the quality and reliability of video conferencing connections and the best practices to use when choosing a video conferencing provider to ensure that you can connect with any patient or healthcare professional -- regardless of their device and location.

Note: If you are unfamiliar with the basics of video conference scaling for telehealth applications and the pros and cons of each topology (i.e. P2P, SFU, MCU), we suggest you read: Scaling Telehealth Applications: Best Practices

Problem #1: Rural Communities 

Telemedicine offers a way for healthcare providers to reach underserved populations in rural communities in ways that would not be possible under a more traditional healthcare model. Instead of driving long distances or waiting weeks for a healthcare provider to travel to a remote area, telemedicine enables doctors to connect with patients in a more efficient and timely manner.

However, this model can only work if patients in remote areas have access to the internet upload and download speeds necessary to support a video consultation. Statistics show that 17 percent of the American population (55 million people) lack access to the Federal Communications Commission (FCC) standard for “broadband” internet -- 25 Mbps download/3 Mbps upload. Twenty percent of those lack access even to speeds of 4 Mbps download/1 Mbps upload which, depending on the video conferencing platform used, could be insufficient for an average-quality video conference.

Total household bandwidth utilization must also be factored into the equation. While an average-quality video consult may only require an upload speed of 1 Mbps and a download speed of 1 Mbps, this could consume the entire bandwidth available to a rural household. If anyone else in the household goes online, or if another participant joins the call, the conversation can break down unless the video conferencing platform adapts flexibly to the available bandwidth.
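To make that arithmetic concrete, here is a minimal sketch (illustrative only; the per-stream bitrate is an assumption, not a platform default) of how per-participant bandwidth grows in a serverless, peer-to-peer call:

```typescript
// Illustrative only: the per-stream bitrate is an assumption, not a platform default.
const STREAM_KBPS = 1000; // ~1 Mbps for an average-quality video stream

// In a peer-to-peer (mesh) call, each participant sends a copy of their
// stream to every other participant and receives one stream from each.
function meshBandwidthKbps(participants: number) {
  const peers = participants - 1;
  return {
    uploadKbps: peers * STREAM_KBPS,
    downloadKbps: peers * STREAM_KBPS,
  };
}

// A two-person consult just fits the 1 Mbps upload side of a 4/1 rural connection...
console.log(meshBandwidthKbps(2)); // { uploadKbps: 1000, downloadKbps: 1000 }

// ...but adding one more participant doubles the requirement, and the
// 2 Mbps upload now exceeds the 1 Mbps the household has available.
console.log(meshBandwidthKbps(3)); // { uploadKbps: 2000, downloadKbps: 2000 }
```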

Problem #2: Mobile Connections - Can you hear me now?

Mobile device connections can be unreliable. As people move from location to location, their devices can drop from a 5G network to a 3G network, which can disrupt a multi-party video call if the corresponding change in available bandwidth is not accommodated. Even WiFi networks become unpredictable when a participant moves in and out of optimal range, causing packet loss and extra bandwidth consumption as dropped packets and keyframes are re-transmitted. To maintain the best possible video quality and user experience for these patients, healthcare providers need to choose a video conferencing platform that can dynamically and seamlessly change bandwidth, topology, and connection type when network conditions change, without any disruption to the end user.
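As an illustration of what that kind of seamless recovery looks like at the WebRTC layer (a hedged sketch, not a description of any particular platform's internals), the snippet below watches the standard RTCPeerConnection connection state and triggers an ICE restart after a drop, such as a hand-off from WiFi to cellular. The sendOfferToPeer function is a hypothetical stand-in for whatever signaling channel your application provides:

```typescript
// Sketch only: assumes a browser WebRTC context and a hypothetical
// signaling function `sendOfferToPeer` supplied by the application.
declare function sendOfferToPeer(offer: RTCSessionDescriptionInit): void;

function watchConnection(pc: RTCPeerConnection) {
  pc.addEventListener("connectionstatechange", async () => {
    // "disconnected"/"failed" often follow a transient network change
    // (e.g. WiFi -> 3G); an ICE restart gathers fresh candidates and
    // re-establishes media without tearing down the call.
    if (pc.connectionState === "disconnected" || pc.connectionState === "failed") {
      pc.restartIce(); // flags the next offer to use new ICE credentials
      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);
      sendOfferToPeer(offer); // deliver via your signaling channel
    }
  });
}
```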

Problem #3: Underpowered Devices

Not every device available on the market today is powerful enough to support an SD-quality multi-party video conference. While much of the population is using high-powered smartphones, laptops, computers, and tablets that are able to support larger group conferences, there is a subset of the population that does not have access to these devices.

Some budget mobile phones on the market today are unable to sustain a video conference, particularly as conferences move beyond simple two-participant peer-to-peer video calls. These devices simply do not have the processing power to encode and decode multiple video streams simultaneously. Older devices nearing the end of their manufacturer’s supported lifespan are similarly affected.
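On the web, one rough way to account for this is to read the coarse hardware hints browsers expose and fall back to a smaller capture (or a server-mixed stream) on low-end devices. The sketch below is an assumption-laden illustration: the thresholds are arbitrary, and navigator.deviceMemory is only available in Chromium-based browsers:

```typescript
// Sketch only: navigator.hardwareConcurrency is widely supported;
// navigator.deviceMemory is Chromium-only, so it is treated as optional.
// The thresholds and resolutions below are illustrative assumptions.
function suggestVideoProfile() {
  const cores = navigator.hardwareConcurrency ?? 2;
  const memoryGb = (navigator as any).deviceMemory ?? 2;

  if (cores <= 2 || memoryGb <= 2) {
    // Low-end device: capture small and prefer a server-mixed (MCU-style)
    // layout so the device only has to decode a single stream.
    return { width: 320, height: 240, frameRate: 15, preferMixedStream: true };
  }
  return { width: 1280, height: 720, frameRate: 30, preferMixedStream: false };
}

// Example: feed the suggestion into getUserMedia constraints.
async function openCamera() {
  const profile = suggestVideoProfile();
  return navigator.mediaDevices.getUserMedia({
    audio: true,
    video: { width: profile.width, height: profile.height, frameRate: profile.frameRate },
  });
}
```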

Best Practice # 1: Choose a video solution that monitors video quality and is capable of bandwidth adaptation.

When a video call exceeds the available bandwidth, the result can be frozen video, lower image quality, audio/video syncing issues, and packet loss. That is why you need to choose a system that monitors call quality and continuously adapts when packet loss, network jitter, and delays occur.

As a first step, many video conferencing systems drop frames or reduce the frame rate when network conditions become poor. This is a reasonable practice and is often sufficient for maintaining a moderate-quality video conference. However, the flexibility to easily reach for other options, such as encoder quality settings and video resolution selection, is an important capability that should not be overlooked.
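As a concrete, hedged example of what such adaptation can look like using only the standard WebRTC APIs (independent of whatever a given vendor implements internally), the sketch below samples outbound packet loss with getStats() and caps the sender's bitrate via RTCRtpSender.setParameters(); the loss threshold and bitrate figures are illustrative assumptions:

```typescript
// Sketch only: standard WebRTC APIs; thresholds and bitrates are assumptions.
async function adaptToPacketLoss(pc: RTCPeerConnection) {
  const stats = await pc.getStats();
  let lost = 0;
  let sent = 0;

  stats.forEach((report) => {
    // remote-inbound-rtp reports tell the sender how many packets the receiver lost.
    if (report.type === "remote-inbound-rtp" && report.kind === "video") {
      lost += report.packetsLost ?? 0;
    }
    if (report.type === "outbound-rtp" && report.kind === "video") {
      sent += report.packetsSent ?? 0;
    }
  });

  // Note: these counters are cumulative; a production implementation
  // would diff successive samples to get the recent loss rate.
  const lossRatio = sent > 0 ? lost / sent : 0;

  const videoSender = pc.getSenders().find((s) => s.track?.kind === "video");
  if (!videoSender) return;

  const params = videoSender.getParameters();
  if (!params.encodings || params.encodings.length === 0) return;

  // Cap the encoder on lossy links; relax the cap when the network recovers.
  params.encodings[0].maxBitrate = lossRatio > 0.05 ? 300_000 : 1_500_000;
  await videoSender.setParameters(params);
}

// Example: sample every few seconds for the life of the call.
// setInterval(() => adaptToPacketLoss(pc), 5000);
```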

Best Practice # 2: Choose a video solution that is capable of topology adaptation. 

When too many frames are dropped or the frame rate is reduced too far, call quality will suffer. In order to maintain the integrity of your video call, your video conferencing solution must be able to seamlessly switch topologies should participants' available bandwidth change significantly during the call.

No one topology is suitable in all circumstances. Every connection type has its pros and cons, and it is important to strike the right balance between connection quality and cost (the sketch after this list makes the per-participant stream counts concrete):

  • Peer-to-peer (P2P): In a P2P topology, each participant connects directly to every other participant in the call. If there are three people in a consult, each individual is responsible for uploading two streams and downloading two streams. This requires the most client bandwidth but provides the lowest cost, as no media server is required to route the streams.
  • Selective Forwarding Unit (SFU): In an SFU topology, each participant uploads a single encrypted video stream to a server, and the server forwards those streams to each of the other participants. Uploading one stream instead of one per peer reduces upload bandwidth and encoding CPU, making SFU a good alternative when connections become less reliable.
  • Multipoint Control Unit (MCU): In an MCU topology, each participant uploads their video stream to a server, and the server mixes all of the incoming streams into a single composite stream that it sends back to each participant. This requires additional server CPU, but it allows underpowered devices and patients with poor internet connectivity to actively participate in the call.
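Here is the sketch referenced above: a minimal, SDK-agnostic model of the per-participant stream counts behind those trade-offs:

```typescript
// Illustrative stream-count model for an N-participant call.
// Server-side cost (SFU forwarding, MCU mixing) is not modeled here.
type Topology = "p2p" | "sfu" | "mcu";

function streamsPerParticipant(topology: Topology, participants: number) {
  const peers = participants - 1;
  const model = {
    p2p: { upload: peers, download: peers }, // a copy to/from every peer
    sfu: { upload: 1, download: peers },     // one stream up, each peer's stream down
    mcu: { upload: 1, download: 1 },         // one stream up, one mixed stream down
  };
  return model[topology];
}

// Example: a four-person consult.
console.log(streamsPerParticipant("p2p", 4)); // { upload: 3, download: 3 }
console.log(streamsPerParticipant("sfu", 4)); // { upload: 1, download: 3 }
console.log(streamsPerParticipant("mcu", 4)); // { upload: 1, download: 1 }
```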

Ask your potential video conferencing providers what types of topologies they use for their conferences and whether they can employ multiple topologies at the same time. For instance, can a doctor and a patient start in a P2P connection and then transition into an SFU connection as a third participant (i.e., a specialist) is added? If the patient then moves from a 5G to a 3G network, can an MCU connection be opened for that individual while maintaining an SFU connection for the rest of the participants? This type of hybrid topology gives you the greatest flexibility and allows you to maintain a bandwidth-optimized connection for each possible use case in your organization.
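To illustrate the kind of per-participant decision logic a hybrid approach implies, here is a hedged sketch with hypothetical names and thresholds; it is not a description of any vendor's internals:

```typescript
// Hypothetical per-participant decision sketch; all thresholds are assumptions.
interface ParticipantConditions {
  participants: number;     // total people in the call
  uploadKbps: number;       // measured available upload bandwidth
  downloadKbps: number;     // measured available download bandwidth
  lowPowerDevice: boolean;  // e.g. budget phone or aging laptop
}

function chooseTopology(c: ParticipantConditions): "p2p" | "sfu" | "mcu" {
  // Two-party calls with healthy bandwidth can stay serverless and cheap.
  if (c.participants === 2 && c.uploadKbps >= 1000 && c.downloadKbps >= 1000) {
    return "p2p";
  }
  // Constrained or underpowered endpoints receive a single mixed stream.
  if (c.lowPowerDevice || c.downloadKbps < 1000) {
    return "mcu";
  }
  // Otherwise forward individual streams through a server.
  return "sfu";
}

// Example: a patient dropping from 5G to 3G mid-call might be moved to MCU
// while the doctor and specialist stay on SFU connections.
chooseTopology({ participants: 3, uploadKbps: 400, downloadKbps: 800, lowPowerDevice: false }); // "mcu"
```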

 

LiveSwitch is the real-time audio and video provider of choice among the leading brands in telehealth. If you’re thinking of building your telehealth application with WebRTC technology, get in touch with our team or check out our telehealth demo here!