Managing Bandwidth in Healthcare
© Paul Christian Nelis, 18 October, 2024
There are several reasons for variable demand on a healthcare system’s data network, especially the outbound and inbound connectivity needs of a given facility. Managing that demand is becoming increasingly important for clinics and hospitals of all types. An unartful approach to this management will typically either carry more risk than necessary or cost more than necessary. Before looking at solutions, let's examine some current drivers of need in healthcare.
IoT stands for the Internet of Things, and it represents the class of devices that constantly funnel data back to a system to be monitored and potentially acted upon. With the expansion of IoT devices in healthcare, there is burgeoning demand for sending telemetry data to analysis engines. While some of these engines run on on-premises systems, a growing number are hosted by Software-as-a-Service (SaaS) vendors themselves, or by cloud platforms such as Microsoft Azure, Amazon’s AWS, or the Google Cloud Platform (GCP). In-patient systems might be detecting a patient’s location, pulse, oxygen levels, or heart activity. Some of these detectors are even available through wearable devices, like a watch or similarly portable device.
These IoT systems stream constant, or near-constant, data to the systems designed to interpret that data and suggest (where appropriate) subsequent action. More and more of these devices appearing in more and more clinical settings means a greater and greater impact on network bandwidth, especially the link the clinic or hospital has with the Internet.
This is a source of bandwidth consumption that can be difficult to predict. It’s not just the number of patients at any given time, but the patients presenting symptom type Alpha monitored by device type Theta, or Delta, Epsilon, and so on. The total volume of telemetry data attributable to IoT devices can thus vary greatly by both the number of patients and the needs of those specific patients at the time. The mix of patients and devices is always in flux.
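To make that variability concrete, here is a minimal sketch in Python of how aggregate telemetry demand shifts with the patient-and-device mix. All device names and per-device data rates below are invented for illustration; real figures vary by vendor and configuration.

```python
# Illustrative estimate of aggregate IoT telemetry bandwidth.
# Device names and per-device rates are hypothetical examples,
# not measurements from any real product.

# Approximate telemetry rate per device, in kilobits per second (assumed)
DEVICE_KBPS = {
    "pulse_oximeter": 2,
    "ecg_monitor": 32,
    "location_tag": 1,
    "wearable_watch": 8,
}

def aggregate_kbps(census: dict) -> float:
    """Sum the telemetry load for a given mix of active devices."""
    return sum(DEVICE_KBPS[device] * count for device, count in census.items())

# Monday morning: many ECG monitors in use
print(aggregate_kbps({"ecg_monitor": 40, "pulse_oximeter": 120, "location_tag": 200}))
# A similar patient count, but a different acuity mix, needs more bandwidth
print(aggregate_kbps({"ecg_monitor": 5, "pulse_oximeter": 155, "wearable_watch": 200}))
```

Two censuses of roughly the same size can produce very different loads; it is the mix, not just the headcount, that drives demand.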
As with IoT devices, medical imaging is becoming more and more common in diagnostic cycles, and the more advanced forms of imaging, such as PET/CT and MRI, can carry very large data volumes. Whether it’s a matter of getting a patient’s image to a reading radiologist outside the hospital network, or receiving prior images from outside sources for the internal radiologist to compare with today’s images, variable volumes of this activity, and variations in the mix of technologies in use, can have a significant impact on the network bandwidth required in and out of a hospital or clinic.
Similarly, it’s becoming increasingly valuable to pass a given medical image or other set of diagnostic data through an Artificial Intelligence (AI) algorithm for pre- or post-processing. In some cases this is just to make the image clearer for the radiologist; in others it’s to identify breast density, or to point out likely areas of interest in the study for the radiologist to review. Whatever the reason for the AI, the models doing that work are increasingly hosted by cloud providers, just like the IoT analysis engines. This means that moving those bulky studies out to an external model for review can create additional demands on hospital or clinic bandwidth.
Real-time collaboration and remote-assisted therapies are another burden on the “ordinary” bandwidth required for an organization. Depending on the volume of surgeries or diagnostic procedures being performed with participants not actually in the hospital, required overall bandwidth can vary greatly. For many such systems, use is highly episodic: perhaps they need that bandwidth for just 20 contiguous minutes, seven times per day. Setting the overall network capacity to accommodate that workload, twenty-four hours per day, seven days per week, can mean paying top dollar every day for a need you have less than 10% of the time.
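The arithmetic behind that figure is straightforward: seven 20-minute sessions is 140 minutes out of a 1,440-minute day.

```python
# Back-of-the-envelope check on the "less than 10%" figure:
# seven 20-minute sessions per day, out of 24 hours.
minutes_needed = 20 * 7       # 140 minutes of peak demand per day
minutes_per_day = 24 * 60     # 1,440 minutes in a day
fraction = minutes_needed / minutes_per_day
print(f"{fraction:.1%}")      # 9.7%
```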
The temptation for many management teams is to simply look at average consumption and fix capacity at that average usage. This design approach is inherently vulnerable to overwhelming the network at times of peak demand. A network that cannot support the volume of traffic being generated simply appears broken. Things don’t work; errors are generated by the applications starved for access, and the help desk gets a flood of calls.
Hospitals and clinics must have enough bandwidth; this is healthcare. If you don’t have sufficient bandwidth to support the remote management of the MRI system, the Radiology Tech can’t take the pictures when the patient is there to be scanned. Delays in that imaging can turn into delays in treatment. If you don’t have sufficient bandwidth, you may be unable to deliver the radiotherapy a cancer patient requires. If you don’t have sufficient bandwidth, you may not know that Dolorus Jones has just fallen out of her bed, down the hall. There are very few occasions in which limiting IT resources to average activity makes sense.
While hospitals and clinics must have sufficient bandwidth to support all the functions required, few budgets are unlimited. Nobody wants to pay a 24-hour rate for something they actively use only 3 hours of the day.
So some organizations have turned to “burstable” bandwidth, or “bandwidth-on-demand,” solutions that give them a baseline cost smaller than their peak need while still providing access to additional bandwidth when demand is highest. This is an attractive compromise, but burst or on-demand capacities are not typically guaranteed. They are ordinarily sold to multiple organizations in a given geography, with the commitment that some minimum capacity will always be available and that, when the network provider’s own capacity allows, higher or much higher capacities may be tapped. Like some home Internet connections, the clinic or hospital’s bandwidth above that minimum might be limited by some other organization simultaneously trying to consume on-demand services from the same network provider.
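A rough cost model can show why the compromise is attractive. Every price and rate below is a hypothetical, made-up figure; actual carrier pricing varies widely.

```python
# Hypothetical cost comparison: provisioning for peak around the clock
# versus a committed baseline plus metered burst. All prices below are
# invented for illustration only.

PEAK_MBPS = 1000       # capacity needed at the busiest moments
BASELINE_MBPS = 300    # committed baseline on the burstable plan

FIXED_PRICE_PER_MBPS = 2.00        # $/Mbps/month, flat commitment (assumed)
BASE_PRICE_PER_MBPS = 2.00         # $/Mbps/month for the committed baseline (assumed)
BURST_PRICE_PER_MBPS_HOUR = 0.01   # $/Mbps/hour for on-demand overage (assumed)

burst_hours_per_month = 2 * 30            # say, two hours of bursting per day
burst_mbps = PEAK_MBPS - BASELINE_MBPS    # 700 Mbps of on-demand capacity

fixed_cost = PEAK_MBPS * FIXED_PRICE_PER_MBPS
burstable_cost = (BASELINE_MBPS * BASE_PRICE_PER_MBPS
                  + burst_mbps * burst_hours_per_month * BURST_PRICE_PER_MBPS_HOUR)

print(fixed_cost)      # 2000.0 -- pay for peak all month
print(burstable_cost)  # 1020.0 -- pay for peak only when it happens
```

Under these made-up numbers the burstable plan costs roughly half as much; the trade, as noted above, is that the burst capacity is not guaranteed.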
Because this is healthcare, and the stakes are high, many organizations will reserve the bandwidth they require at peak utilization as a risk-mitigation measure. This can be costly, especially when factoring in redundancies, but it’s the best way to guarantee performance when it’s needed. As a sole strategy, though, it can be difficult to maintain, since new technologies are constantly funneling into most organizations. Understanding “peak demand” and resizing connections to accommodate that peak can be an almost constant exercise, whereas organizational budgeting typically occurs annually.
Quality of Service (QoS) features in a network underpin another strategy organizations employ to manage the differing needs of various systems in their environment. QoS features allow a network to give preferential treatment to one type of traffic over another. Leveraging QoS techniques, a hospital or clinic might ensure the equipment assisting surgery is guaranteed network capacity in a way that other systems are not. QoS features can ensure, for example, that the radiotherapy system can always use up to 50 Megabits per second (Mbps) of capacity over a provider’s Gigabit connection—no matter what else might be going on. When traffic gets busy, data for other systems might wait, but at least that minimum 50 Mbps allocated to the radiotherapy system will always be available.
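As one illustration of the idea, here is how such a guarantee might be expressed on a Linux edge router using the `tc` utility's Hierarchical Token Bucket (HTB) queueing. The interface name and IP address are placeholders, and a real deployment would more often configure QoS on dedicated network hardware; treat this as a sketch of the mechanism, not a recommended configuration.

```shell
DEV=eth0           # WAN-facing interface (placeholder)
RT_HOST=10.0.5.20  # radiotherapy system's address (placeholder)

# Root HTB qdisc; unclassified traffic falls into class 1:20
tc qdisc add dev $DEV root handle 1: htb default 20

# Parent class caps everything at the 1 Gbps link rate
tc class add dev $DEV parent 1: classid 1:1 htb rate 1gbit

# Radiotherapy class: guaranteed 50 Mbps, may borrow up to the full link
tc class add dev $DEV parent 1:1 classid 1:10 htb rate 50mbit ceil 1gbit

# Everything else: guaranteed the remainder, may also borrow when idle
tc class add dev $DEV parent 1:1 classid 1:20 htb rate 950mbit ceil 1gbit

# Steer the radiotherapy system's outbound traffic into the guaranteed class
tc filter add dev $DEV parent 1: protocol ip u32 \
    match ip src $RT_HOST flowid 1:10
```

The `rate` values are the guaranteed minimums; `ceil` lets either class borrow unused capacity, so the guarantee costs nothing when the link is quiet.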
The combination of reserving minimum bandwidths for vital systems and employing capacity-on-demand strategies to protect overall performance in peak-traffic circumstances represents a powerfully balanced approach between risk-averse and cost-conscious architectural designs for healthcare networks. Wiring them together with sufficient forethought, monitoring, and automation can allow such a strategy to provide resiliency measures as well, supporting disaster recovery and business continuity planning.
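As a sketch of how the monitoring-and-automation piece might fit together, the following Python fragment watches recent link utilization and decides when to request on-demand capacity. The capacity, threshold, and window values are assumptions, and the decision function is a stand-in for logic that would call a provider's actual provisioning API.

```python
# Sketch of automated burst triggering: request on-demand capacity only
# after utilization stays above a threshold for several consecutive
# samples. All constants are illustrative assumptions.
from collections import deque

CAPACITY_MBPS = 1000    # committed baseline capacity (assumed)
BURST_THRESHOLD = 0.80  # consider bursting above 80% utilization
WINDOW = 5              # consecutive samples required before acting

samples = deque(maxlen=WINDOW)

def should_burst(utilization_mbps):
    """Record a utilization sample; return True once the rolling window
    is full and every sample in it exceeds the burst threshold."""
    samples.append(utilization_mbps / CAPACITY_MBPS)
    return len(samples) == WINDOW and min(samples) > BURST_THRESHOLD

# Feed in one-reading-per-minute samples from a busy period:
readings = [620, 810, 850, 870, 910, 930]
decisions = [should_burst(r) for r in readings]
print(decisions)  # [False, False, False, False, False, True]
```

Requiring a full window of high readings avoids paying for burst capacity on a momentary spike, while still reacting within minutes to sustained demand.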