By Robbie Hughes, Founder and CEO, Lumeon

As we continue to take our customers live with new solutions that help them deal with the current crisis, we have been reflecting on what the new normal will look like for health systems and what the needs of our customers are likely to be.

Below I’ve put together some thoughts on key ingredients for success, from the point of view of the health system. These thoughts will appear as a series of articles, this being the first. It’s worth noting that this isn’t a complete list, but it should provide food for thought. My hope is that the market will start to innovate and provide some of these capabilities in short order.

Mandatory disclaimer: Please note that I am neither a regulatory expert nor a scientist, medic, epidemiologist or in fact anything other than an engineer who wants to see this problem fixed, so please take the opinions below as just that and nothing more.

Central reporting registry and testing 

The first, and possibly the most impactful, piece that needs to be in place as we move forward is a central reporting registry for COVID-19, together with a testing regime that can keep it up to date. Whilst I believe there might be some value in a ‘passport’ that an individual can carry, without a central way of authenticating its contents it will be worse than useless. The harder piece, however, is the testing that needs to happen for the registry to be useful, which I will get to shortly.

The decentralized way in which healthcare data has been shared to date is simply not going to cut it for this purpose: centralized, real-time reporting will be needed to track possible outbreaks, to say nothing of ensuring that patients are not entering contamination-free zones unchecked, and this is going to take more than an app.

The registry needs to be permanently up to date to have utility, and the ability to securely authenticate a patient in this registry – with their most recent results available in near real time – will be vital for care, as well as for the proper functioning of society. I would see an organization like the Joint Commission as well placed to sponsor some of this capability (to the extent that it isn’t an international problem requiring an international solution).
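To make the idea of an authenticated, always-current registry concrete, here is a minimal sketch of what a lookup might look like. Everything here – the record fields, the opaque patient token, the function name – is an illustrative assumption, not a real standard or API.

```python
from __future__ import annotations

from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical registry record; field names are illustrative assumptions.
@dataclass(frozen=True)
class RegistryRecord:
    patient_token: str      # opaque, securely issued identifier, not a raw patient ID
    result: str             # "positive" / "negative" / "inconclusive"
    collected_at: datetime  # when the sample was taken
    lab_id: str             # which licensed lab processed it

def most_recent(records: list[RegistryRecord], patient_token: str) -> RegistryRecord | None:
    """Return a patient's latest result, or None if they are not in the registry."""
    matches = [r for r in records if r.patient_token == patient_token]
    return max(matches, key=lambda r: r.collected_at, default=None)
```

The point of the sketch is simply that "most recent result" is the unit of value here: a passport that cannot be checked against something like this, in near real time, tells you nothing.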

The private sector may be well placed to provide some of this capability (shouldn’t all this data end up in the EMR? Is it already there?), but it’s likely that the pipes required to make this information flow where it needs to, at the pace it needs to, don’t exist yet – perhaps due to privacy concerns, perhaps simply due to fragmented capability spread across differing versions of a core infrastructure that is challenging to upgrade.

Beyond the registry, which should be a relatively straightforward piece of software to build once ownership and scope are agreed upon, my bigger concern is testing. In particular, I believe that linking and normalizing lab data from the different tests that are starting to come online is going to be problematic. Is a negative test always a negative test? How can we trust that the processes by which the sample was collected were perfect and that there was no cross-contamination? In many cases, early testing kits distributed in both the US and (notably) the UK have proven unreliable, and effective only when the viral load is so high that the patient is probably already on a ventilator. A negative result from this kind of test is unlikely to be of the same quality as one from a more sensitive test, for example.
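The normalization problem above can be illustrated with a tiny sketch: different labs report results in different vocabularies, so a registry needs a mapping layer before results are comparable at all. All lab identifiers and result codes below are invented for illustration.

```python
# Hypothetical lab-to-registry mapping; every code here is invented.
NORMALIZATION_MAP = {
    ("lab_a", "DETECTED"): "positive",
    ("lab_a", "NOT DETECTED"): "negative",
    ("lab_b", "POS"): "positive",
    ("lab_b", "NEG"): "negative",
}

def normalize(lab_id: str, raw_result: str) -> str:
    """Map a lab-specific result string to a registry-wide value."""
    try:
        return NORMALIZATION_MAP[(lab_id, raw_result.strip().upper())]
    except KeyError:
        return "unmapped"  # flag for manual review rather than guessing
```

Note that even this toy version flattens away exactly the thing that matters: two labs may both report “negative” while their tests have very different sensitivities, which is why a binary result alone is not enough.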

Challenges to be solved

All of this speaks to several challenges that are not easy to solve quickly and include, but are not limited to:

  • Regularization and standardization of testing processes and the tests associated with each
  • Controls around the distribution of, and access to, tests to ensure that they can be traced in the case of problematic or contaminated batches
  • Licensing for labs to perform these tests in an environment that is at substantial risk of contamination
  • Distribution of validated results with the ability to share not only a binary result, but also the validity limitations of that result, with some sort of accompanying chain of trust – to ensure that the result can be invalidated if upstream problems are found retrospectively
  • Regular validation and testing of both the tests and those administering them
  • Retesting protocols that can be linked to the chain of trust, to identify who needs to be tested and when, which may of course be a function of the type of test that was carried out
  • And, my personal favorite – how to align these standards and protocols internationally, so that a patient who has tested negative in one jurisdiction can be trusted to be negative in another without the need for retesting
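The chain-of-trust idea running through the list above can be sketched in a few lines: if every result carries a reference to the batch it came from, a batch that is retrospectively found to be contaminated can invalidate its downstream results automatically. The class and method names are assumptions for illustration only.

```python
from __future__ import annotations

from dataclasses import dataclass

@dataclass
class TestResult:
    result_id: str
    batch_id: str   # links the result back to the test kit batch it came from
    value: str      # "positive" / "negative"
    valid: bool = True

class TrustChain:
    """Hypothetical sketch: results stay valid only while their batch is trusted."""

    def __init__(self) -> None:
        self.results: dict[str, TestResult] = {}
        self.recalled_batches: set[str] = set()

    def record(self, r: TestResult) -> None:
        self.results[r.result_id] = r

    def recall_batch(self, batch_id: str) -> int:
        """Flag a contaminated or faulty batch and invalidate every linked result."""
        self.recalled_batches.add(batch_id)
        affected = [r for r in self.results.values() if r.batch_id == batch_id]
        for r in affected:
            r.valid = False
        return len(affected)
```

A real system would of course need cryptographic signing, lab and operator identities, and notification downstream, but the core design choice is the same: a result is never just a value, it is a value plus the provenance that makes it trustworthy.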

People far smarter than I am are already planning for these realities, so I lay these out principally as a baseline from which to consider some of the downstream problems they will create for healthcare providers.

The good news is that, over time, I would hope much of this will go away as the barrier to accurate testing comes down (e.g. mail-in home testing kits) and the possibility of herd immunity and/or a functional vaccine materializes. In the meantime, we are going to have a challenging time operating in a world where we don’t really understand what it means to have had the virus, or whether a positive or negative test result can be trusted with 100% accuracy.

For my next piece, I’m going to focus on screening and how it might go some way to mitigating the problems described above. In the meantime, feel free to send me your views, or visit our Coronavirus webpage to see what we’re doing to help our customers address the challenges of COVID-19 right now.

Keep safe everyone.