Trust me, I’m a medically trained device
On the morning of 27 June 2017, Stanford Medicine and the Duke University School of Medicine welcomed a handful of volunteers into their offices. Every participant had to sign a consent form before collecting two devices that would monitor their biomedical stats (heart rate, activity, sleep patterns, etc.) for the next few years. The project is a four-year collaboration with Verily Life Sciences, a subsidiary of Alphabet, in which researchers want to develop a platform that combines the output from the wearable sensors of 10,000 participants with genomic data and other medical information. Over time, they hope to predict diseases, identify new symptoms, and tailor treatment to individual patients. Leveraging Google's computing infrastructure and its Cloud Platform, which meets the required standards for data safety, privacy and security, the team is keen to minimize cyber risk. In the case of wearable technology, the cyber risk is that patient data could become public, or that medical history could end up deleted or tampered with, disrupting treatment or years of work.
Whilst the latter outcome is very damaging, there are often ways to address such failures, either by restoring the most recent backups or by rerunning tests on the patients to identify a corrective action plan. Remediation can still come at a high financial and reputational cost to the healthcare provider, as we saw this year with the WannaCry ransomware, which destabilized the National Health Service (NHS) in England and Scotland.
In some cases, though, remediation may not be possible at all. This is particularly relevant for implanted microchips, where the threats are time-critical. Think of a wirelessly controlled microchip implanted under the skin of an osteoporosis patient, administering drugs on a periodic basis. A malicious attacker could induce an overdose that could be fatal. For Vice President Dick Cheney, these threats were real. Cheney famously suffered five heart attacks and had a wireless pacemaker implanted as a result. In 2013, he revealed that the wireless functionality of his pacemaker had been disabled to thwart a possible assassination attempt by hacking.
Whilst, five years ago, such hacks were considered likely only as targeted attacks on high-profile individuals, today the concern extends to public safety more widely.
Indeed, earlier this year, the US Food and Drug Administration announced that cybersecurity vulnerabilities had been identified in St. Jude Medical's cardiac devices. Interestingly, the issue was not with the implanted device itself but with the in-home transmitter connected to it, which shared data with the medical team to keep them updated on the patient's heart condition. So what was the issue? The transmitter could also send commands to the pacemaker, enabling a bad actor to interfere. News of the vulnerabilities sent the company's stock price down 4.9%.
Similar concerns were being raised in the UK. In 2016, researchers at the University of Oxford and University College London studied a neurostimulator, an IoT device that helps regulate nerve signals and treat symptoms of Parkinson's disease or Tourette's syndrome. Once again, the device was found to be vulnerable: a hacker could switch it off or wear down its battery, severely affecting behavior and cognition, and even causing physical pain.
So what needs to be done?
- Understand what data you have: Companies and organizations need a centralized record of the data they hold, a data dictionary to accompany it, and a way to establish clear lineage (a minimal sketch of such an inventory follows after this list). Establishing data governance is key. Just this month, the Information Commissioner's Office (ICO) warned the NHS over an agreement struck with Google DeepMind to share 1.6 million patient records for a kidney-injury app, ruling that the deal had failed to comply with data protection law.
- Acknowledge the risk and act: Medical bodies handling patient data need to have their security independently assessed at least once a year to ensure they can withstand an attack. Where the infrastructure is deemed inadequate, a solution needs to be defined and funded to enable timely remediation. That could mean adopting robust identity verification systems: centrally managed identity platforms with strong encryption that can host large amounts of data.
- Provide training: All functions involved with patient data (administration, executives and IT) require training. A recent report from the Information Systems Audit and Control Association (ISACA) indicated that the annual training budget for healthcare bodies is currently $2,500 or less, with one in four bodies receiving less than $1,000. Considering the risks involved in healthcare IoT security, this is clearly inadequate.
- Collaborate: Sharing updates and best practice within the industry will strengthen the response to threats and enable cheaper, more robust joint programs to combat cyber threats.
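To make the first recommendation a little more concrete, below is a minimal sketch, in Python, of what one entry in such a centralized data inventory could look like. The `DataAsset` structure, its field names and the example assets are illustrative assumptions rather than any established standard.

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    """One entry in a hypothetical centralized data inventory."""
    name: str                  # asset identifier, e.g. "wearable_heart_rate"
    description: str           # data-dictionary definition in plain language
    owner: str                 # accountable team or role
    source_system: str         # system where the data originates
    derived_from: list = field(default_factory=list)  # lineage: upstream assets
    contains_phi: bool = True  # flags patient-identifiable data

# Illustrative catalogue: a raw sensor feed and a dataset derived from it
inventory = [
    DataAsset(
        name="wearable_heart_rate",
        description="Per-minute heart-rate readings from study wearables",
        owner="Clinical Data Team",
        source_system="wearable-ingest-service",
    ),
    DataAsset(
        name="cardiac_risk_features",
        description="Aggregated features used for cardiac risk modelling",
        owner="Research Analytics",
        source_system="analytics-pipeline",
        derived_from=["wearable_heart_rate"],  # clear lineage back to the raw feed
    ),
]

# A simple governance check: every asset holding patient data must have an owner
for asset in inventory:
    assert not asset.contains_phi or asset.owner, f"{asset.name} has no owner"
```

Even a record this simple makes it possible to answer the kind of question raised in the DeepMind case: what patient data is held, where it came from, and who is accountable for it.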
Bibliography
Harm, K. (2012, Feb 16). Microchip Implant Gives Medication On Command. Retrieved from Scientific American: https://www.scientificamerican.com/article/microchip-implant-medication/
Klein, A. (2017, Jan 21). Alert: you’re about to catch something. New Scientist, p. 16.
Maxmen, A. (2017, Jul 6). Giant Health Studies Try to Tap Wearable Electronics. Nature, p. 13.
Moffatt, S. (2017, Jul 24). Life-or-death decisions: How do we safeguard healthcare IoT? Retrieved from Tech Target: http://internetofthingsagenda.techtarget.com/blog/IoT-Agenda/Life-or-death-decisions-How-do-we-safeguard-healthcare-IoT
The biggest healthcare breaches of 2017 (so far). (2017, Jul 20). Retrieved from Healthcare IT News: http://www.healthcareitnews.com/slideshow/biggest-healthcare-breaches-2017-so-far?page=13
Trust me, I’m an algorithm. (2017, Jul 15). New Scientist, p. 5.
Vaas, L. (2013, Oct 22). Doctors disabled wireless in Dick Cheney’s pacemaker to thwart hacking. Retrieved from Naked Security: https://nakedsecurity.sophos.com/2013/10/22/doctors-disabled-wireless-in-dick-cheneys-pacemaker-to-thwart-hacking/
Verily Launches Landmark Study with Duke & Stanford as First Initiative of Project Baseline. (2017, Apr 19). Retrieved from Verily.com: https://static.googleusercontent.com/media/verily.com/en//press/articles/BaselinePressRelease.pdf
Comments
Dear Lana, thank you for your interesting post. I think data security in the healthcare sector is a very important topic that deserves more attention. Not only healthcare providers and businesses in the industry, but also their suppliers and users (patients), should be more aware of how valuable healthcare information is. The data should be handled as cautiously as financial/banking data. KPMG concludes in a study that the healthcare industry is lagging behind in securing its data; the researchers compare it, for example, to the financial sector, where data security has been a high priority for many years. As you described, with the IoT increasingly finding its way into healthcare, the problem becomes even more pressing.
KPMG, “Healthcare and Cyber Security: Increasing Threats Require Increased Capabilities”, https://advisory.kpmg.us/content/dam/kpmg-advisory/PDFs/ManagementConsulting/2015/KPMG-2015-Cyber-Healthcare-Survey.pdf
Hi Lana, great article. Thanks for sharing your thoughts.
I also mentioned the St. Jude vulnerability in my Week 1 paper. It’s interesting to reiterate the importance of security and vulnerability testing of medical devices. I was researching this topic as well. When I was reading about the history of computers in medical devices, I stumbled onto the Therac-25. While it was not an implant, its story shows how anything computer-controlled can be fatal without the appropriate testing.
The post-mortem paper, “An Investigation of the Therac-25 Accidents”, was written by Nancy G. Leveson and Clark S. Turner, University of California, Irvine (1993).
The Therac-25 was a radiation therapy machine produced by Atomic Energy of Canada Limited (AECL) in 1982 after the Therac-6 and Therac-20 units. It was involved in at least six accidents between 1985 and 1987, in which patients were given massive overdoses of radiation. (Baase, 2008)
This was one of the first examples of a computerized medical device inadvertently harming, and in fact killing, human beings. While the device was not implanted, it was foundational to the evolution of computerized medical devices.
Personally, I don’t have any objections to implanted medical devices, provided the device only receives input from the body’s own systems. The device should be hardened to the point where it cannot transmit anything into the body that could disrupt one of the major systems, e.g. by transmitting signals or modifying chemicals in the body.
It should only be able to transmit using encryption built into the device firmware (a rough sketch of what that could look like follows below). Rather than implanting the device, it might be better for it to interface with a probe under the skin (using a female locking connector) and then "plug into" the body, similar to what you see in The Matrix. This would ensure the device can be removed in the event that it is not functioning correctly.
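As an illustration of that point about encrypted transmission, here is a minimal sketch in Python using the `cryptography` library as a stand-in for device firmware. The pre-shared key, the `send_reading`/`receive_reading` helpers and the pacemaker identifier are assumptions made for the sake of the example; a real implant would rely on constrained embedded crypto libraries and a proper key-management scheme.

```python
# Minimal sketch of authenticated, encrypted telemetry between an implant
# and its home transmitter, using AES-GCM with a hypothetical pre-shared key.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # stand-in for a key provisioned at manufacture
device = AESGCM(key)

def send_reading(reading: bytes, device_id: bytes):
    """Encrypt and authenticate a telemetry reading before transmission."""
    nonce = os.urandom(12)                  # 96-bit nonce; never reuse with the same key
    ciphertext = device.encrypt(nonce, reading, device_id)  # device_id bound as associated data
    return nonce, ciphertext

def receive_reading(nonce: bytes, ciphertext: bytes, device_id: bytes) -> bytes:
    """Decrypt on the home-transmitter side; any tampering raises InvalidTag."""
    return device.decrypt(nonce, ciphertext, device_id)

# Hypothetical reading from a pacemaker, bound to its identifier
nonce, ct = send_reading(b'{"bpm": 62}', b"pacemaker-001")
print(receive_reading(nonce, ct, b"pacemaker-001"))  # b'{"bpm": 62}'
```

Because the device identifier is bound as associated data, a reading replayed or re-addressed from another device fails authentication, which is exactly the kind of tampering the St. Jude transmitter issue highlighted.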
Bibliography
Leveson, N. G., & Turner, C. S. (1993). An Investigation of the Therac-25 Accidents. Retrieved from http://courses.cs.vt.edu/professionalism/Therac_25/Therac_1.html
Baase, S. (2008). A Gift of Fire. Pearson Prentice Hall.