How poor UX in medical devices becomes a security risk
Written by 3Point1 | Electronic Product Design Consultancy
When a medical device becomes compromised, the post-mortem almost always focuses on the technical layer: firmware vulnerabilities, unpatched software and weak encryption. These are real problems that deserve serious attention, but some of the most consequential security failures in connected medical devices don’t start in the code. They start in the interface.
The relationship between user experience (UX) and cybersecurity is rarely discussed in device development, and that’s a problem. When designers treat usability and security as separate concerns, human behaviour fills the gap between them. And human behaviour, under pressure in a clinical environment, is predictably unpredictable.
Why UX is a cybersecurity issue, not just a design concern
Let’s start with a scenario most device teams will recognise. A nurse in an ICU needs to access a connected infusion pump using a unique login, which adds 30 to 45 seconds to the task. In a ward where patients require close monitoring and frequent assessment, a nurse might interact with that device dozens of times per shift, compounding those seconds into real clinical friction.
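As a rough back-of-envelope illustration (the login delay, interaction count and staffing level below are assumptions, not measured figures):

```python
# Back-of-envelope estimate of cumulative login friction. All figures are
# illustrative assumptions, not measured data.
login_delay_s = 40            # mid-range of the 30-45 second login overhead
interactions_per_shift = 30   # "dozens" of pump interactions per nurse
nurses_per_ward = 8           # assumed staffing level

per_nurse_minutes = login_delay_s * interactions_per_shift / 60
ward_hours = per_nurse_minutes * nurses_per_ward / 60

print(f"~{per_nurse_minutes:.0f} min of login overhead per nurse per shift")
print(f"~{ward_hours:.1f} nurse-hours per ward per shift spent authenticating")
```

Under those assumptions, a single ward loses well over two nurse-hours per shift to authentication alone. That is the pressure that drives the workarounds below.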
The result is shared credentials, passwords written on tape stuck to the back of the device and auto-lock settings disabled by whoever worked out how to do it. These aren’t malicious choices. They’re rational responses to a poorly designed workflow, and each one introduces a genuine security exposure.
Clinicians can spend significant portions of their shifts navigating access issues. When systems become obstacles, staff adapt: shared logins destroy audit trails, and weak passwords reused across multiple systems give attackers a wider attack surface.
This is a design problem rather than the result of poor training, a distinction that matters enormously for how manufacturers approach UX. The security posture of a medical device partly depends on its technical architecture. But it is also determined by the gap between how users are supposed to behave and how they actually behave when the system makes the right behaviour too inconvenient.
Common UX decisions that introduce cyber risk
Several design choices appear reasonable in isolation but compound into significant vulnerabilities once the device is in clinical use.
Authentication that fights the workflow
Staff routinely bypass login sequences that are too long, too frequent or incompatible with clinical realities such as gloved hands and time pressure. When authentication impedes care delivery, users find ways to remove the obstacle, and the security it provides disappears with it.
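One pattern that reconciles security with the workflow is a bounded re-authentication grace window: one full login, then fast re-entry (a badge tap, for example) for a limited period. A minimal sketch, assuming a 120-second window and placeholder credential checks:

```python
import time

GRACE_WINDOW_S = 120  # assumed: fast re-entry allowed within two minutes


def verify_credentials(user_id: str, password: str) -> bool:
    # Placeholder: a real device would check a secure credential store.
    return password == "correct-horse-battery-staple"


def verify_badge(user_id: str, badge_token: str) -> bool:
    # Placeholder: a real device would validate an RFID/NFC badge token.
    return badge_token == "valid-badge-token"


class SessionManager:
    """One full login, then low-friction badge re-entry inside a grace window."""

    def __init__(self) -> None:
        self._last_full_auth: dict[str, float] = {}

    def full_login(self, user_id: str, password: str) -> bool:
        if not verify_credentials(user_id, password):
            return False
        self._last_full_auth[user_id] = time.monotonic()
        return True

    def quick_reentry(self, user_id: str, badge_token: str) -> bool:
        last = self._last_full_auth.get(user_id)
        if last is None or time.monotonic() - last > GRACE_WINDOW_S:
            return False  # window expired: a full login is required again
        return verify_badge(user_id, badge_token)
```

The design choice is the trade itself: a slightly longer exposure window in exchange for authentication that clinicians will actually keep using.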
Authentication failures continue to make up a significant share of disclosed medical device vulnerabilities. Along with code defects, they are the most persistent root causes of security failures across a decade of ICS-CERT advisories.
Alarm designs that train users to ignore signals
Poorly calibrated alert hierarchies create alarm fatigue. Clinicians bombarded with low-priority warnings stop responding to them with urgency. This pattern extends to security alerts in ways that device teams rarely anticipate; if the interface cries wolf on routine events, users dismiss the alerts that matter with the same reflexive indifference they’ve developed for everything else.
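One defence is a hard priority hierarchy enforced in the interface layer, so routine events can never present the same way as security-critical ones. A minimal sketch; the tier names and presentation rules are illustrative assumptions:

```python
from enum import IntEnum


class AlertPriority(IntEnum):
    INFO = 0       # routine events: logged, never interrupt the clinician
    ADVISORY = 1   # passive banner on the status bar, no sound
    CRITICAL = 2   # clinical or security events: modal alert, distinct tone


def log(msg: str) -> None:
    print(f"[log]    {msg}")


def show_banner(msg: str) -> None:
    print(f"[banner] {msg}")


def show_modal(msg: str) -> None:
    print(f"[MODAL]  {msg}")


def present_alert(priority: AlertPriority, message: str) -> None:
    # Keeping the CRITICAL channel scarce is what keeps it meaningful.
    if priority is AlertPriority.CRITICAL:
        show_modal(message)
    elif priority is AlertPriority.ADVISORY:
        show_banner(message)
    else:
        log(message)
```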
Update prompts that get declined
Firmware updates and security patches require downtime. If the device UX makes update processes unclear, clinical staff will decline them and assume someone else will handle it later. Devices may then run on outdated software for months or even years.
In 2024, over 70% of infusion pumps across surveyed hospitals were still running unpatched software. This is not due to negligence. It’s the result of update workflows that were not designed to be easily completed in a clinical environment.
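One design response is to replace a bare “update now or decline” dialog with bounded deferral: staff pick a convenient slot, and if they never do, the device schedules itself before a hard deadline. A sketch, with the one-week deadline and 03:00 quiet window assumed for illustration:

```python
from datetime import datetime, timedelta

MAX_DEFERRAL = timedelta(days=7)  # assumed policy: every patch lands within a week


class UpdatePrompt:
    """Staff can defer a patch to a convenient time, but never decline outright."""

    def __init__(self, patch_id: str) -> None:
        self.patch_id = patch_id
        self.deadline = datetime.now() + MAX_DEFERRAL

    def defer_until(self, requested: datetime) -> datetime:
        # Honour the clinician's preferred slot, capped at the hard deadline.
        return min(requested, self.deadline)

    def fallback_slot(self) -> datetime:
        # If nobody picks a slot, install in an assumed quiet window (03:00)
        # on the deadline day, so the device never ages out of patching.
        return self.deadline.replace(hour=3, minute=0, second=0, microsecond=0)
```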
Default credentials left in place
People do not change factory-default usernames and passwords when doing so takes too much time or effort. If the UX doesn’t require credential configuration as the first step of setup, most operators won’t complete it, and the device will remain on its default settings for its entire service life.
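The fix is structural: make credential replacement a gate the setup wizard cannot skip. A minimal sketch, with the default pairs and password policy assumed for illustration:

```python
FACTORY_DEFAULTS = {("admin", "admin"), ("service", "0000")}  # illustrative pairs


def complete_first_boot(username: str, password: str) -> bool:
    """The setup wizard refuses to advance until factory defaults are replaced."""
    if (username, password) in FACTORY_DEFAULTS:
        return False  # still on defaults: block, don't warn-and-continue
    if len(password) < 12:
        return False  # assumed minimum policy, enforced at setup, not in a manual
    store_credentials(username, password)
    return True


def store_credentials(username: str, password: str) -> None:
    # Placeholder: real firmware would hash the password and persist it
    # to tamper-resistant storage.
    pass
```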
Shared accounts built into the workflow
Many devices are designed with a single departmental account, and role-based access, while technically possible, is never set up because the process is too complex. The device then operates with a single point of credential failure for every user on every shift, indefinitely.
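Role-based access only gets configured if configuring it is nearly effortless. One approach is to ship a small set of pre-defined clinical roles rather than a free-form permission matrix; a sketch, with the role names and permissions assumed:

```python
# A handful of pre-defined roles keeps configuration to one choice per user,
# instead of a permission matrix nobody fills in. Names are illustrative.
ROLES = {
    "nurse":      {"start_infusion", "silence_alarm"},
    "pharmacist": {"edit_drug_library"},
    "biomed":     {"install_update", "view_audit_log", "change_settings"},
}


def can(role: str, action: str) -> bool:
    return action in ROLES.get(role, set())


assert can("nurse", "start_infusion")
assert not can("nurse", "install_update")  # privileged actions stay attributable
```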
In each of these cases, the vulnerability is not a firmware bug or a network misconfiguration; it’s a design decision that didn’t account for how real people use devices under real conditions. No amount of penetration testing against the device in isolation will bring these issues to light, because the weakness only appears once real users are in the loop.
The commercial and regulatory consequences
The commercial consequences of getting UX-driven security wrong are growing, and the regulatory environment is actively closing off the escape routes that manufacturers previously relied on.
The Food and Drug Administration’s (FDA) updated 2025 cybersecurity guidance establishes cybersecurity as an independent regulatory standard, separate from general safety and effectiveness. Failure to meet cybersecurity requirements can result in the denial of market authorisation. That’s a material shift in the regulatory landscape: where security was once an element of broader safety documentation, it is now a gatekeeping criterion.
Under the 2025 guidance, the FDA may determine a device is not substantially equivalent if it has increased cyber risks compared to a predicate. Medical device cybersecurity cannot be implemented in isolation. It is a core technical quality and regulatory requirement that must be woven into every phase of the product life cycle.
In the UK, the Medicines and Healthcare products Regulatory Agency is heading in the same direction. Cybersecurity guidance for Software as a Medical Device is now a formal regulatory priority, and post-market surveillance requirements that came into force in 2025 are placing greater scrutiny on how manufacturers monitor and respond to security issues in the field.
On the procurement side, the shift is clear:
- 83% of healthcare organisations now integrate cybersecurity standards directly into medical device requests for proposals
- NHS procurement teams are asking harder questions than they did three years ago. A device that cannot demonstrate security-by-design will no longer win the contract, regardless of its clinical merits.
The downstream risk of security failure is product recall. Medical device recalls rose 8.6% in 2024, and the reputational exposure of a security-related withdrawal carries costs well beyond the immediate financial impact.
What leadership should be asking
If you are a founder, chief technology officer or product lead at a medical device company, these are the questions worth raising with your development team before you reach verification and validation:
Has the authentication flow been tested against clinical behaviour, not just technical specification?
A login system that passes a lab test is not the same as one that survives a busy shift in an A&E department. Usability testing under realistic conditions, with real clinical users, is the only reliable way to find out whether the security mechanisms you’ve built will be used in practice.
Are security features visible and intuitive, or buried in the admin layer?
If changing a default password, configuring role-based access or reviewing audit logs requires a qualified IT engineer and a PDF manual, most operators won’t do it. Security configuration needs to be part of the device’s core UX, not a footnote in technical documentation that most users will never read.
Have we designed for the update life cycle, not just the launch state?
The security profile of your device on day one is not the same as its security profile on day 730, and patch deployment needs to be fast, clear and low-friction enough to work in clinical environments. If the update process is unclear, it simply won’t happen.
Where does our product design team sit relative to our firmware and security teams?
This is the structural question that underpins all the others. The development of medical devices requires a multifaceted approach that addresses usability and security together. When these disciplines operate in distinct workstreams with separate deadlines and sign-off processes, the gaps create vulnerabilities.
What happens when a user does the wrong thing?
Good security design assumes the user will sometimes behave unexpectedly, and the device should be resilient to that. If your security model depends on every user doing the right thing every time, it isn’t really a security model at all.
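Resilience here usually means fail-secure defaults the device enforces itself. As one hedged illustration, consider an auto-lock timeout that users can shorten but never disable or extend past a hard ceiling (the 300-second value is an assumption):

```python
MAX_LOCK_TIMEOUT_S = 300  # assumed ceiling: the auto-lock can never be switched off


def effective_lock_timeout(user_setting_s: int | None) -> int:
    """Users may shorten the auto-lock timeout, but the device re-asserts
    the safeguard rather than trusting user-configured state."""
    if user_setting_s is None:  # someone found a way to "disable" auto-lock
        return MAX_LOCK_TIMEOUT_S
    return min(user_setting_s, MAX_LOCK_TIMEOUT_S)
```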
Secure medical devices are designed that way from day one
The core principle here is straightforward. A device is only as secure as the behaviour it induces in the people who use it. If the interface makes security hard, users will bypass it without a second thought.
This is not a problem that cybersecurity professionals or UX designers can solve alone. It requires product design, firmware engineering and security experts to work from the same brief and be aware of the same constraints from the first day of development.
At 3Point1, we bring medical device product design, electronics and firmware under one roof to avoid these issues. UX decisions that create security exposure almost always emerge from handoffs: a design team that didn’t communicate with the firmware team, or a security review that happened after the interface was already locked. Closing the gaps is a structural question as much as a technical one, and it’s one that needs to be answered during development, not at the compliance stage.
For the security layer to hold up in clinical practice, the organisations best placed to help are those that understand what the device does and how real people will use it under real pressure. Speak to an expert from Cyber Alchemy to learn how embedded security thinking, applied at the product level and integrated into the development process, closes the distance between the device you designed and the device your users actually operate.
The goal isn’t a device that is secure when it’s isolated from external networks. It’s a device on which behaving securely is simply the path of least resistance.
3Point1 is a UK-based electronic design consultancy specialising in product design, firmware development and printed circuit board design for regulated industries. This article was written in partnership with Cyber Alchemy, a cybersecurity consultancy helping medical technology companies build security into their products from day one.