According to the World Economic Forum, hospitals produce around 50 petabytes of data per year. With 6,039 hospitals in the US alone, that is a vast amount of data requiring secure storage – not only confidential patient medical records, but also the personal and financial information held in hospitals' operational systems.
This ever-increasing volume of patient data, and the growing risks associated with its loss, mean the stakes have never been higher for hospitals. The need to store and manage that data in a way that is sustainable, cost-effective and secure is ever-present.
Hospitals are sitting ducks. Research by Sophos highlighted that hospitals are more likely to be targeted by ransomware attacks, less likely to be able to prevent them, and less likely to back up their data. As a result, they are far more likely to foot the bill for eye-watering recovery costs – and it is not just the cost of a potential ransom payment itself that needs to be taken into account.
The far greater financial expense is the cost of downtime, network and device costs, and the hours spent on data and system restoration. Indeed, larger medical centers are often required to fork out millions for such remedies – even if the encrypted data can be recovered without paying a ransom.
In fact, the AAMC Research and Action Institute calculated that when the University of Vermont Medical Center was hit by a ransomware attack in 2020, it cost $50 million in lost revenue alone. Meanwhile, electronic health records (EHRs), payroll, and other critical applications experienced weeks of downtime.
Modern security threats require modern data storage strategies. With healthcare accounting for 79% of all reported breaches in 2020 (a 45% increase on the previous year), and the attacks themselves becoming more dangerous, the need to back up and protect patient data with modern storage solutions has never been more acute. Research by Sophos showed that, across all sectors, 57% of organizations whose data was encrypted were able to restore it from backups; in healthcare, that figure drops to just 44%. So even if a ransom is paid, hospitals with inadequate storage strategies are still unlikely to retrieve all of their data.
The role of immutable backups in protecting against ransomware. Not all backups are equal, however: immutable backups are the strongest risk-management play available. These are backup files that cannot be altered in any way, and can be deployed to servers immediately in the event of a ransomware attack or critical system failure that would otherwise bring about the loss of sensitive personal data and patient records. Modern object storage and immutable backups are therefore needed to manage these risks.
How immutable backups work. An immutable backup is a data backup that cannot be deleted or modified for a set period of time, typically held on-premises, at an off-site storage facility, or in the cloud. It differs from data replication, where backups are continuously overwritten – meaning healthy data can be overwritten with encrypted files in the event of a ransomware attack. Immutable object storage prevents attackers from encrypting or altering the backup copies, and therefore offers a much higher level of data protection.
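To make the write-once-read-many (WORM) behavior concrete, here is a minimal Python sketch of the semantics described above. It is illustrative only – the class and method names are assumptions, and real immutable backups enforce retention at the storage layer (for example via S3-style Object Lock), not in application code:

```python
import time


class ImmutableBackupStore:
    """Sketch of write-once-read-many (WORM) backup semantics."""

    def __init__(self):
        # key -> (payload, epoch timestamp before which deletion is refused)
        self._objects = {}

    def put(self, key, data, retention_days):
        # Write-once: an existing object can never be overwritten, so
        # ransomware cannot replace a backup with an encrypted copy.
        if key in self._objects:
            raise PermissionError(f"{key!r} is immutable and cannot be overwritten")
        self._objects[key] = (bytes(data), time.time() + retention_days * 86400)

    def get(self, key):
        # Reads are always allowed.
        return self._objects[key][0]

    def delete(self, key):
        # Deletion is refused until the retention period has expired.
        _, retain_until = self._objects[key]
        if time.time() < retain_until:
            raise PermissionError(f"{key!r} is under retention and cannot be deleted")
        del self._objects[key]
```

A backup written with `put("ehr-2024-06-01", snapshot, retention_days=30)` can be read at any time, but any attempt to overwrite or delete it inside the 30-day window raises `PermissionError` – which is exactly the property that keeps a backup recoverable after an attack.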
In order for hospitals to successfully leverage immutable object storage, however, a formal strategy is needed to ensure adequate data protection, risk management, and cost control. Here are some key considerations.
1. Plan for growth to keep costs under control
Storage has evolved from file storage, to block storage, to object storage. While it may be tempting to treat object storage like other forms of storage and move it to the cloud, the public cloud can be inflexible, and its costs are difficult to manage for the large data sets that object storage handles so well. Hospitals will therefore need to explore solutions that are not only scalable but also affordable, to avoid creeping costs.
Furthermore, managing these workloads optimally across different cloud environments becomes increasingly challenging, meaning the benefits of standardizing on a single platform are lost. Object storage and immutable backups are therefore much more likely to be held in on-site facilities or in a private cloud.
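One way to keep creeping costs visible is to model growth up front. The sketch below projects storage footprint and annual cost under compound growth; the growth rate and price per terabyte are hypothetical planning inputs, not vendor quotes:

```python
def project_storage_cost(current_tb, annual_growth_rate, cost_per_tb_month, years):
    """Project storage footprint (TB) and annual cost under compound growth.

    All inputs are hypothetical planning figures, not real prices.
    Returns a list of (year, terabytes, annual_cost) tuples.
    """
    projections = []
    tb = current_tb
    for year in range(1, years + 1):
        tb *= 1 + annual_growth_rate          # compound yearly growth
        annual_cost = tb * cost_per_tb_month * 12
        projections.append((year, round(tb, 1), round(annual_cost, 2)))
    return projections


# e.g. 500 TB today, 25% annual growth, a notional $10/TB/month:
for year, tb, cost in project_storage_cost(500, 0.25, 10, 3):
    print(f"year {year}: {tb} TB, ${cost:,.0f}/yr")
```

Running a projection like this for each candidate platform makes it much easier to spot the point at which public-cloud egress and capacity pricing overtakes an on-site deployment.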
2. Use cross-site replication for better security
One of the great strengths of object storage is that data can be copied across multiple sites and locations. Data can easily be replicated within nodes and clusters among distributed data centers for additional backup on-site, off-site, or even across geographies. The flip side, however, is the need to ensure that these more complex storage environments are not more vulnerable to attack, or slower to react to a server failure and get systems back up and running.
Cross-site object storage applications therefore need to be tightly integrated, so that in the event of a system failure traffic can switch immediately from a failed server to a redundant one, avoiding disruption and data loss. This is critical to business continuity during a critical incident, ensuring data from immutable backups is immediately diverted to the end user as required. In the event of a ransomware attack, immediate retrieval of immutable backups held in multiple locations offers maximum protection and system redundancy.
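The failover decision described above can be sketched in a few lines. The function below returns the first replica endpoint that passes a health check, in priority order; the endpoint names and the health-check callback are hypothetical stand-ins for whatever probe a load balancer or client actually performs:

```python
def first_healthy_endpoint(endpoints, is_healthy):
    """Return the first object-storage replica that passes a health check.

    `endpoints` is in priority order (e.g. on-site first, then off-site,
    then cloud); `is_healthy` is whatever probe the load balancer runs.
    """
    for endpoint in endpoints:
        if is_healthy(endpoint):
            return endpoint
    raise RuntimeError("no healthy object-storage replica available")


# Hypothetical replica sites, with the on-site cluster down after an attack:
replicas = ["s3.onsite.hospital.example",
            "s3.offsite.hospital.example",
            "s3.cloud.hospital.example"]
active = first_healthy_endpoint(replicas, lambda e: "onsite" not in e)
print(active)  # traffic fails over to the off-site replica
```

In production this decision lives in the load balancer or the storage client rather than in application code, but the logic – probe in priority order, fail over transparently, alert when no replica answers – is the same.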
3. Think access control to ensure data security and protection
Like any system, object storage applications need safeguards against malicious or inadvertent configuration changes by the users that manage and access the data. Access control offers an important degree of protection, ensuring that any user interacting with the object storage is authenticated and authorized to perform the requested action.
Unlike ‘hot’ storage such as file storage, which is used for active or ‘live’ data, object storage is more frequently used for archiving or data backups in what are called data ‘buckets’, which most clinicians and support staff will not touch on a day-to-day basis. This in itself reduces the risk of an end user inadvertently clicking on a link that opens the door to ransomware. Even with an access policy in place, however, authorized users can still potentially alter the object store, or leave it vulnerable to alteration at a later date. This is another reason why holding object storage and immutable backups on-site may be preferable to the public cloud, where different providers have more flexible and complex data management and access use cases. An overarching access control policy for the object store is therefore advisable, offering further protection and the ability to tailor the approved system configurations.
Access control policies outline the restrictions imposed on users during the creation, use, or deletion of data, hence preventing users from potentially opening up public access to the object store. Firewall configurations can additionally be put in place to ensure access requests are only approved when they come from the hospital’s own private cloud.
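As an illustration of what such a policy can look like, here is a hypothetical S3-style bucket policy (the bucket name, ARNs, and private address range are all assumptions) that denies object deletion outright and denies any request originating outside the hospital's private network:

```python
import json

# Hypothetical S3-style bucket policy for a backup bucket:
# statement 1 blocks deletion even by authorized users;
# statement 2 blocks all access from outside the private address range.
BACKUP_BUCKET_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyObjectDeletion",
            "Effect": "Deny",
            "Principal": "*",
            "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
            "Resource": "arn:aws:s3:::hospital-backups/*",
        },
        {
            "Sid": "DenyAccessOutsidePrivateNetwork",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": ["arn:aws:s3:::hospital-backups",
                         "arn:aws:s3:::hospital-backups/*"],
            # Deny any request not coming from the hospital's private range
            # (the 10.0.0.0/8 block here is an illustrative assumption).
            "Condition": {"NotIpAddress": {"aws:SourceIp": "10.0.0.0/8"}},
        },
    ],
}

print(json.dumps(BACKUP_BUCKET_POLICY, indent=2))
```

Because explicit `Deny` statements override any `Allow` in this policy language, even an administrator whose credentials are stolen cannot delete backup objects or open the bucket to the public internet without first changing the policy itself – a far smaller attack surface to monitor.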
Object storage and immutable backups are essential components of a secure, agile healthcare IT infrastructure. But it is a responsibility that needs to be taken seriously, and architects need to adapt continuously to evolving threats. The crucial role of object storage and immutable backups in preventing or recovering from a ransomware attack cannot be overstated – nor can the need for integrated, multi-site redundancy and failover, enabling immediate data recovery and uninterrupted patient care.
About James Loveday
James Loveday is a Healthcare Specialist and #ADCHero at Loadbalancer.org, guardians of uptime, and experts at load balancing object storage applications, using clever, not complex, load balancers that put hospital IT teams in control.