Healthcare IoT

HealthOps: The Next Frontier of Healthcare Management

6 min read

This article is the final installment of a three-part series. We've already examined how the trends in technology and business strategy converged around the principle of agility. Now, we'll discuss how the push for greater agility has given way to a broader "DevOps-ification" of business.

In the first installment, we looked at the different ages of technology that reigned from the turn of the 20th century until today, while the second installment addressed the different business strategies and methodologies that have grown out of those technological ages to shape modern-day business.

These mutually entangled evolutions – of technologies and business strategies – have pushed and pulled each other, each exerting influence on the other as both tumbled forward through space and time. The result is that both share common traits and trends.

The Arrival of DevOps

Bigger data, more automation, greater collaboration, and increased responsiveness have culminated in the “DevOps” phenomenon. A portmanteau of “development” and “operations”, DevOps emerged as the successor to agile methodologies and a hot trend among software companies looking to get better results and more harmonious interactions from their people, processes, and technologies.

DevOps draws on all the previous waves of innovation, requiring a lean, agile business environment in which roles and responsibilities are fluid, departments operate with shared responsibility and data flows freely between them.

DevOps is often thought of as a sort of playbook containing specific software lifecycle management techniques, such as:

  • Continuous integration: Developers regularly merge small software changes into the main code repository, where each merge is automatically built and tested. This reduces the need for system downtime, decreases the likelihood of work getting lost, prevents version drift, and avoids confusion around the change validation process.
  • Continuous delivery: Businesses change, test, and release their products on a continual basis, leading to faster time-to-market.
  • Continuous deployment: Software changes that pass automated testing are released to the public without a manual approval step. This speeds up product improvement even further and removes the pressure of perfecting software before “release day.”
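To make the distinction between these practices concrete, here is a minimal, purely illustrative Python sketch of the two gates involved. The `run_tests`, `integrate`, and `deploy` functions are hypothetical stand-ins for what real CI/CD tooling automates; this is a model of the idea, not any vendor's pipeline:

```python
def run_tests(change):
    """Stand-in for an automated test suite run against a proposed change."""
    return change.get("tests_pass", False)

def integrate(change, main_branch):
    """Continuous integration: merge only changes whose tests pass,
    so the main branch is always in a releasable state."""
    if not run_tests(change):
        return False  # change rejected; main stays healthy
    main_branch.append(change["id"])
    return True

def deploy(main_branch, deployed):
    """Continuous deployment: every merged change ships automatically,
    with no human sign-off step between merge and release."""
    for change_id in main_branch:
        if change_id not in deployed:
            deployed.append(change_id)
    return deployed

main, live = [], []
integrate({"id": "fix-123", "tests_pass": True}, main)   # merged
integrate({"id": "wip-456", "tests_pass": False}, main)  # rejected
deploy(main, live)  # only the passing change reaches production
```

The point of the sketch: the only gate between a developer's change and production is an automated one, which is exactly what makes small, frequent releases safe.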

The truth, however, is that DevOps is not a playbook, it's a philosophy. And it extends well beyond software development and lifecycle management. It offers a holistic and goal-oriented approach to business and process management. At its core are the following principles:

  • Modularization breaks down long processes into component chunks, allowing small teams to each take ownership of their piece and “sprint” to the finish line. This accelerates release cycles, catches and fixes bugs more quickly, gives users more frequent added value, and prevents complex interdependencies from bottlenecking progress. As an added benefit, those chunks – removed from the context of the larger project in which they’re embedded – can be rearranged and repackaged together with other chunks for future applications.
  • Multi-disciplinary collaboration helps each team better understand its place in the whole, so that every individual contribution serves the larger effort as well as possible.
  • Decentralized task distribution and self-management enable more effective task “chunking”, increase organizational agility, and allow team members to work to their strengths.
  • Smart, well-defined management policies and tools place all component tasks and processes within a unifying architecture that seamlessly coordinates and integrates individual contributions to advance large, complex projects. Continuous integration is one example of how this principle is enacted.
  • Automation of as many workflows as possible increases efficiency and scalability. DevOps endeavors to reduce to an absolute minimum any actions and processes that cannot scale – which is why automation is so important.
  • Continuous iteration and testing deploys different product versions simultaneously and subjects them to the rigors of testing to tease out any bugs, vulnerabilities, or inadequacies as quickly as possible – preventing breaks in value delivery chains and increasing throughput. At the same time, data is constantly gathered to inform the process and drive the next iteration.
  • Comprehensive documentation of changes and their effects, ideally collected passively through automated logs, creates a clear audit trail. This makes it easy to identify which of many interconnected actions and changes and iterations caused a problem.
  • Strict role and permission governance protects key areas of businesses from accidental changes or damage in the course of normal operations. It restricts access to the deepest layers of the business ecosystem.
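The last two principles – passive audit trails and strict role governance – work hand in hand, and can be illustrated together in a short, hypothetical Python sketch. The roles, actions, and log format here are invented for illustration, not drawn from any real system:

```python
import functools

AUDIT_LOG = []  # passive, append-only record of every attempted action
PERMISSIONS = {  # hypothetical role-to-action mapping
    "admin": {"update_config"},
    "nurse": set(),
}

def governed(action):
    """Allow an action only for roles that hold it, logging every attempt –
    permitted or not – so there is always a complete audit trail."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(role, *args, **kwargs):
            allowed = action in PERMISSIONS.get(role, set())
            AUDIT_LOG.append({"role": role, "action": action, "allowed": allowed})
            if not allowed:
                raise PermissionError(f"{role} may not {action}")
            return fn(role, *args, **kwargs)
        return inner
    return wrap

@governed("update_config")
def update_config(role, key, value):
    return f"{key}={value}"

update_config("admin", "alert_threshold", 5)   # permitted, and logged
try:
    update_config("nurse", "alert_threshold", 9)
except PermissionError:
    pass                                        # denied, but still logged
```

Notice that the audit trail is generated as a side effect of normal operation – nobody has to remember to document anything, which is exactly the "passively collected" documentation the principle calls for.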

It’s no coincidence that these DevOps principles and practices emerged in the same spatio-temporal context as Internet of Things technologies. They’re two sides of the same coin and in many ways represent the culmination of the past century’s innovation. That coin represents the value of precision, speed, adaptability, forward-looking anticipation and proactive adjustment, data sharing and connectivity, distributed production, scalable processes, and customized solutions.

In other words, round approximations have been replaced with precise calculations, predictive approaches have supplanted reactive ones, silos are fewer and further between, centralized management, systems, and processes are being decentralized, and information is flowing more freely and automatically. Production and management processes are designed more sustainably for repeat-use across more and tighter cycles. Tools are being made more purpose-specific, while people are nurturing more multi-functional skill sets and being given more latitude to use them. 

The Benefits Driving DevOps-ification

Using DevOps methods, organizations are able to identify and correct errors earlier in the product chain, improve efficiencies, remove bottlenecks, accelerate value delivery, improve quality, more effectively align collaboration across complex projects, and prevent problems from spiraling out of control.

A good case in point is the popular retailer Target. Target introduced a DevOps business model a few years ago, and it’s become crucial to the company’s success. Target owns 1,800 stores across the US, plus a sizable online presence, so it’s not surprising that it found itself with silos that got in the way of smooth operations. Even data about store locations was scattered across three different systems.

In 2013, Target began focusing on removing silos and increasing collaboration between IT and development teams. It switched to continuous integration and continuous deployment, and found that it was easier to make small, frequent corrections than to handle a rare but major crisis. By extending DevOps principles to its app, its customer service channels, and its in-store POS systems, Target saw a sharp drop in customer frustration and a sharp increase in employee satisfaction.

Today, DevOps-ification is affecting healthcare too. Patients are discerning consumers, demanding a treatment experience that parallels the customer service, speed of delivery, and personalization that they get from the likes of Amazon. The only way that healthcare providers can keep up with patient demands is by adopting similarly automatable, collaborative, smart workflows.

The Dawn of HealthOps

For healthcare, it’s only a matter of time until these trends coalesce with IoMT into something bigger. It’s something we could call “HealthOps.” HealthOps would see healthcare providers adopt a working culture similar to DevOps.

The future of healthcare is HealthOps and it promises great things. Among other benefits:

  • Healthcare providers can organize, analyze and mine complex data from disparate sources. Improved analysis and data sharing will enable them to avoid preventable diseases, predict epidemics, increase the accuracy of diagnosis and improve patient care with more agile, personalized treatment.
  • Continuous iteration, testing and integration bring fast, frequent incremental updates. This results in less downtime for critical medical devices, improved speed and performance for health apps, and fewer software and processing errors.
  • Automation, modularization and decentralized processes increase efficiencies and free up your human talent to better innovate, communicate and collaborate while, at the same time, improving healthcare infrastructure.
  • Data collection and policy enforcement mechanisms passively generate the documentation required to demonstrate compliance with regulatory requirements. What’s more, automated backups and checks help streamline the compliance process across multiple environments.

Imagine a patient – let’s call him Joe – who had a pacemaker implanted at the local hospital last year because of a heart arrhythmia. The pacemaker sends data about Joe’s medical condition to the medical center’s patient follow-up system.

One day, the pacemaker notes irregularities in Joe’s heartbeat a few times within an hour. The device delivers the shock needed to correct the arrhythmia and sends an alert to the relevant medical center and manager. Joe gets an automated text message informing him about his heart events, which he hadn’t even noticed. The text asks him to click a link to schedule an examination within 24 hours.

At the same time, Joe’s cardiologist and general physician both get alerts about Joe’s heart incident. They examine additional data about Joe’s recent general health, nutrition, and exercise levels, gathered by Joe’s smart watch – data he agreed to share with his healthcare provider and sync to his personal health portal. A team of doctors consults and agrees on a personalized treatment plan for Joe. After Joe is briefed on the recommended course of action, he affixes his digital signature to the documents sent to his portal, and his cardiologist remotely adjusts the pacemaker settings via a secure application.

In a healthcare model such as this, not only is a potentially catastrophic medical event avoided, but, using a mix of smart technology and smart processes, it’s preemptively attended to almost entirely outside of the hospital – keeping costs low and resources free.
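The alert fan-out in Joe’s story boils down to a simple event-routing rule: one device event triggers a patient notification and a clinician alert for every member of the care team, simultaneously. The sketch below is purely illustrative – the event shape and notification channels are hypothetical, not any real system’s API:

```python
def handle_device_event(event, notifications):
    """Route one device alert to the patient and the whole care team at once."""
    if event["type"] != "arrhythmia_corrected":
        return notifications  # ignore event types we don't handle here
    # Patient-facing channel: automated text with a scheduling link
    notifications.append(("patient_sms", event["patient"],
                          "Heart event detected; please book an exam within 24h"))
    # Clinician-facing channel: one alert per care-team member
    for clinician in event["care_team"]:
        notifications.append(("clinician_alert", clinician, event["patient"]))
    return notifications

sent = handle_device_event(
    {"type": "arrhythmia_corrected", "patient": "Joe",
     "care_team": ["cardiologist", "general_physician"]},
    [],
)
# One patient SMS plus one alert per care-team member
```

The design choice worth noting is that the patient and clinicians are notified in parallel rather than in sequence – no one in the chain has to notice the event and manually pass it along.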

Security and HealthOps

HealthOps has the potential to fully unleash the promise of smart healthcare. In the scenario above, Joe may well have died without it. But HealthOps could be a nightmare if security is sidelined. Just like its precursor DevOps, HealthOps will need to leverage secure-by-design technologies and processes to thrive.

The success of a DevOps approach is so closely tied to security that the underlying philosophy has been expanded, and the preferred term is now DevSecOps. DevSecOps is a way of incorporating software development, IT operations, and cybersecurity into a highly connected, continuously integrated system that is protected from hackers and viruses. Without the necessary safeguards, a cyber criminal or virus can rip through a super-connected DevOps-enabled system in no time.

HealthOps requires just as much careful, early planning with security in mind. If you think about what’s at stake – the safety of patients, continuance of care, and the confidentiality of ePHI – the need for strong security capable of spanning and protecting the complex web of interactions running between your technologies, systems, and processes is paramount.

Consider a simple example: many IoMT devices enable remote telemetry, allowing the device to communicate key measurements to staff who might be out of sight of the device. This makes a lot of sense and makes it a lot easier to better monitor more patients. But since so many devices are built on modularized software and hardware frameworks, designed for repeat use and diverse application, they often also come with built-in functionality that is at best unwanted and at worst dangerous. When the digital framework used by a medical device to enable remote telemetry also enables remote control, it can be a big problem. 
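One secure-by-design pattern that addresses this is interface allowlisting: the device exposes only the telemetry commands it actually needs, so the framework’s remote-control functionality is simply unreachable from the network. Here is a hypothetical Python sketch – the command names and device state are invented for illustration:

```python
# Only read-only monitoring commands are exposed; control commands
# (e.g. changing an infusion rate) are deliberately absent.
ALLOWED_COMMANDS = {"read_telemetry", "read_battery"}

def dispatch(command, device_state):
    """Handle a network command; anything outside the allowlist is refused
    before it can touch the device at all."""
    if command not in ALLOWED_COMMANDS:
        raise PermissionError(f"command '{command}' is not exposed")
    # "read_battery" -> device_state["battery"], etc.
    return device_state[command.removeprefix("read_")]

state = {"telemetry": {"rate_ml_per_h": 2.5}, "battery": 87}
dispatch("read_battery", state)           # permitted: monitoring only
try:
    dispatch("set_infusion_rate", state)  # remote control: refused outright
except PermissionError:
    pass
```

The key idea is that the dangerous functionality isn’t merely password-protected – it is never wired up to the network interface in the first place, which is what “secure by design” means in practice.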

As it happens, an example of this sort made headlines not long ago when it was discovered that a syringe pump could be hijacked and remotely controlled through the hospital network. Hackers would be able to turn the pump on or off, speed up or slow down the drug delivery rate, silence alarms, and more. That’s just one example, of course, but it demonstrates the threat well.


Throughout history, as technology has advanced, business practices and organizational methodologies have always rushed to catch up. Recently we’ve seen IoT technologies come online all over the world. We’ve also seen businesses of every sort increasingly adopt DevOps methodologies and principles to boost efficiencies, streamline workflows, enhance collaboration, accelerate production, and improve responsiveness.

With the advent of the Internet of Medical Things, it’s only a matter of time until forward-thinking administrators looking to get the most out of their new technologies follow suit and roll out their own take on DevOps.

An automated, collaborative, data-driven, integratively decentralized, and highly responsive HealthOps will be realized in the very near future. The possibilities are practically endless and the future looks to be bright. As a society, we’re on the cusp of fundamentally transforming the way we deliver care, and indeed how we approach health itself. That said, this future is not guaranteed. If the principle of secure-by-design is not built into HealthOps, we’re liable to do more harm than good.

Healthcare providers need to act now to put the necessary security systems into place so that they can make the most of HealthOps.