The Top Ten Mistakes Healthcare Executives Can Make and How to Avoid Them

“Some executives think that artificial intelligence is a sentient being, but it is simply a better way to turn data into actionable insights.”

-Ryan Detert, CEO, Influential

Artificial intelligence (AI), with its machine and deep learning, natural language processing, and robotic process automation tools, has become an integral part of many sectors of society, and now more than ever of healthcare as well. More than 90% of healthcare executives agree that artificial intelligence improves healthcare, and healthcare AI startups raised close to $1 billion in Q4 2019 alone (Gil Press, Forbes, February 21, 2020). With a coalition of enlightened leaders within a healthcare organization, artificial intelligence enables interdisciplinary collaboration and yields valuable dividends. Here is a list of ten common mistakes that healthcare executives can make and how to avoid them:


  1. Thinking that artificial intelligence is too complicated to understand. Some executives feel that AI is too esoteric (low “explainability”), but an astute AI expert can deconstruct the important concepts into understandable parts and relate them to potential projects. Understanding the basic tenets of AI is essential for any healthcare executive in order to ask pertinent questions of service providers and avoid being misled.

  2. Believing that artificial intelligence can solve most if not all problems. It is essential for the healthcare executive to understand both the capabilities and the limitations of artificial intelligence. It is easy to get swept up by the laudable accomplishments reported in the media and try to transpose those feats to the healthcare system. A solution is to maintain a portfolio of several AI projects to balance expectations and to involve more stakeholders.

  3. Over-relying on well-known brand consultants for advice on artificial intelligence. Larger consulting firms often deliver polished PowerPoint presentations to healthcare executives but frequently have little substantive expertise or accountability. These brand firms often invoke AI to convince healthcare executives that it is the next big thing without truly assessing what the organization actually needs.

  4. Not recognizing that artificial intelligence adoption is more about culture than technology. Adoption of AI in any organization is achieved mostly by ardent champions who engender a transformative culture, not simply by deployment of technology. The Harvard Business Review article on artificial intelligence (“Building the AI-Powered Organization”) states that an organization needs to budget as much for integration and education as for the AI technology itself.

  5. Being distracted by attractive technology instead of focusing on the problem that needs to be solved. It is important to bring a “design thinking” mindset to AI adoption in the healthcare system, solving problems rather than being seduced by the alluring capabilities of AI tools. Too often healthcare executives purchase sophisticated AI services that do not directly address the problems needing solutions.

  6. Thinking that artificial intelligence tools will be “turn-key” and require little support. Most AI projects require a robust effort in data management and curation, and this preparation often takes considerable time and manpower. There are often IT and/or data structure and architecture deficiencies that healthcare executives need to reconcile prior to full deployment of AI in order to sustain a return on investment.

  7. Not having enough patience to wait for longer-term dividends. Some AI tools take time and investment to deploy, and the financial return is not always immediate. A longer time horizon is necessary for some of these AI projects, so perseverance and trust are essential. As Bill Gates famously stated about new technology: “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.”

  8. Believing that artificial intelligence will displace humans from their jobs. While some aspects of some jobs can be replaced by automation and other AI methodologies, executives who are knowledgeable in artificial intelligence (“up-skilling”) may well replace those who are not at all cognizant of tools in this domain. Certain human capabilities such as intuition, creativity, and ethics are not yet within the reach of machines, so most executives need not fear being replaced.

  9. Being convinced that artificial intelligence is overhyped and that its role in health care is transient. While some of the hype exaggerates AI’s effectiveness in delivering a value proposition, other AI tools (particularly in medical imaging and workflow efficiency) have already gained significant traction and are deployed with substantial dividends in both higher performance and cost reduction. AI education for the healthcare executive is vital.

  10. Assuming that artificial intelligence projects are too expensive, especially since return on investment is uncertain. Certain AI projects, especially those that utilize robotic process automation, can yield financial dividends relatively quickly. In addition, data scientists at startups or local universities are often willing to work on simple AI projects for low compensation (or even for free), and such an arrangement can be an inexpensive initiation into the AI domain with little financial risk.

AI is a new lens that we can put on to see novel solutions to the myriad problems we face in clinical medicine and health care. In short, AI is a valuable, once-in-a-generation resource that all healthcare executives should learn to adopt.