
Do You Know The Risk?: The Urgent Need for Data Security in Healthcare AI

By Donna Grindle, 405(d) Task Group Ambassador Lead
July 10, 2024

In the face of evolving cyber threats, the healthcare sector stands at a pivotal juncture. In the dynamic and fast-paced world of healthcare, embracing Artificial Intelligence (AI) marks a pioneering shift toward enhanced patient care and streamlined operations. AI’s potential to deliver personalized treatments and leverage predictive analytics is not just transformative; it’s revolutionary. As healthcare institutions rely more on AI to enhance patient care and operational efficiency, the need for robust data protection measures becomes increasingly crucial. It is essential to create and consistently enforce protocols for assessing data privacy and security issues when choosing and implementing AI technologies from the start, not as an afterthought. Surprisingly, many current AI applications may not have undergone a comprehensive security evaluation, if any at all, highlighting the need to close this gap quickly.

The undeniable appeal of artificial intelligence lies in its wide range of benefits, spanning from predictive analytics to tailored treatment strategies. Beyond patient care applications, its potential to streamline and improve healthcare operations is equally exciting. Yet, amid this excitement, the potential risks are sometimes overlooked. Issues concerning data security and ethical decision-making are significant: unauthorized use of patient information and biased or poisoned data feeding AI algorithms can lead to ethical quandaries and substantial legal and financial consequences. Neglecting cybersecurity measures goes beyond mere business repercussions; it can undermine the trust between patients and healthcare providers that effective patient care and patient safety depend on.

It is essential for organizations to create a detailed blueprint now to seamlessly and securely incorporate AI into various aspects of healthcare, including patient care, cybersecurity, business operations, research and development, finances, and human resources. Such integration is crucial for the present and future of all entities in the sector. Establishing thorough guidelines and protocols will enable organizations to responsibly and effectively utilize AI tools, ultimately improving healthcare services and operational productivity while protecting confidential data and patient safety. Of course, this plan must be expected to constantly evolve as innovation provides more exciting opportunities that are also accompanied by additional risk concerns.

The rate of advancement and investment in AI technologies heightens the importance of creating and following your established guidelines as soon as possible. If you don’t have a plan already, don’t delay. You probably have some types of AI technologies already in use within your ecosystem; your vendors may even be providing them as a new feature that your users have already embraced. Here are a few points to consider including in your AI management plan.

Considerations for your blueprint:

Ethical Framework and Governance

  • Establish a governance structure for AI oversight.

  • Consider input from all areas of the organization, including legal, finance, HR, business operations, IT, cybersecurity, and vendors, as well as all clinical positions and even patients.

  • Prioritize the transparency of AI systems to stakeholders.

  • Ensure AI decision-making processes are explainable and understandable.

  • Clearly define what your organization considers to be artificial intelligence technologies and how they may be used.

  • Require that AI tools added as a new “feature” of currently implemented technology be evaluated just as new solutions would be.

  • Define a non-negotiable approval requirement for any AI implementation that will access your systems and data, whether introduced by staff or by vendors; for example, new AI cybersecurity tools or revenue cycle data analysis.

  • Define clear rules on the use of generative AI tools such as ChatGPT in any job role, for any reason.

  • Document AI use decisions and reasoning, along with those providing input. This information is often needed much later when someone asks “Why are we doing this?” One way to capture such a record is sketched below.
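
To make that last documentation point concrete, here is a minimal sketch, in Python, of how an AI-use decision might be captured as a structured record. The field names, example values, and record format are assumptions for illustration only, not a 405(d) or HIPAA requirement; adapt them to whatever your governance structure actually approves.

    # Minimal, illustrative sketch of an AI-use decision record.
    # Field names and example values are assumptions, not a prescribed format.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class AIDecisionRecord:
        tool_name: str       # the AI tool or feature being considered
        use_case: str        # what it will be used for
        decision: str        # e.g., "approved", "denied", "pilot only"
        reasoning: str       # why the decision was made
        decided_on: date     # when the decision was made
        contributors: list = field(default_factory=list)  # who provided input
        data_touched: list = field(default_factory=list)  # e.g., PHI categories involved

    # Example entry for a hypothetical vendor feature.
    record = AIDecisionRecord(
        tool_name="Vendor ambient documentation assistant",
        use_case="Draft visit notes for clinician review",
        decision="pilot only",
        reasoning="BAA updated for AI data handling; clinicians review every output",
        decided_on=date(2024, 7, 10),
        contributors=["CISO", "Privacy Officer", "CMIO", "Legal"],
        data_touched=["clinical notes", "patient demographics"],
    )
    print(record)

A record like this, kept alongside meeting notes, makes it much easier to answer “Why are we doing this?” months or years later.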

Education

  • Educate your team in charge of making these plans on how to define and recognize the different types of AI technologies, their advantages, and the risks they bring to your organization.

  • Train all staff, and vendors if needed, to recognize when they are considering or using artificial intelligence technologies.

  • Train all staff and vendors on your policies and procedures, and on the potential pitfalls that come with the wonderful advances AI brings to your ability to deliver care and services.

  • Document the training done in all cases as you would other security awareness training, including date, time, attendees, and content.

Risk Evaluation

  • Assess all potential legal and regulatory risks associated with the AI uses that will be allowed under your governance definitions.

  • Review and adjust Business Associate Agreements (BAAs) to address AI data handling.

  • Implement data privacy and security protocols specifically for AI tools.

  • Develop a custom security risk analysis and assessment for use with AI applications (one way to start is sketched after this list).

  • Ensure compliance with regulations as they continue to evolve to address AI.

  • Document every review, what was considered, and who was involved.
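
As a starting point for that custom risk analysis item, the sketch below simply walks a short list of AI-specific screening questions and reports the ones that remain unanswered or answered “no.” The questions and the screening logic are assumptions for illustration, not an official assessment instrument; a real analysis should follow the risk methodology your organization already uses.

    # Illustrative AI-specific screening questions (not an official instrument).
    AI_RISK_QUESTIONS = [
        "Does the tool access, store, or transmit PHI, and is that data flow documented?",
        "Is the vendor covered by a current BAA that addresses AI data handling?",
        "Is our data used to train or fine-tune the model, and can that be disabled?",
        "Are model outputs reviewed by a human before they affect patient care?",
        "Can the tool's activity be logged and audited like other systems?",
    ]

    def screen_tool(answers):
        """Return the questions answered 'no' or left unanswered, for follow-up."""
        return [q for q in AI_RISK_QUESTIONS if not answers.get(q, False)]

    # Example: two answers are known, so three open items carry into the formal analysis.
    open_items = screen_tool({AI_RISK_QUESTIONS[0]: True, AI_RISK_QUESTIONS[1]: True})
    for item in open_items:
        print("Follow up:", item)

The point is not the code itself but the habit: the same questions, asked the same way, for every AI tool, with the answers documented.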

Ongoing Monitoring and Evaluation

  • Document regular reviews of your AI risk management plans, which may need to be more frequent than your overall risk management reviews due to constantly changing opportunities.

  • Regularly monitor the use of new AI tools that have not been previously evaluated or have not been clearly defined for acceptable use.

  • Regularly monitor for unapproved use of AI technologies (see the sketch after this list for one simple check).

  • Regularly review AI tools for new features, ongoing performance, and compliance with standards.

  • Adjust AI usage protocols based on evolving challenges and technologies, with input from defined roles in the organization.

  • Document this activity and findings for your records and future reference.
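
Finally, one lightweight way to support the monitoring items above is to compare the tools actually observed in your environment against the list your governance process has approved. The sketch below uses hard-coded example lists as placeholders; in practice the observed list would come from whatever asset inventory, software audit, or network log data you already collect.

    # Minimal sketch: flag observed AI tools that never went through governance review.
    # Both lists are illustrative placeholders for real inventory data.
    APPROVED_AI_TOOLS = {
        "vendor ambient documentation assistant",
        "radiology triage model v2",
    }

    observed_tools = [
        "Vendor ambient documentation assistant",
        "Free browser chatbot plugin",   # never evaluated or approved
        "Radiology triage model v2",
    ]

    def find_unapproved(observed):
        """Return observed tools missing from the approved list (case-insensitive)."""
        return [t for t in observed if t.lower() not in APPROVED_AI_TOOLS]

    for tool in find_unapproved(observed_tools):
        print("Unapproved AI tool detected, route to governance review:", tool)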

The rapid deployment of AI in healthcare is a testament to the sector’s innovation and commitment to improving patient safety and care. However, the integration of these technologies cannot come at the expense of cybersecurity and ethical integrity. As large enterprises, along with small and medium businesses, explore AI solutions, it is imperative to incorporate cybersecurity reviews into the planning phase. By doing so, we can safeguard sensitive information, ensure regulatory compliance, and uphold the trust that is fundamental to patient safety and care.

When we hear terms like “Copernican revolution” used to describe the transformative impact AI will have on healthcare, it certainly captures the magnitude of change we anticipate. We should not allow our enthusiasm to overshadow the ultimate goal of improving patient safety and care through the secure and safe use of these new technologies. All of our actions, or inactions, in this moment can have lasting consequences for real people. Building a strategic plan now will allow organizations to embrace the moment while remembering that such a paradigm shift always comes with risks that must be addressed along the way.