Information for Physicians and Advanced Practice Clinicians

Why AI?

Following our interview with Ashwin Prakash, MD, PhD, MBA, in the previous issue, we received a number of questions about how artificial intelligence (AI) may be used in CHS hospitals.

1. What exactly is AI and what does it have to do with bedside medicine?

AI, or artificial intelligence, refers to the simulation of human intelligence processes by machines, especially computer systems. In healthcare, AI has the potential to transform patient care by automating routine tasks, providing clinical decision support, and enabling personalized medicine.

Machine Learning, a subfield of AI, allows computers to learn from data without being explicitly programmed. This makes it possible to develop algorithms that can identify patterns in patient data that are too complex for humans to detect, which can lead to earlier and more accurate diagnosis, better treatment plans, and improved patient outcomes.

For example, in controlled studies, AI algorithms have classified skin lesions from clinical images with accuracy comparable to that of board-certified dermatologists. AI has also been applied to drug discovery and to predicting patient outcomes.

As AI rapidly evolves, it is likely to play an even more important role in bedside medicine, providing physicians with powerful tools for diagnosing and treating patients. This could lead to a new era of personalized medicine, in which each patient’s treatment plan is highly tailored to their individual needs.

AI’s ability to analyze large amounts of data quickly and improve its performance over time makes it suitable for many healthcare tasks, including:

  • Virtual assistants: AI-powered virtual assistants can help physicians with documentation, scheduling, and other administrative tasks, freeing up more time for patient care.
  • Automated data analysis and documentation: AI algorithms can analyze large volumes of patient data, including medical records, lab results, and imaging studies. This can help physicians quickly identify relevant information and make more informed decisions.
  • Clinical decision support: AI systems can provide physicians with real-time recommendations based on the latest clinical guidelines and best practices. This can help physicians stay up-to-date with the latest medical knowledge.
  • Personalized medicine: AI can be used to analyze individual patient data to identify patterns and predict outcomes. This can help physicians tailor treatments to each patient’s unique needs.
  • Virtual patient engagement: AI-powered tools can help with a variety of tasks, such as scheduling appointments, sending reminders, and managing patient records.
  • Predictive analytics: AI can be used to identify people who are at high risk for certain conditions or complications.

2. What is the Blueprint for an AI Bill of Rights and what impact does it have on regulating AI use in healthcare?

The Blueprint for an AI Bill of Rights was released by the White House Office of Science and Technology Policy (OSTP) in October 2022. It outlines five principles that should guide the design, use, and deployment of automated systems:

  • Safe and effective systems: Individuals should be protected from unsafe or ineffective automated systems.
  • Algorithmic discrimination protections: Individuals should not face discrimination by algorithms, and systems should be designed and used in an equitable way.
  • Data privacy: Individuals should be protected from abusive data practices and have agency over how their data is used.
  • Notice and explanation: Individuals should know when an automated system is being used and understand how and why it contributes to outcomes that affect them.
  • Human alternatives, consideration, and fallback: Individuals should be able to opt out where appropriate and have access to a person who can quickly consider and remedy problems.

The Blueprint is non-binding guidance rather than enforceable regulation, but it is expected to shape future rules governing the use of AI in healthcare.


3. We have (or will be getting) all of these AI decision support tools. What if I don’t agree with the findings/suggestions?

As a physician, you are ultimately responsible for the decisions you make about your patients’ care.

While AI decision support tools can be helpful in providing you with information and recommendations, these tools are only as good as the data they are trained on, and they may not be applicable to every situation.

Here are some tips for using AI decision support tools effectively:

  • Be aware of limitations and use tools only for tasks that they are designed to perform – do not rely solely on these tools for tasks that require a physician’s clinical judgment.
  • Use the tool to help you gather information and generate hypotheses, but you still must apply your own clinical judgment to diagnosis and treatment options.
  • Be prepared to question the tool’s findings and recommendations. If you do not agree with the tool’s suggestions, you should not hesitate to override them.
  • Be prepared to explain your decisions to your patients and their families. If you have made a decision that is different from the tool’s recommendations, you should be able to explain why you did so.

Clinician oversight of AI tools is central to successful use of this technology in healthcare.


4. What happens to a patient’s protected health information when using AI?

While most people have heard of ChatGPT by now, it was not specifically developed to handle healthcare data and may not have the same level of privacy and security protections as a custom platform. For this reason, CHS is working with Google to develop a proprietary, HIPAA-compliant enterprise platform that can ensure PHI/Privacy requirements are met.


We will continue to provide information about AI in future communications, so if you have questions you’d like to see addressed in future publications, please contact Dr. Prakash at ashwin_prakash@chs.net.