Artificial intelligence (AI) has become more prominent in all facets of life, including health care. The Oxford Dictionary defines AI as “the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” The tools created through AI technology can use learning techniques, data sets, and content—such as audio, images, and text—to generate information for the user.
Many AI tools are available to the public and to healthcare professionals to guide their decision making. In health care specifically, AI may be helpful for transcribing and recording patient information, documenting, and sourcing information for patient management. However, regulated members should think critically and consider the following factors before incorporating AI into any part of their practice.
Privacy
For AI to provide useful information, it relies on being fed data. Much of this data (and what the tool learns from it) comes from the input of other users. In other words, when an individual enters data into an AI tool, that person’s data teaches the tool what to say to the next inquirer. This is important for regulated members to consider before entering patient data into an AI tool: there is no guarantee the data will be deleted afterwards rather than retained to help the machine learn so it can help the next user.
Advice
- Don’t enter personal information into an AI tool until you have determined where and how the information is being stored, and what information may be provided to other users.
- Any information collected from patients must remain secure and confidential throughout the entire patient care process, and it is the responsibility of the regulated member to ensure this.
Clinical judgement
Evidence-informed practice requires critical evaluation throughout pharmacy practice. While AI tools are readily available and may be helpful, regulated members must critically assess and evaluate any information used to make patient care decisions.
AI tools also scan websites and other resources, but there is often little transparency about where this additional data comes from or whether it is accurate and reliable. This “black box” effect means you cannot be certain where an AI tool is pulling its data from, which can introduce bias into results. If you do not know how an AI tool was trained, you may not be able to counteract that bias before it influences clinical decision making.
Advice
- Any information provided by AI should be used to inform your decision making—it cannot make a decision for you.
- Make an effort to avoid confirmation bias (interpreting information to confirm something that you already believe to be true).
- Information gleaned from AI is not necessarily applicable to all patients.
Consent
Informed consent is integral to the practices of regulated members and requires dialogue with patients (see Appendix A of the new SPPPT and the Code of Ethics for more). To obtain informed consent, it is important that both regulated members and patients understand the possible consequences of using AI technology. This helps patients make informed decisions about how their information is collected and used.
Advice
- Before entering a conversation with a patient, ask yourself the following: Do I understand the risks and benefits of the technology well enough to communicate them accurately to patients? (e.g., being able to describe how the data is stored and how it is used.)
- Any time an AI tool is considered, inform patients that AI technology will be used and for what purpose.
- Inform the patient to whom and for what purpose their personal information will be disclosed, unless otherwise authorized by law (Principle 4.3 of the Code of Ethics).
Responsibility
Pharmacists and pharmacy technicians have many responsibilities under legislation, the standards of practice, and the Code of Ethics, and these responsibilities apply when using an AI tool in practice. AI is constantly changing and evolving, but any consequences arising from its use by a regulated member remain the responsibility of that regulated member.
Advice
- Before choosing to use AI in your practice, consider the risks and communicate them to colleagues and patients.
- Pharmacists must be aware of any conditions that could compromise independence of clinical judgements and decisions which may pose risks to patients and their care (new SPPPT Standard 1.13).
- Critically evaluate information provided from open-source AI tools.
Remember, AI tools may help pharmacists and pharmacy technicians make decisions, but they do not replace the requirement that you make your own decisions.