Abstract
This paper provides the first comprehensive analysis of ethical issues raised by artificial intelligence (AI) in veterinary medicine for companion animals. Veterinary medicine is a socially valued service, which, like human medicine, will likely be significantly affected by AI. Veterinary AI raises some unique ethical issues because of the nature of the client–patient–practitioner relationship, society’s relatively minimal valuation and protection of nonhuman animals and differences in opinion about responsibilities to animal patients and human clients. The paper examines how these distinctive features influence the ethics of AI systems that might benefit clients, veterinarians and animal patients—but also harm them. It offers practical ethical guidance that should interest ethicists, veterinarians, clinic owners, veterinary bodies and regulators, clients, technology developers and AI researchers.
1 Introduction
This paper provides the first comprehensive analysis of ethical issues raised by artificial intelligence (AI) in veterinary medicine for companion animals. AI—i.e. digital systems that perform tasks normally requiring human intelligence (Russell and Norvig 2021)—is poised to transform human medicine (Topol 2019; Wilson et al. 2021) and may prove equally transformative of veterinary medicine (Basran and Appleby 2022; WIRED Brand Lab 2022). Like human medical AI (Astromskė et al. 2021; Dalton-Brown 2020; Keskinbora 2019), veterinary AI raises important ethical issues. Although several papers touch on ethical aspects of veterinary AI (Appleby and Basran 2022; Ezanno et al. 2021; Steagall et al. 2021), including its implications for ‘livestock’ (Neethirajan 2021),Footnote 1 a more detailed ethical evaluation of companion animal AI is wanting. Our analysis of AI’s ethical implications for companion animal medicine should interest ethicists, veterinarians, clinic owners, veterinary bodies and regulators, clients, technology developers and AI researchers.
Veterinary practice raises unique ethical issues that stem from the client–patient–practitionerFootnote 2 relationship. Companion animals are potentially more exposed to harms from AI than are humans because they lack the same strong social, moral and legal status. For example, the law does not effectively protect animals from wrongful injury or from clients who seek unwarranted or unjustified ‘euthanasia’ (Favre 2016). These conditions are relevant to the ethics of veterinary AI. At the same time, medical AI raises its own distinctive ethical issues—issues like trust, data security and algorithmic transparency—which we also discuss in the veterinary context.
AI in veterinary medicine might be used for business purposes and hospital logistics like booking appointments. Technology that affects practitioner workflow could have ethical implications, as could other AI, such as language translation apps that enable communication with linguistically diverse clients. However, AI for triage, diagnosis, prognosis and treatment raises the most distinctive, complex and consequential ethical questions. We concentrate on AI for such medical decision-making.
Currently, AI enjoys massive public and private investment, propelled by stories like algorithms defeating Jeopardy and Go masters (Mitchell 2019). Another indication of AI’s rapid ascent is the emergence of recent large language models like ChatGPT and text-to-image generators, which demonstrate remarkable, though sometimes strange and biased, outputs (see Fig. 1). Yet most people are bewildered by the technical jargon of artificial neural networks, deep learning, computer vision, random forests and natural language processing (Waljee and Higgins 2010).Footnote 3 Veterinary practitioners too may not always understand, for instance, the ways in which AI learns from data and autonomously updates its algorithms to draw inferences about previously unencountered data (e.g. from patient radiographs or medical records)—and this may create uncertainty about its use in healthcare.
This issue of trust in technology is important. To some degree, medical AI remains just as much an art as a science (Quinn et al. 2021b), and AI developers are only now exploring how to apply modern machine learning (ML) methods successfully in medicine. This involves experimenting with how data are collected and pre-processed, how AI models are applied and optimised and how model performance is evaluated. Each step contains many nuances that could affect model operation in clinic settings and unintentionally harm patients and clients. While busy practitioners cannot be expected to understand all these nuances, they will increasingly need at least a basic understanding of the ethical risks and benefits of AI. This paper identifies and examines these ethical issues.
The paper runs as follows. Section 2 outlines medical AI in veterinary practice. Section 3 introduces ethical principles of AI, human medicine and veterinary medicine. Section 4 identifies and examines nine ethical issues raised by veterinary AI. Section 5 discusses important ethical norms in veterinary medicine and AI’s distinctive implications in that realm, as well as providing some practical guidance for AI’s use.
2 AI in veterinary medicine
Earlier medical AI involved knowledge-based systems, such as the 1970s program MYCIN (Barnett 1982; Schwartz et al. 1987). These ‘expert’ systems involved hard-coding medical expertise from experts to generate rules and infer clinical diagnoses. However, they struggled with the inherent complexity of medical decision-making (Partridge 1987). Modern ML has proved more adept. These models absorb vast data to ‘learn’ rules automatically in the form of mathematical functions that relate predictor variables to target variables. One very successful type of ML, deep learning, employs so-called ‘deep neural networks’ (DNNs) (Bengio and LeCun 2007). DNNs have layers of processing units linked together in patterns somewhat like brain neurons (Russell and Norvig 2021). DNNs can contain numerous layers and anywhere from hundreds to billions of artificial neurons.
AI today often involves ‘supervised’ machine learning in which the samples used to train models are labelled. For example, an ML system may be trained on thousands or millions of biopsy images labelled as either cancerous or healthy tissue. Once trained, the model can be tested on new images to make predictions (e.g. about cancer) and can then be evaluated for diagnostic accuracy and compared with clinician performance. Ideally, the model is subjected to a clinical trial to establish efficacy and cost-effectiveness before being implemented in practice, where its effectiveness should continue to be assessed.
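The train-then-evaluate workflow just described can be sketched in a few lines. The one-feature ‘nearest mean’ classifier and all data below are invented purely for illustration; real veterinary models learn from images or records with vastly richer features.

```python
# Toy sketch of supervised learning: fit a model on labelled samples,
# then measure accuracy on held-out samples. Data are hypothetical.

def train_nearest_mean(samples):
    """Learn one mean feature value per class label."""
    sums, counts = {}, {}
    for feature, label in samples:
        sums[label] = sums.get(label, 0.0) + feature
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(model, feature):
    """Assign the class whose learned mean is closest to the feature."""
    return min(model, key=lambda label: abs(model[label] - feature))

# Hypothetical labelled data: (feature value, label)
train = [(8.2, "cancerous"), (9.1, "cancerous"), (7.8, "cancerous"),
         (2.1, "healthy"), (3.0, "healthy"), (2.6, "healthy")]
test = [(8.5, "cancerous"), (2.4, "healthy"), (9.0, "cancerous")]

model = train_nearest_mean(train)
accuracy = sum(predict(model, f) == y for f, y in test) / len(test)
print(accuracy)  # 1.0 on this tiny, cleanly separated test set
```

Real evaluation is of course far more involved, but the structure is the same: labelled training data, a learned function, and held-out test data for assessment.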
AI shows promise in veterinary medicine. For example, one ML algorithm for detecting canine hyperadrenocorticism had a sensitivity of 96.3% and a specificity of 97.2%, reportedly outperforming other screening methods (Reagan et al. 2020). Some models classify animal cancer, retinal atrophy, or colitis based on images (Zuraw and Aeffner 2021). Deep learning can be applied to detect faecal parasites (Nagamori et al. 2021) or identify canine cardiac enlargement (Li et al. 2020). Some models can outperform veterinary radiologists at certain tasks (Boissady et al. 2020), and others predict seizures in epileptic dogs from ambulatory intracranial sensors (Nejedly et al. 2019). AI might also improve veterinary surgery (Souza et al. 2021) and one day guide robotic veterinary surgeons (Esteva et al. 2019; Panesar et al. 2019). Natural language processing might usefully extract clinical information from patient records for analysis. Finally, there are direct-to-consumer AI products, such as one that predicts differential diagnoses for canine alopecia (Prevett 2019).
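Sensitivity and specificity, the metrics quoted for screening tools like the hyperadrenocorticism classifier above, are simple ratios over a confusion matrix. The counts below are invented solely to show the arithmetic and are not taken from any cited study.

```python
# Screening metrics from hypothetical confusion-matrix counts.

def sensitivity(tp, fn):
    """True-positive rate: proportion of diseased animals correctly flagged."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: proportion of healthy animals correctly cleared."""
    return tn / (tn + fp)

tp, fn = 96, 4    # hypothetical: 100 truly diseased patients
tn, fp = 194, 6   # hypothetical: 200 truly healthy patients

print(round(sensitivity(tp, fn), 3))  # 0.96
print(round(specificity(tn, fp), 3))  # 0.97
```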
Potentially, some AI tools will be more accurate and faster than practitioners and cost-effective for clients. Perhaps, as some suggest, AI will bring “tremendous potential efficiencies and quality improvements in veterinary medicine” (Basran and Appleby 2022). But it also comes with risks and ethical concerns.
3 Principles in AI, medical and veterinary ethics
General AI ethics guidelines speak of ethical principles like transparency, accountability, data security, privacy, safety, fairness and environmental sustainability (Jobin et al. 2019). Many of these principles arise from the distinctive nature of AI and the special risks it creates. As we shall see, such AI ethics principles play a role in the ethics of veterinary AI. AI ethics also borrows from medical ethics (Mittelstadt 2019) and its four widely accepted bioethical principles: nonmaleficence (do no harm), beneficence (do good), respect for autonomy (respect a person’s ability to act on their own values and preferences), and justice (e.g. ensure fair distribution of medical resources) (Beauchamp and Childress 2001).
These medical ethics principles arguably apply in veterinary practice. For example, many would accept that veterinarians have responsibilities to promote patient wellbeing and avoid harming them and to respect the autonomy of clients. However, there are ethically relevant differences with human medicine that can affect those principles’ application (Desmond 2022). For example, human medical practice is mostly funded by large public or private insurance schemes, whereas veterinary medicine is mainly paid for ‘out of pocket’ by private individuals, who sometimes struggle to afford medical attention for their unwell animals (Springer et al. 2022).Footnote 4 Consequently, some clients (and veterinarians) opt for cheaper and inferior diagnostics and treatment and even sometimes for ‘economic euthanasia’ (Boller et al. 2020).
Obviously, animal patients cannot provide autonomous consent for medical interventions.Footnote 5 Hence, companion animal medicine somewhat resembles paediatric medicine (and to some degree gerontology). Medical practitioners and Boards typically endorse an ethically patient-centred approach (Medical Board of Australia 2020) that prioritises significant patient interests over the interests of other parties like parents (Fleischman 2016). While most parents pursue their children’s best interests, paediatricians may override parental autonomy when parents refuse necessary interventions or urge harmful treatment for their children (Gillam 2016). While they respect parents’ interests, paediatricians see their primary duty as being to the patient.
Veterinary medicine has enjoyed comparatively less discussion—and agreement—about the right ethical principles to follow and how they should be interpreted (Beauchamp and Childress 2001; Desmond 2022). There is also disagreement about what constitutes wellbeing for animals (Coghlan and Parker 2023). This has important implications for veterinary AI. Nonetheless, in what immediately follows, we can generally assume that clients and practitioners seek the best for the animals and broadly agree on what that involves. Accordingly, veterinary practitioners will broadly follow principles of nonmaleficence (avoid and minimise harm) and beneficence (do good and provide benefit) regarding patients. Furthermore, veterinarians generally respect the autonomy of their clients. These principles inform our identification of nine ethical issues in veterinary AI.
4 Ethical issues raised by veterinary AI
The nine ethical issues we identify and explain below (Table 1) refer to situations that demand ethical judgement about AI. Such deliberation may involve moral values, principles and theories. Later, we will see how these ethical issues variously affect the three parties in the central patient–client–practitioner relationship (and occasionally parties beyond it).
4.1 Accuracy and reliability
Accurate and reliable AI in pathology, radiography, medicine and surgery could significantly benefit patients, including by eliminating certain human biases and misjudgements. Equally, inaccurate AI could harm patients (and clients) through misdiagnoses and poor treatment recommendations. Importantly, some AI tools may be accurate in terms of test set evaluation but unreliable in clinical practice. This may occur when the training and test sets are not representative of the intended real-world use case or contain biases. For example, an AI screening tool trained to recognise pneumonia from the audio of coughs obtained from hospitalised patients may be accurate for patients in hospital but inaccurate for outpatients (Quinn et al. 2021b). And veterinary AI developed in Northern Hemisphere contexts may be less reliable in Southern Hemisphere contexts.
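The gap between test-set accuracy and real-world reliability can be made concrete with a toy model. All feature values, populations and the threshold rule below are hypothetical; the point is only that a decision boundary fitted to one population can fail on a shifted one, as in the cough-screening example above.

```python
# A classifier can look accurate on a test set drawn from its training
# population yet fail badly on a shifted population. Purely illustrative.

def fit_threshold(positives, negatives):
    """Place the decision boundary midway between the class means."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(positives) + mean(negatives)) / 2

def accuracy(threshold, cases):
    """cases: (feature, is_positive) pairs; predict positive above threshold."""
    return sum((f > threshold) == y for f, y in cases) / len(cases)

# 'Trained' on hospitalised patients with pronounced cough features
threshold = fit_threshold(positives=[8.0, 9.0, 10.0], negatives=[2.0, 3.0, 4.0])

in_distribution = [(8.5, True), (9.5, True), (2.5, False), (3.5, False)]
shifted = [(5.0, True), (5.5, True), (2.5, False), (3.5, False)]  # milder, outpatient-like coughs

print(accuracy(threshold, in_distribution))  # 1.0
print(accuracy(threshold, shifted))          # 0.5: sick outpatients fall below the learned cut-off
```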
Even when AI is trained and evaluated on representative data and found to be accurate, this may not translate to improved clinical outcomes. Medical AI is frequently not well studied in this respect, despite the surrounding hype (Kim et al. 2019). Although randomized clinical trials are the gold standard in evidence-based medicine, a recent systematic review found that few medical AI studies use randomization and only 9/81 nonrandomized studies were prospective (Nagendran et al. 2020). Some AI is flawed by design. For example, AI purporting to diagnose emotions from photos of human faces has been criticised because expressions do not always correlate with emotional states (Crawford 2021b). This problem may also afflict AI for diagnosing animals’ affective, pain, or welfare states (Jaiswal et al. 2020).
4.2 Overdiagnosis
Overdiagnosis involves diagnosis of conditions that are harmless to the patient (Carter et al. 2015). For example, AI might identify harmless bone defects or ‘incidentalomas’ (Myers 1997). Overdiagnosis is a growing but frequently overlooked concern (McKenzie 2016) which can generate unnecessary additional testing and treatment (Capurro et al. 2022). A significant cause of overdiagnosis is large screening programs of apparently healthy individuals (Woolf and Harris 2012). Veterinary AI might significantly promote overdiagnosis and on a larger scale than before, including by promoting more defensive medicine (Sonal Sekhar and Vyas 2013). Therefore, AI-based overdiagnosis should be recognised and minimised where possible.
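A short calculation shows why mass screening of apparently healthy animals drives overdiagnosis: when a condition is rare, even an accurate test returns mostly false positives. The prevalence and accuracy figures below are illustrative assumptions, not data from any cited study.

```python
# Base-rate effect behind screening-driven overdiagnosis: positive
# predictive value (PPV) collapses at low prevalence. Figures hypothetical.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """Probability that a positive screening result reflects real disease."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# A seemingly excellent test applied to a population where only 1% are diseased
ppv = positive_predictive_value(prevalence=0.01, sensitivity=0.96, specificity=0.97)
print(round(ppv, 3))  # ≈ 0.244: roughly three in four positives are false alarms
```

At 50% prevalence (e.g. animals already showing clinical signs), the same test's PPV rises above 0.9, which is why screening healthy populations is a distinct ethical concern.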
4.3 Transparency
Transparency broadly refers to users’ knowledge of how an AI system arrived at its prediction (Castelvecchi 2016). In deep neural networks, the reasons that underlie the model’s prediction can be intrinsically unknowable, due to the model’s enormous complexity. Such algorithmically opaque models are dubbed ‘Blackboxes’. For some AI, a trade-off may arise between model performance and intelligibility. Transparency can also be reduced by for-profit companies that conceal their AI’s workings from users and competitors. Even when AI models are open source and available, busy practitioners may find it too onerous to seek and digest such information.
Some believe that Blackbox AI is not problematic if it is accurate. After all, practitioners justifiably prescribe drugs with largely unknown mechanisms. It is true that use of opaque systems can also sometimes be justified. However, algorithmic opacity can hamper the detection of inaccuracies and biases in predictions. In contrast, interpretable AI can more readily be ‘caught out’ making mistakes, thereby aiding quality assurance and safety. Some therefore argue that medical Blackboxes should be altogether avoided (Rudin 2019), or else used only when equally accurate interpretable systems are unavailable (Quinn et al. 2021a) or when non-transparent systems are demonstrably and significantly superior.
4.4 Data security
Data used to train AI can be private, sensitive and extensive. Data stored locally or on company servers might be leaked, sold on, or hacked. Malicious agents mounting adversarial attacks can even render AI systems unreliable (Kelly et al. 2019), while anonymised health data can sometimes be matched with other data to reidentify individuals (Culnane et al. 2017). Models may also be attacked to make them yield personally identifiable data used during training, even after the original data have been deleted (Carlini et al. 2021). Veterinary-related data are not immune from these risks. Clients may thus have an interest in data security and in providing consent for reuse of their data, e.g., to further train AI tools.
4.5 Trust and distrust
Having trustworthy technology will be important if AI is to be beneficial (Parasuraman and Riley 1997). Unwarranted trust in AI can cause its misuse, while unwarranted distrust can cause disuse that deprives patients of benefits (Jacovi et al. 2021). For example, failure to employ inhouse AI that saves time on external pathology processing could cause critical time delays for sick animals. Distrust in AI by clients, perhaps exacerbated by troubling news stories or personal experiences, may even precipitate more general distrust in the veterinary profession. Distrust can rise for opaque systems (Ferrario et al. 2021), while excessive trust may result from ‘automation bias’ (Goddard et al. 2012). Conversely, humans sometimes wrongly ignore computer-based outputs, especially when outputs are obscure or prone to false alarms. AI companies may heavily promote their wares or even use medical AI to recommend their other products or tests, and veterinarians with investments in AI could face conflicts of interest. The veterinary profession should be aware of such commercial pressures and tactics that could influence clinical decision-making.
4.6 Autonomy of clients
Respect for the autonomy of human patients standardly requires obtaining their (or their guardians’) informed consent for interventions. This requires giving patients relevant information about the nature, risks and benefits of interventions (Beauchamp 2011). Plausibly, veterinary practitioners should similarly inform their clients of “the advantages, disadvantages and most likely outcomes for each [care] option; the possibilities of favourable and unfavourable outcomes; the likelihood that additional testing or treatment might be needed; the associated costs; and the strength of the supporting evidence” (Brown et al. 2021).
It has been argued that medical practitioners using medical AI should understand and convey its pitfalls to human patients (Geis et al. 2019). Respect for client autonomy at least prima facie requires that veterinarians explain to clients the broad nature, risks and benefits of chosen AI-based interventions, just as they do with other interventions. Furthermore, many clients may be ignorant, misinformed, or uncertain about AI, heightening the need for providing clear information about its pros and cons. For example, practitioners might need to explain how an AI tool can sometimes make misdiagnoses due to its training data, or that it has not yet been subjected to rigorous clinical testing.
Veterinarians must normally explain to clients the general basis of their diagnoses and prognoses in ways non-medical people can understand. In Blackbox AI, however, algorithmic opacity precludes client (and practitioner) understanding of the reasons behind the machine’s predictions or recommendations. That may not trouble some clients, but others may prefer transparent AI that provides such explanations (Quinn et al. 2021b).
4.7 Information overload and skill erosion
Some AI might also improve life for veterinarians. Partial outsourcing of cognition to trustworthy AI ‘assistants’ may ease workloads (Basran and Appleby 2022). Yet AI, which is a complex and ever-evolving technology, might also increase information overload for veterinarians who already endure high workplace stresses (Pohl et al. 2022). Not all technologies make our lives easier—consider the way that household appliances have not always reduced domestic labour mostly undertaken by womenFootnote 6 (Cowan 1983). A recent survey found that 70% of medical practitioners believed “digital health technologies will be a challenging burden” and that they lacked “time to learn the value of the technology or foster the belief in their ability to use it…ultimately taking time away from patient care rather than improving it” (Elsevier 2022, pp. 52, 84).
Gradual erosion of medical skills through machine reliance is another theoretical possibility (Mittelstadt and Floridi 2016). Some skill erosion may be overall beneficial, as when generalists refer complex patients to specialists for improved health outcomes (Brown et al. 2021), although that change has sometimes reduced accessibility to healthcare. However, over-reliance on fast and convenient intelligent decision support tools (Kempt et al. 2022) might in time weaken medical skills that veterinarians should retain.
4.8 Responsibility for AI-influenced outcomes
Accountability is an important idea in AI ethics because it can be unclear who is legally and ethically responsible for AI-generated harms. The difficulty of assigning or determining liability is called the ‘responsibility gap’ (Santoni de Sio and Mecacci 2021). Responsible parties could include engineers, companies, practitioners, professional organisations, regulatory bodies and clinic managers and owners. Until medical AI reaches a very high degree of reliability, there is reason to say that individual practitioners must remain ethically and professionally responsible for using it. This is especially important for non-transparent AI where detection of harmful outputs can be more difficult.
4.9 Environmental effects
Although the environmental effects of healthcare generally (Lenzen et al. 2020), and of AI specifically, are often neglected, these harms can be considerable (Hagendorff 2021). Veterinary AI could contribute to AI’s overall environmental impact (Jones and West 2019). While veterinarians are rightly focused on their immediate patients’ wellbeing, there is a case for becoming more aware of veterinary medicine’s increasing environmental footprint (Koytcheva et al. 2021) and for seeking more sustainable AI tools where possible.
5 Veterinary AI and ethical responsibilities, risks and guidance
5.1 Role and responsibilities of practitioners
As we have shown, AI could have both positive and negative implications for patients, clients and practitioners. In companion animal medicine, the interests of these parties are often aligned: what benefits or harms patients often benefits or harms clients (and sometimes practitioners). Nonetheless, the interests and wishes of clients (and practitioners) and companion animal patients can sometimes conflict (Rosoff et al. 2018; Springer et al. 2021). This raises important ethical questions about veterinarians’ role and responsibilities (Kimera and Mlangwa 2015; Legood 2000; Magalhães-Sant’Ana et al. 2015; Moses 2018; Mullan and Quain 2017; Rollin 2006; Sandøe et al. 2015; Tannenbaum 1991; Yeates and Savulescu 2017; Yeates and Main 2010) and how they relate to AI.
While many veterinarians traditionally saw their primary obligations as being to the ‘owner’ of the animal rather than to the patient themselves (Rollin 2006), this profoundly human-centred view began to shift as societal attitudes to animals evolved and the profession began to appreciate the strength of human-animal relationships (Knesl et al. 2016; Serpell 1996). Nonetheless, veterinarians can still have different understandings of the strengths of their duties—differences which move to the forefront when the interests of patients and the wishes of clients or clinic managers conflict.
Most contemporary veterinarians would broadly claim to be advocates for their patients, yet ‘advocate’ admits of degrees. A strong patient advocate (Coghlan 2018) or ethically patient-centred practitioner is more determined to safeguard the patient’s interests and speak up on their behalf (Hernandez et al. 2018). While the patient-centred practitioner will not ignore clients’ perspectives and situations, such as economic insecurity (Brown et al. 2021), they will search hard for solutions that promote the patient’s important interests and they may sometimes refuse to go along with harmful requests from clients. Like paediatricians (Rollin 2006), patient-centred veterinarians prioritise beneficence and nonmaleficence towards the patient over, say, respect for client autonomy on those key occasions of conflict. They will also seek to safeguard patient interests when they receive pressure from other parties, such as peers or clinic managers, to act counter to their patients’ interests.
A veterinarian’s conception of their role and responsibilities could affect their behaviour toward AI. For example, some practitioners may more readily acquiesce to pressure from clients or clinic managers who are enthusiastic about AI and urge its adoption, even though a tool may lack rigorous scientific validation or rely on an uninterpretable and relatively risky ML model. A patient-centred practitioner would use veterinary AI in higher-stakes situations only when they had grounds to believe it would be of overall benefit to the patient.
Another example of how a practitioner’s ethical stance could influence their use of medical AI concerns the important ethical issue of euthanasia (Rollin 2006). Imagine that an AI system designed to make treatment recommendations for animals presents ‘euthanasia’ as an option for a patient who, despite their condition, could probably have a decent life with appropriate treatment. Although such treatment recommendations do not yet feature in AI, it is entirely conceivable that they will appear in some future veterinary AI.
If that happens, it is possible that the client (and veterinarian) could be influenced by an AI recommendation for euthanasia that is not ethically justified. While client-centred practitioners may agree to a client’s request for euthanasia based on an AI recommendation or option, an ethically patient-centred practitioner would strongly counsel the client to reject that aspect of the AI’s recommendation. The converse situation may occur when an AI recommends onerous and futile treatment for a dying patient who would thereby be made much worse-off and so suffer what has been termed ‘dysthanasia’ (Clark and Dudzinski 2013; Quain et al. 2021). If future AI makes treatment recommendations as well as diagnoses, veterinarians will need to be aware of the potential for uncritical acceptance of such advice from machines.
5.2 Distinctive risks associated with veterinary AI
Some risk factors are distinctive or especially salient for veterinary AI and are worth highlighting. First and perhaps most importantly, companion animals, unlike humans, are classed as legal property and enjoy relatively few social and regulatory protections (Sunstein 2003). Moreover, our societies remain profoundly human-centred overall, typically affording little moral consideration to animals compared to humans (Singer 1995). This pronounced ethical anthropocentrism shows itself in the fact that AI ethics has largely neglected nonhuman animals (exceptions include Owe and Baum 2021; Singer and Tse 2022)—both directly as subjects of AI itself and indirectly as subjects of the environmental impacts of AI (Coghlan and Parker 2023).
Consequently, some AI developers and some veterinarians may devote less energy and care than they might to ensuring that AI promotes patients’ interests (and may have less legal impetus to do so). Furthermore, most veterinarians work in small businesses or corporate-run hospitals; this could potentially result in pressure to increase profit and client turnover, which may overtly or subtly affect patient care (Rosoff et al. 2018), such as by promoting unnecessary testing and treatment.
Second, being less regulated than human medicine, veterinary medicine potentially affords more opportunities for experimenting with cutting-edge yet relatively untested treatments. Indeed, one sometimes hears the view that AI might be ‘tested’ on animal patients before being used on human patients. Quain et al. (2021) argue that the freedom to pursue various kinds of advanced but experimental veterinary care, such as stem-cell treatment, can sometimes (though not always) pose extra risks to patients. Misguided, faulty, or insufficiently tested AI also carries risks despite being a promising cutting-edge technology. AI can be used on animal patients without the same testing and regulatory approval (e.g. by the Food and Drug Administration) that human AI requires. Additionally, veterinary medicine has fewer resources for research into medical interventions and devices (Basran and Appleby 2022).
Third, there is currently qualitative and quantitative data scarcity for animals compared to humans for training ML models (Appleby and Basran 2022). Veterinary data records lack the requirements for consistency and standardisation sometimes imposed on human medical data records (Lustgarten et al. 2020). These factors might make it more difficult to develop and deploy effective and reliable ML models. (Note, however, that the relatively minimal legal regulation of animal health records could sometimes improve data access.) Although data scarcity can be overcome through data sharing agreements, such sharing also raises risks for the privacy of medical records.
5.3 Ethical guidance for AI developers, practitioners and veterinary bodies
As we noted, the ways in which practitioners approach AI depend partly on their ethical understanding of their role and responsibilities as veterinarians (as well as on their understanding of AI and level of enthusiasm for it). Let us assume that practitioners, clinic owners and hospital managersFootnote 7 generally prioritise the interests of patients or act in ethically patient-centred ways. Drawing on the above analysis, we suggest the ethical principles and goals listed in Table 2 for governing AI use in veterinary medicine. Alongside the principles and goals, recommendations and examples are provided.
6 Conclusion
Veterinary medicine is a socially valued profession that, like human medicine, is likely to be significantly affected by AI. In this paper, we showed that veterinary AI creates risks, benefits and ethical issues that are both familiar from human medicine and unique or distinctive. Ethical responses to veterinary AI can be influenced by views about practitioner roles and responsibilities. In general, contemporary veterinarians aim to practise nonmaleficence and beneficence towards patients and to respect client autonomy. However, these principles may be differently interpreted. For example, a strongly patient-centred practitioner who prioritises patients’ vital interests may refuse to use insufficiently tested or excessively risky medical AI even when clients or clinic owners or managers improperly demand it. Equally, the patient-centred practitioner might persuade uncertain or sceptical clients that sufficiently validated and trialled AI tools can significantly benefit patients.
To provide guidance on using veterinary AI, we identified the following principles and goals: nonmaleficence, beneficence, transparency, respect for client autonomy, data privacy, feasibility, accountability and environmental sustainability (Table 2). We strongly recommend that the veterinary profession not allow AI developers, AI companies and insurance providers to dictate the design and uses of AI without proper consideration of relevant concerns, risks and ethical values. Awareness of commercial overhyping of AI and potential exploitation of animals and clients would be wise. Ongoing conversations may need to occur between practitioners, veterinary organisations, insurance companies, AI vendors and AI experts that address the ethical issues we identified (Table 1). Finally, as veterinary AI progresses, veterinarians may need education about the ethical issues it raises so that they can adequately protect and benefit their animal patients and human clients. Such education may need to begin at university (Quinn and Coghlan 2021) and extend into continuing professional education.
Data availability
Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.
Notes
Compared to medical practitioners, veterinary practitioners are typically less directly confronted by the principle of justice. Nonetheless, there are ethical questions about the need for public funding of veterinary medicine and fairness for poorer clients and their animals (Mullan and Quain 2017). Expensive but beneficial AI for animals could conceivably raise justice issues.
Although animals might provide or withhold assent (Kantin and Wendler 2015).
Thanks to a reviewer for this example.
Who need not be veterinary practitioners or nurses.
References
Appleby RB, Basran PS (2022) Artificial intelligence in veterinary medicine. J Am Vet Med Assoc 260(8):819–824. https://doi.org/10.2460/javma.22.03.0093
Astromskė K, Peičius E, Astromskis P (2021) Ethical and legal challenges of informed consent applying artificial intelligence in medical diagnostic consultations. AI Soc 36(2):509–520. https://doi.org/10.1007/s00146-020-01008-9
AVMA (2018a) U.S. veterinarians 2018. American Veterinary Medical Association. https://www.avma.org/resources-tools/reports-statistics/market-research-statistics-us-veterinarians-2018. Accessed 15 July 2022
Barnett G (1982) The computer and clinical judgment. N Engl J Med 307(8):493–494
Basran PS, Appleby RB (2022) The unmet potential of artificial intelligence in veterinary medicine. Am J Vet Res 83(5):385–392. https://doi.org/10.2460/ajvr.22.03.0038
Beauchamp TL (2011) Informed consent: its history, meaning, and present challenges. Camb Q Healthc Ethics 20(4):515–523. https://doi.org/10.1017/S0963180111000259
Beauchamp TL, Childress JF (2001) Principles of biomedical ethics. Oxford University Press, New York. https://doi.org/10.1136/jme.28.5.332-a
Bengio Y, LeCun Y (2007) Scaling learning algorithms towards AI. Large-Scale Kernel Mach 34(5):1–41
Boissady E, de La Comble A, Zhu X, Hespel A (2020) Artificial intelligence evaluating primary thoracic lesions has an overall lower error rate compared to veterinarians or veterinarians in conjunction with the artificial intelligence. Vet Radiol Ultrasound 61(6):619–627. https://doi.org/10.1111/vru.12912
Boller M, Nemanic TS, Anthonisz JD, Awad M, Selinger J, Boller EM, Stevenson MA (2020) The effect of pet insurance on presurgical euthanasia of dogs with gastric dilatation-volvulus: a novel approach to quantifying economic euthanasia in veterinary emergency medicine. Front Vet Sci. https://doi.org/10.3389/fvets.2020.590615
Brown CR, Garrett LD, Gilles WK, Houlihan KE, McCobb E, Pailler S et al (2021) Spectrum of care: more than treatment options. J Am Vet Med Assoc 259(7):712–717. https://doi.org/10.2460/javma.259.7.712
Capurro D, Coghlan S, Pires DEV (2022) Preventing digital overdiagnosis. JAMA 327(6):525–526. https://doi.org/10.1001/jama.2021.22969
Capurro D, Velloso E (2021) Dark patterns, electronic medical records, and the opioid epidemic. arXiv. http://arxiv.org/abs/2105.08870. Accessed 5 August 2022
Carlini N, Tramer F, Wallace E, Jagielski M, Herbert-Voss A, Lee K et al (2021) Extracting training data from large language models. In 30th USENIX security symposium (USENIX Security 21), pp 2633–2650
Carter SM, Rogers W, Heath I, Degeling C, Doust J, Barratt A (2015) The challenge of overdiagnosis begins with its definition. BMJ 350(2):h869. https://doi.org/10.1136/bmj.h869
Castelvecchi D (2016) Can we open the black box of AI? Nat News 538(7623):20
Clark JD, Dudzinski DM (2013) The culture of dysthanasia: attempting CPR in terminally ill children. Pediatrics 131(3):572–580
Coghlan S (2018) Strong patient advocacy and the fundamental ethical role of veterinarians. J Agric Environ Ethics 31(3):349–367. https://doi.org/10.1007/s10806-018-9729-4
Coghlan S, Parker C (2023) Harm to nonhuman animals from AI: a systematic account and framework. Philos Technol 36(2):25. https://doi.org/10.1007/s13347-023-00627-6
Cowan RS (1983) More work for mother. Pantheon Books, New York
Crawford K (2021a) Atlas of AI. Yale University Press, New Haven and London
Crawford K (2021b) Artificial intelligence is misreading human emotion. The Atlantic. https://www.theatlantic.com/technology/archive/2021/04/artificial-intelligence-misreading-human-emotion/618696/. Accessed 29 April 2022
Culnane C, Rubinstein BI, Teague V (2017) Health data in an open world. arXiv preprint arXiv:1712.05627
Dalton-Brown S (2020) The ethics of medical AI and the physician–patient relationship. Camb Q Healthc Ethics 29(1):115–121. https://doi.org/10.1017/S0963180119000847
Desmond J (2022) Medicine, value, and knowledge in the veterinary clinic: questions for and from medical anthropology and the medical humanities. Front Vet Sci. https://doi.org/10.3389/fvets.2022.780482
Elsevier (2022) Clinician of the future: a 2022 report. https://www.elsevier.com/connect/clinician-of-the-future. Accessed 1 August 2022
Esteva A, Robicquet A, Ramsundar B, Kuleshov V, DePristo M, Chou K et al (2019) A guide to deep learning in healthcare. Nat Med 25(1):24–29
Ezanno P, Picault S, Beaunée G, Bailly X, Muñoz F, Duboz R et al (2021) Research perspectives on animal health in the era of artificial intelligence. Vet Res 52(1):40. https://doi.org/10.1186/s13567-021-00902-4
Favre D (2016) An international treaty for animal welfare. In: Cao D, White S (eds) Animal law and welfare—international perspectives. Springer International Publishing, Cham, pp 87–106
Ferrario A, Loi M, Viganò E (2021) Trust does not need to be human: it is possible to trust medical AI. J Med Ethics 47(6):437–438. https://doi.org/10.1136/medethics-2020-106922
Fleischman AR (2016) Pediatric ethics: protecting the interests of children. Oxford University Press, New York
Geis JR, Brady A, Wu CC, Spencer J, Ranschaert E, Jaremko JL et al (2019) Ethics of artificial intelligence in radiology: summary of the joint European and North American multisociety statement. Insights Imaging. https://doi.org/10.1186/s13244-019-0785-8
Gillam L (2016) The zone of parental discretion: an ethical tool for dealing with disagreement between parents and doctors about medical treatment for a child. Clin Ethics 11(1):1–8. https://doi.org/10.1177/1477750915622033
Goddard K, Roudsari A, Wyatt JC (2012) Automation bias: a systematic review of frequency, effect mediators, and mitigators. J Am Med Inf Assoc 19(1):121–127
Hagendorff T (2021) Blind spots in AI ethics. AI Ethics. https://doi.org/10.1007/s43681-021-00122-8
Hernandez E, Fawcett A, Brouwer E, Rau J, Turner PV (2018) Speaking up: veterinary ethical responsibilities and animal welfare issues in everyday practice. Animals 8(1):15
Jacovi A, Marasović A, Miller T, Goldberg Y (2021) Formalizing trust in artificial intelligence: prerequisites, causes and goals of human trust in AI. Proceedings of the 2021 ACM conference on fairness, accountability, and transparency. Association for Computing Machinery, New York, NY, USA, pp 624–635
Jaiswal A, Raju AK, Deb S (2020) Facial emotion detection using deep learning. In: 2020 International conference for emerging technology (INCET). IEEE, pp 1–5. https://doi.org/10.1109/INCET49848.2020.9154121
Jobin A, Ienca M, Vayena E (2019) The global landscape of AI ethics guidelines. Nat Mach Intell 1(9):389–399. https://doi.org/10.1038/s42256-019-0088-2
Jones RS, West E (2019) Environmental sustainability in veterinary anaesthesia. Vet Anaesth Analg 46(4):409–420
Kantin H, Wendler D (2015) Is there a role for assent or dissent in animal research? Camb Q Healthc Ethics 24(4):459–472. https://doi.org/10.1017/S0963180115000110
Kelly CJ, Karthikesalingam A, Suleyman M, Corrado G, King D (2019) Key challenges for delivering clinical impact with artificial intelligence. BMC Med 17(1):1–9
Kempt H, Heilinger J-C, Nagel SK (2022) “I’m afraid I can’t let you do that, Doctor”: meaningful disagreements with AI in medical contexts. AI Soc. https://doi.org/10.1007/s00146-022-01418-x
Keskinbora KH (2019) Medical ethics considerations on artificial intelligence. J Clin Neurosci 64:277–282. https://doi.org/10.1016/j.jocn.2019.03.001
Kim DW, Jang HY, Kim KW, Shin Y, Park SH (2019) Design characteristics of studies reporting the performance of artificial intelligence algorithms for diagnostic analysis of medical images: results from recently published papers. Korean J Radiol 20(3):405–410
Kimera SI, Mlangwa JE (2015) Veterinary ethics. In: Encyclopedia of global bioethics. Springer Cham, Switzerland
Kliegr T, Bahník Š, Fürnkranz J (2021) A review of possible effects of cognitive biases on interpretation of rule-based machine learning models. Artif Intell 295:103458. https://doi.org/10.1016/j.artint.2021.103458
Knesl O, Hart BL, Fine AH, Cooper L (2016) Opportunities for incorporating the human–animal bond in companion animal practice. J Am Vet Med Assoc 249(1):42–44. https://doi.org/10.2460/javma.249.1.42
Koytcheva MK, Sauerwein LK, Webb TL, Baumgarn SA, Skeels SA, Duncan CG (2021) A systematic review of environmental sustainability in veterinary practice. Top Companion Anim Med 44:100550. https://doi.org/10.1016/j.tcam.2021.100550
Legood G (2000) Veterinary ethics. Bloomsbury Publishing, London
Lenzen M, Malik A, Li M, Fry J, Weisz H, Pichler P-P et al (2020) The environmental footprint of health care: a global assessment. Lancet Planet Health 4(7):e271–e279. https://doi.org/10.1016/S2542-5196(20)30121-2
Li S, Wang Z, Visser LC, Wisner ER, Cheng H (2020) Pilot study: application of artificial intelligence for detecting left atrial enlargement on canine thoracic radiographs. Vet Radiol Ultrasound 61(6):611–618. https://doi.org/10.1111/vru.12901
Lustgarten JL, Zehnder A, Shipman W, Gancher E, Webb TL (2020) Veterinary informatics: forging the future between veterinary medicine, human medicine, and one health initiatives—a joint paper by the Association for Veterinary Informatics (AVI) and the CTSA One Health Alliance (COHA). JAMIA Open 3(2):306–317. https://doi.org/10.1093/jamiaopen/ooaa005
Magalhães-Sant’Ana M, More SJ, Morton DB, Osborne M, Hanlon A (2015) What do European veterinary codes of conduct actually say and mean? A case study approach. Vet Record 176(25):654. https://doi.org/10.1136/vr.103005
McKenzie BA (2016) Overdiagnosis. J Am Vet Med Assoc 249(8):884–889. https://doi.org/10.2460/javma.249.8.884
Medical Board of Australia (2020) Good medical practice: a code of conduct for doctors in Australia. https://www.medicalboard.gov.au/codes-guidelines-policies/code-of-conduct.aspx. Accessed 14 July 2022
Mitchell M (2019) Artificial intelligence: a guide for thinking humans. Penguin UK, London
Mittelstadt B (2019) Principles alone cannot guarantee ethical AI. Nat Mach Intell 1(11):501–507. https://doi.org/10.1038/s42256-019-0114-4
Mittelstadt B, Floridi L (2016) The ethics of big data: current and foreseeable issues in biomedical contexts. Sci Eng Ethics 22(2):303–341. https://doi.org/10.1007/s11948-015-9652-2
Moses L (2018) Another experience in resolving veterinary ethical dilemmas: observations from a veterinarian performing ethics consultation. Am J Bioeth 18(2):67–69
Mullan S, Quain A (eds) (2017) Veterinary ethics: navigating tough cases. 5m Books Ltd, Great Easton
Myers NC (1997) Adrenal incidentalomas: diagnostic workup of the incidentally discovered adrenal mass. Vet Clin N Am Small Anim Pract 27(2):381–399. https://doi.org/10.1016/S0195-5616(97)50038-6
Nagamori Y, Sedlak RH, DeRosa A, Pullins A, Cree T, Loenser M et al (2021) Further evaluation and validation of the VETSCAN IMAGYST: in-clinic feline and canine fecal parasite detection system integrated with a deep learning algorithm. Parasit Vectors 14(1):89. https://doi.org/10.1186/s13071-021-04591-y
Nagendran M, Chen Y, Lovejoy CA, Gordon AC, Komorowski M, Harvey H et al (2020) Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies. BMJ. https://doi.org/10.1136/bmj.m689
Neethirajan S (2021) Ethics of digital animal farming. Preprints, 2021070368. https://doi.org/10.20944/preprints202107.0368.v1
Nejedly P, Kremen V, Sladky V, Nasseri M, Guragain H, Klimes P et al (2019) Deep-learning for seizure forecasting in canines with epilepsy. J Neural Eng 16(3):036031. https://doi.org/10.1088/1741-2552/ab172d
Newberry M (2017) Pets in danger: exploring the link between domestic violence and animal abuse. Aggress Violent Beh 34:273–281. https://doi.org/10.1016/j.avb.2016.11.007
Owe A, Baum SD (2021) Moral consideration of nonhumans in the ethics of artificial intelligence. AI Ethics. https://doi.org/10.1007/s43681-021-00065-0
Panesar S, Cagle Y, Chander D, Morey J, Fernandez-Miranda J, Kliot M (2019) Artificial intelligence and the future of surgical robotics. Ann Surg 270(2):223–226. https://doi.org/10.1097/SLA.0000000000003262
Parasuraman R, Riley V (1997) Humans and automation: use, misuse, disuse, abuse. Hum Factors 39(2):230–253. https://doi.org/10.1518/001872097778543886
Partridge D (1987) The scope and limitations of first generation expert systems. Futur Gener Comput Syst 3(1):1–10
Pohl R, Botscharow J, Böckelmann I, Thielmann B (2022) Stress and strain among veterinarians: a scoping review. Ir Vet J 75(1):15. https://doi.org/10.1186/s13620-022-00220-x
Prevett R (2019) Vet AI: a pioneering platform for pets. foundry4. https://foundry4.com/vet-ai-a-pioneering-platform-for-pets. Accessed 16 July 2022
Quain A, Ward MP, Mullan S (2021) Ethical challenges posed by advanced veterinary care in companion animal veterinary practice. Animals 11(11):3010. https://doi.org/10.3390/ani11113010
Quinn TP, Jacobs S, Senadeera M, Le V, Coghlan S (2021a) The three ghosts of medical AI: can the black-box present deliver? Artif Intell Med. https://doi.org/10.1016/j.artmed.2021.102158
Quinn TP, Senadeera M, Jacobs S, Coghlan S, Le V (2021b) Trust and medical AI: the challenges we face and the expertise needed to overcome them. J Am Med Inf Assoc 28(4):890–894
Quinn TP, Coghlan S (2021) Readying medical students for medical AI: the need to embed AI ethics education. arXiv preprint, pp 1–10
Reagan KL, Reagan BA, Gilor C (2020) Machine learning algorithm as a diagnostic tool for hypoadrenocorticism in dogs. Domestic Anim Endocrinol 72:106396. https://doi.org/10.1016/j.domaniend.2019.106396
Rollin BE (2006) An introduction to veterinary medical ethics: theory and cases, 2nd edn. Blackwell Publishing, Oxford
Rosoff PM, Moga J, Keene B, Adin C, Fogle C, Ruderman R et al (2018) Resolving ethical dilemmas in a tertiary care veterinary specialty hospital: adaptation of the human clinical consultation committee model. Am J Bioeth 18(2):41–53
Rudin C (2019) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 1(5):206–215
Russell SJ, Norvig P (2021) Artificial intelligence: a modern approach, 4th edn. Pearson, London
Sandøe P, Corr S, Palmer C (2015) Companion animal ethics. John Wiley & Sons, Oxford
Santoni de Sio F, Mecacci G (2021) Four responsibility gaps with artificial intelligence: why they matter and how to address them. Philos Technol 34(4):1057–1084. https://doi.org/10.1007/s13347-021-00450-x
Schwartz WB, Patil RS, Szolovits P (1987) Artificial intelligence in medicine where do we stand? Jurimetrics 27(4):362–369
Serpell J (1996) In the company of animals: a study of human–animal relationships. Cambridge University Press, Cambridge
Singer P (1995) Animal liberation. Random House, New York
Singer P, Tse YF (2022) AI ethics: the case for including animals. AI Ethics. https://doi.org/10.1007/s43681-022-00187-z
Sonal Sekhar M, Vyas N (2013) Defensive medicine: a bane to healthcare. Ann Med Health Sci Res 3(2):295
Souza GV, Hespanha ACV, Paz BF, Sá MAR, Carneiro RK, Guaita SAM et al (2021) Impact of the internet on veterinary surgery. Vet Anim Sci 11:100161. https://doi.org/10.1016/j.vas.2020.100161
Springer S, Sandøe P, Grimm H, Corr SA, Kristensen AT, Lund TB (2021) Managing conflicting ethical concerns in modern small animal practice—a comparative study of veterinarian’s decision ethics in Austria, Denmark and the UK. PLoS ONE 16(6):e0253420. https://doi.org/10.1371/journal.pone.0253420
Springer S, Lund TB, Grimm H, Kristensen AT, Corr SA, Sandøe P (2022) Comparing veterinarians’ attitudes to and the potential influence of pet health insurance in Austria, Denmark and the UK. Vet Record 190(10):e1266. https://doi.org/10.1002/vetr.1266
Steagall PV, Bustamante H, Johnson CB, Turner PV (2021) Pain management in farm animals: focus on cattle, sheep and pigs. Animals 11(6):1483. https://doi.org/10.3390/ani11061483
Sunstein CR (2003) The rights of animals. Univ Chicago Law Rev 70:387–401
Tannenbaum J (1991) Ethics and animal welfare: the inextricable connection. J Am Vet Med Assoc 198(8):1360–1376
Topol EJ (2019) High-performance medicine: the convergence of human and artificial intelligence. Nat Med 25(1):44–56. https://doi.org/10.1038/s41591-018-0300-7
van der Linden D, Zamansky A, Hadar I, Craggs B, Rashid A (2019) Buddy’s wearable is not your buddy: privacy implications of pet wearables. IEEE Secur Privacy 17(3):28–39. https://doi.org/10.1109/MSEC.2018.2888783
Waljee AK, Higgins PDR (2010) Machine learning in medicine: a primer for physicians. Off J Am Coll Gastroenterol ACG 105(6):1224–1226. https://doi.org/10.1038/ajg.2010.173
Wilson A, Saeed H, Pringle C, Eleftheriou I, Bromiley PA, Brass A (2021) Artificial intelligence projects in healthcare: 10 practical tips for success in a clinical environment. BMJ Health Care Inf 28(1):e100323. https://doi.org/10.1136/bmjhci-2021-100323
WIRED Brand Lab (2022) Cloud to clinic: Zoetis’ vision for veterinary practices. Wired. https://www.wired.com/sponsored/story/cloud-to-clinic-zoetis-vision-for-veterinary-practices/. Accessed 3 May 2022
Wong ZSY, Zhou J, Zhang Q (2019) Artificial intelligence for infectious disease big data analytics. Inf Dis Health 24(1):44–48. https://doi.org/10.1016/j.idh.2018.10.002
Woolf SH, Harris R (2012) The harms of screening: new attention to an old concern. JAMA 307(6):565–566. https://doi.org/10.1001/jama.2012.100
WSAVA (2022) Global veterinary community. World Small Animal Veterinary Association. https://wsava.org/. Accessed 16 July 2022
Yeates JW, Main DC (2010) The ethics of influencing clients. J Am Vet Med Assoc 237(3):263–267. https://doi.org/10.2460/javma.237.3.263
Yeates J, Savulescu J (2017) Companion animal ethics: a special area of moral theory and practice? Ethic Theory Moral Pract 20(2):347–359
Zuraw A, Aeffner F (2021) Whole-slide imaging, tissue image analysis, and artificial intelligence in veterinary pathology: an updated introduction and review. Vet Pathol. https://doi.org/10.1177/03009858211040484
Acknowledgements
We thank an anonymous reviewer for their very helpful feedback.
Funding
Open Access funding enabled and organized by CAUL and its Member Institutions. There is no funding to declare for this paper.
Author information
Contributions
SC and TQ initially discussed the conceptual ideas. SC wrote the first draft, and TQ reviewed it and contributed to subsequent drafts. Both SC and TQ approved the final submission.
Ethics declarations
Conflict of interest
The authors have no relevant financial or non-financial interests to disclose.
Ethics approval and consent to participate
N/A.
Consent for publication
Both authors fully consent to publication.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Thomas Quinn: Currently independent researcher
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Coghlan, S., Quinn, T. Ethics of using artificial intelligence (AI) in veterinary medicine. AI & Soc 39, 2337–2348 (2024). https://doi.org/10.1007/s00146-023-01686-1