Is Artificial Intelligence (AI) a Pipe Dream? Why Legal Issues Present Significant Hurdles to AI Autonomy
Abstract
Proponents of artificial intelligence (AI) technology have suggested that in the near future, AI software may replace human radiologists. Although assimilation of AI into the specialty has occurred more slowly than predicted, developments in machine learning, deep learning, and neural networks suggest that technologic hurdles and costs will eventually be overcome. However, beyond these technologic hurdles, formidable legal hurdles threaten the impact of AI on the specialty. Legal liability for errors committed by AI will shape the ultimate role of AI within radiology, including whether AI remains a simple decision support tool or develops into an autonomous member of the health care team. Additional areas of uncertainty include the potential application of products liability law to AI and the approach the U.S. FDA takes in classifying autonomous AI as a medical device. The current ambiguity in the legal treatment of AI will profoundly influence the development of autonomous AI, given that vendors, radiologists, and hospitals cannot reliably assess the liability associated with implementing such tools. Advocates of AI in radiology and health care in general need to lobby for legislative action that clarifies the liability risks of AI in a way that does not deter technologic development.
HIGHLIGHTS
•
Although technologic hurdles impeding autonomous AI in radiology may ultimately be overcome, formidable legal hurdles also may influence the impact of AI on the specialty.
•
Tort law doctrines that may affect handling of AI in health care include medical malpractice, vicarious liability, and products liability.
•
Proponents should lobby for legislation rather than allow litigation to delineate AI liability; the legislature is better positioned to balance interests and protect innovation.
The challenge of automation and its potential to relegate human workers to obsolescence is not novel. From early folk tales such as the legend of John Henry the steel-driving man [1], to science fiction such as I, Robot [2] and The Terminator [3], authors and screenwriters have for decades predicted that technology would supplant the human workforce, often with undesirable consequences. Owing to advancements in deep learning and neural networks, health care has not been immune to such speculation. Radiology, with its emphasis on technology and, at its core, heavy reliance on sophisticated pattern recognition, has proved fertile ground for proponents of artificial intelligence (AI). It has been suggested [4] that an AI algorithm can learn to perform the pattern recognition functions of a radiologist faster, less expensively, and more accurately, always improving and never tiring.
This speculation has reached the public psyche, with radiology often touted as a field whose automation is inevitable [5]. Concerns about the impact of AI have even adversely affected applications for radiology residencies [6]. Although predictions of the imminent demise of radiology have been tempered in recent years, uncertainty remains with respect to the ramifications of AI for the specialty.
Radiology practices are integrating AI into their workflows, ranging from computer-aided detection (CAD) software tools to somewhat more advanced computer-aided diagnosis (CADx), with the expectation that ultimately even more complex deep learning algorithms will emerge. These initial forms of AI serve as tools to make radiologists more accurate and efficient but hardly obsolete. The allure of more sophisticated AI algorithms (so-called machine learning, deep learning, and neural networks) is the potential for the technology to go beyond predefined criteria, learn from datasets to which it has been exposed, and ultimately function autonomously as a diagnostician.
Duplicating radiologists' abilities through technology has proved more of a challenge than originally posited, and there is resultant skepticism regarding the ultimate impact of AI on the field, at least in the near term. Technologic hurdles and costs will decrease; it is only a matter of time until machines can offer a reasonable facsimile of the radiologist report. However, even beyond these technologic hurdles, formidable legal obstacles, often not given enough attention in the literature, threaten the impact of AI on the specialty and, if unchanged, have the potential to preclude the future success of this emerging industry.
This article explores legal considerations of AI in radiology from the perspective of the U.S. legal system, building on legal principles previously presented in the context of AI in nuclear medicine and molecular imaging [7]. Because laws are inherently regional and derive from actions of the legislature, courts, and administrative bodies within each country or jurisdiction, the legal handling of AI differs around the world.
Mistakes Are Inevitable
Perfection is an unattainable target, and the truism that mistakes are inevitable is especially apt in diagnostic imaging [8]. AI has the lure of enabling hospitals to handle significantly increased volumes of patient imaging: an AI algorithm never tires, never sleeps, and works dramatically faster than a human being. But with increased volume, the absolute number of mistakes will surely increase, even if the error rate decreases. Even an algorithm with 99% accuracy (a level human radiologists cannot sustain) still errs once in every 100 interpretations, and that is not a small number when AI is reading hundreds of additional studies per day, every day of the week. Additionally, even if a perfect AI model were attainable, performance may degrade over time owing to physiologic or other changes in the data used for inference relative to the original training data (i.e., data drift) [9]. Eventually, an AI algorithm will make a mistake that causes serious injury, and the legal system will aim to make that injured patient whole through civil litigation or settlement. Tort law is the branch of civil liability under which a person damaged by the mistake of another may seek restitution. Tort doctrines that may be implicated by AI in health care include medical malpractice, vicarious liability, and products liability [10]. Which of these doctrines is ultimately determined to govern liability may meaningfully influence the advancement of AI and its potential.
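To make the volume effect concrete, the back-of-the-envelope arithmetic below contrasts a human reader with a higher-accuracy, higher-volume algorithm. The study volumes and accuracy figures are illustrative assumptions, not data from this article; the point is only that a lower error rate combined with a much larger volume can still yield more total errors.

```python
# Illustrative arithmetic only: the error rates and study volumes below are
# assumed for the sake of example, not reported figures.

def expected_errors_per_week(studies_per_day: int, accuracy: float, days: int) -> float:
    """Expected number of erroneous interpretations over a given period."""
    return studies_per_day * (1.0 - accuracy) * days

# A human radiologist: fewer studies, 5 days/week, lower assumed accuracy.
human_errors = expected_errors_per_week(studies_per_day=100, accuracy=0.97, days=5)

# An AI algorithm: many more studies, every day of the week, higher assumed accuracy.
ai_errors = expected_errors_per_week(studies_per_day=1000, accuracy=0.99, days=7)

print(f"Human: ~{human_errors:.0f} errors/week")  # ~15
print(f"AI:    ~{ai_errors:.0f} errors/week")     # ~70: lower rate, more total errors
```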
The Law Favors Artificial Intelligence as a Radiologist's Tool Rather Than as a Radiologist's Colleague
Fundamentally, the legal handling of AI will hinge on the degree of autonomy exercised by the AI software. If the primary use of AI is simply decision support to highlight findings for the radiologist, who thereafter makes the final determinations and issues a report, the issues are simple. The radiologist who makes the final determination bears the liability risk. This is essentially how CAD software, used to highlight regions of concern initially in mammography and increasingly in other aspects of imaging, is used. CAD software serves to make a radiologist more accurate, and possibly faster. But essentially it plays only an ancillary role as a second set of eyes, rather than actually interpreting imaging or making diagnostic decisions. This form of AI can be seamlessly incorporated into radiology practice, and the limited tort law implications encourage keeping AI in this role. The safest approach, from a liability perspective, is to limit AI to being a confirmatory tool to support a radiologist's decision-making processes rather than to use it as a primary decision-making actor [11]. Medical malpractice law covering physicians is well established and predictable; health care providers, insurers, patients (and their lawyers), and the courts are cognizant of the expectations and risks. Laws related to AI autonomy are not yet established; this uncertainty creates an inherent bias toward limiting the role of AI to that of a tool and holding the human user—the radiologist—primarily responsible.
Even this limited use of AI is not without costs, however, as the use of AI to highlight regions of concern on images may create extra work for physicians to explain their reasoning when they disregard AI findings and may create additional liability should an ignored finding ultimately be confirmed correct. A physician who disregards a CAD finding that proves to be significant may thereafter need to justify this omission in front of a jury, and the CAD findings would become the built-in expert witness against the radiologist [12].
Artificial Intelligence as an Independent Assistant
As AI algorithms become more complex, AI may begin to serve as an assistant rather than a tool, occasionally acting independently though under the periodic supervision of a trained radiologist. One can envision a situation in which AI algorithms are leveraged in this way during off hours or in high-volume settings. Essentially, the AI algorithm would be acting as the radiologist's agent or subordinate. For human agents and assistants, such an arrangement can bring into play the legal doctrine of vicarious liability (respondeat superior, or “let the master answer”), a form of imputed negligence whereby the negligence of an assistant is attributed to the supervisor, notwithstanding that the supervisor may not be at personal fault [13]. For example, if a radiologist asks a technologist to image a patient and that technologist negligently injures the patient in the process, the radiologist as the supervisor may be at risk of imputed liability for that injury, notwithstanding that this radiologist was in a reading room elsewhere and did not personally interact with that patient.
So, too, an AI algorithm acting autonomously and interpreting imaging could in theory be viewed as functionally comparable to an employee of the facility, such that its negligence could be attributed to its supervising radiologist or employer. As algorithms become more complex and independent, this may become the courts' preferred approach, resting on the legal fiction that the supervising radiologist, who set the AI in motion and reaps its benefits, exercises control over it. This approach allows courts to designate a human being as the party ultimately responsible and, from a public policy perspective, to make the injured patient whole while ascribing liability to a party with the greatest ability to absorb the costs. The algorithm would generate liability for a human defendant who bears the risk, avoiding the murky scenario of patients suing algorithms directly.
Artificial Intelligence as the Treating Radiologist: Medical Malpractice Implications
It has been suggested that AI will not only assist but also eventually replace radiologists (or generate characterization or prediction outputs not readily verifiable by a human supervisor). From a legal standpoint, this creates a significant degree of complexity. When a patient is injured by a radiology error, the patient may seek damages by filing a civil lawsuit against that radiologist for medical malpractice. To prevail as the plaintiff, this patient must prove four key elements against the defendant-radiologist: that the radiologist owed the patient a legal duty of care; that there was a breach of this duty; that there was resultant injury or damages; and that the radiologist's breach was the proximate cause of the injury or damages [14]. For there to be a duty, a physician-patient relationship must be shown [14]. For a radiologist, this occurs when the patient arrives for an imaging study, whether or not the radiologist has any direct interaction with the patient [14]. Once a physician-patient relationship is established, the radiologist owes a legal duty to practice within the standard of care for a radiologist in that setting.
A radiologist breaches this legal duty when the expected standard of care is not met. The standard of care is the degree of care that a reasonably prudent radiologist would be expected to exercise under the same or similar circumstances [14]. The issue of liability is one of reasonableness: what would a reasonably prudent radiologist do in this situation? This standard of care will largely be established in the context of the courtroom with expert witness testimony, whereby other radiologists opine as to what, in their professional opinion, would be a reasonable action in this situation [14]. The standard of care may also be based on clinical practice guidelines, government regulations, or an institution's policies and procedures, which may be introduced directly in court as a learned treatise or used by the expert witness as a basis for the expert's opinion [14].
To prove causation, the patient must show that the radiologist's negligence was the actual cause of the damages—that had the radiologist acted differently, the harm would not have occurred [14]. For example, a patient whose radiologist missed a malignancy may be able to show causation if that malignancy caused a detriment but may not be able to show causation if the patient's demise was thereafter caused by an unrelated traffic accident.
Finally, the patient needs to prove damages—that the patient was actually harmed by the error [14]. The injured patient bears the burden of proof and must prove each element by a preponderance of the evidence (i.e., more probably than not, or a greater than 50% threshold) [15].
But how does medical malpractice even work in the setting of an autonomous algorithm? Is there a similar physician-patient relationship when the “physician” is an algorithm? How is an AI algorithm held to the reasonably prudent radiologist (or perhaps “reasonably prudent algorithm”) standard, and who could serve as expert witness to determine this standard? Is there a different standard of care or expectation for an algorithm, and does the expectation change if the algorithm is performing tasks that go beyond the capabilities of the typical human radiologist (e.g., predicting optimum therapy options or responses based on imaging or genomic lesion characterization)? Ultimately, the facility hosting the AI likely would bear liability, and malpractice principles would no longer be applicable or even defensible; the circumstance would essentially become a form of enterprise liability [16].
A novel solution to the AI liability uncertainty has been proposed [10, 17]: confer “personhood” on the AI algorithm and let it be sued directly, just like a radiologist. Chung et al. [17] noted that contemporary iterations of AI, including IBM's Watson, are functionally analogous to a medical student: they have a visible role in patient care, some degree of independent patient interaction, and less-than-constant supervision, but they lack the final decision-making authority of an attending physician; as such, they should carry insurance and incur liability similar to those of a medical student. Although conferring personhood on AI may have unanticipated ramifications, the law has created similar legal fictions in other situations; for example, corporations have been assigned a number of the obligations of personhood under U.S. tax laws [18]. Larger hurdles arise as the AI algorithm takes on roles beyond those expected of a medical student, and technologic advancement is unlikely to stop at that stage.
An injured patient tends to be a sympathetic witness in the eyes of a jury, whereas an AI algorithm would be unsympathetic: faceless emotionless robots make for bad defendants. A skilled plaintiff's attorney would elicit a mental image of machines running amok, including cold passionless robots making life-and-death judgments. Jurors, inclined to fear technology from a lifetime of science fiction dystopias, would likely “throw the book” at the defendant. The idea that a medical center would replace a caring and compassionate physician with a robot such as HAL 9000 [19] to maximize revenue would not play well to a jury.
Artificial Intelligence and Products Liability
Products liability is the field of tort law governing liability of those manufacturing and selling products and goods to customers [13]. Products liability law is implicated when a person is injured by a medical product that is not reasonably safe because of a defect in its manufacture, design, or warning and labeling [13]. Although this area of tort law, on its surface, would seem to apply to AI in radiology, putting risks of liability squarely on the AI software developers and vendors rather than their radiologist customers, several obstacles exist. First, AI is typically a software program rather than a tangible product, and the law has been reluctant to apply products liability law to software [20]. This perspective may differ when the software is permanently integrated into a tangible product, as in the situation of autonomous vehicles (often called self-driving cars); thus, this distinction could diminish for AI software that PACS vendors incorporate into their underlying systems. At present, however, software developers are creating and marketing algorithms primarily as stand-alone installable products. Furthermore, as the aim of machine learning is for an algorithm to learn and improve from experience and exposure to datasets over time, the originally purchased algorithm may not function identically to the one that ultimately causes the harm months or years later. To hold a developer liable for injury caused by an algorithm that has evolved over time at a facility outside of the developer's supervision and control may not be just.
An important exception to products liability is the learned intermediary exception [20]. This exception provides that to the extent that a radiologist, as a learned intermediary, has a chance to review an imaging report and catch errors before the report is released and the patient is injured, that radiologist bears the brunt of the liability; a radiologist or imaging center cannot serve as a passive messenger of negligent AI findings [20]. A radiologist's intervening failure to catch the error would effectively absolve the AI developer of liability for its upstream negligence. But an expectation that radiologists regularly audit AI interpretations limits the utility of AI and quells any vision of fully autonomous radiology AI software.
Additional nuances of the products liability approach may arise if AI software is developed by an external vendor but then customized or trained in house by a radiology department. In this situation, concepts of enterprise liability would apply, and the customizing facility may also bear liability [16]. Finally, AI software is often developed by small startup companies, in contrast to large hospitals and insurers, which have deep pockets, such that both the small and large entities will likely be named in any lawsuit. For policy reasons, courts may apportion liability in a way that would make the plaintiff whole, tending to place the burden on the larger hospital. This approach may not apply when a larger corporation, such as an algorithm aggregator, holds ownership interest.
The FDA and Artificial Intelligence Algorithms as Medical Devices
An additional area of uncertainty involves the approach that the FDA will take in treating autonomous AI-containing products, and the algorithms themselves, as medical devices. The doctrine of preemption may apply in this scenario. This doctrine provides that between federal and state laws governing a product, the federal laws take precedence over those of the state [16]. Once the FDA has cleared a medical product for use in certain applications, if a hospital or radiologist (with the patient's assent) appropriately uses the product as such, then state malpractice law would be preempted, and the use would not be deemed negligent, notwithstanding the outcome [16, 21]. However, courts have inconsistently applied preemption [16]. Furthermore, machine learning algorithms may not be static and as such are unlikely to be given blanket clearance by the FDA for all future iterations. Jorstad [22] suggested that to obtain FDA approval for an algorithm, developers may need to use locking, whereby once an algorithm has been sufficiently trained, its capacity to further evolve is frozen (thereby presumably placing the product in the CADx category); however, this approach would limit the ultimate benefit of AI and create a hard ceiling for its evolution.
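The “locking” approach attributed to Jorstad can be illustrated with a brief, library-agnostic sketch. The class and method names below are hypothetical stand-ins for whatever training pipeline a vendor actually uses; the sketch shows only the regulatory idea that, once validated, a model's capacity to keep learning is frozen so its marketed behavior cannot drift from the version that was reviewed.

```python
# Hypothetical illustration of "locking" a continuously learning algorithm:
# after validation, further training is disabled so deployed behavior matches
# the version evaluated for clearance. Names here are illustrative only.

class DiagnosticModel:
    def __init__(self) -> None:
        self.locked = False

    def update(self, new_cases: list) -> None:
        """Incorporate new training cases (continuous learning)."""
        if self.locked:
            raise RuntimeError(
                "Model is locked; post-clearance updates would require a new submission."
            )
        # ... fit the model on new_cases ...

    def lock(self) -> None:
        """Freeze the model once it has been sufficiently trained and validated."""
        self.locked = True

    def predict(self, study) -> list:
        """Interpretation behavior is identical before and after locking."""
        # ... return findings for the imaging study ...
        return []
```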
The FDA currently categorizes medical devices into one of three classes based on risk: class I comprises devices with minimal potential for harm; class II comprises intermediate-risk devices; and class III comprises high-risk devices important to health or to sustaining life [23]. Under current regulations, both CAD and CADx are categorized as class II devices [24, 25]. It remains unknown how, if at all, the FDA will address and categorize autonomous continuously learning algorithms. The FDA [26] has indicated that it may require a predetermined change control plan incorporating greater transparency in premarket submissions as well as real-world monitoring and updates, in combination with ongoing FDA oversight. At a virtual public workshop in 2021, the discussion of FDA regulation appeared to emphasize labeling and package inserts for AI-enabled medical devices, suggesting a focus on devices that incorporate AI rather than on stand-alone algorithms [27]. In comments to the FDA, the American College of Radiology advocated for improved transparency, access to performance and demographic data from developers, and a redefinition of adverse events, noting that the current narrow focus underreports AI errors corrected by radiologists and artificially inflates AI performance statistics [28].
Future Hurdles and Open Questions
The legal standing of AI remains cloudy, and legal complexities increase with increasing autonomy. Although the function and risks of autonomous vehicles are not entirely analogous to those of radiologic AI, a number of lawsuits involving accidents with autonomous vehicles have been filed [29, 30], and the courts' treatment of these cases may be instructive. Considering that the software is integrated into a tangible product (i.e., the car), will these be handled as cases of driver negligence or of products liability? Some states have already enacted legislation related to autonomous vehicles and have imposed statutory liability on the vehicles' human operators, with the goal of offering greater legal predictability without stunting technologic development [31]. Even in the absence of an explicit statutory obligation, tort principles in this situation focus on the extent of control: whether the vehicle operator reasonably could have avoided the accident. If a jury would be troubled by a driver sleeping while an autonomous vehicle crossed a school zone unsupervised, would its concerns be different if a radiologist slept while an AI algorithm misdiagnosed a tumor?
Debate is ongoing regarding the appropriate integration of AI tools with human decision makers (including nonradiologists), the risks of ignoring AI outputs as AI use becomes the standard of care, and the liability implications of overreliance on AI tools [32]. AI law remains in its early stages, and uncertainty persists regarding how courts will allocate liability for AI mistakes in radiology and the impact that such costs may have on AI development. Proponents of AI should recognize the complexities and hurdles of the legal system.
Navigating Legal Challenges While Supporting Further Development of Artificial Intelligence
What can be done to ensure that the potential of AI is not thwarted in its infancy by the underlying legal issues? To the extent possible, proponents should lobby for legislative action rather than allow the courts to decide AI issues. Letting the courts delineate AI liability could have catastrophic consequences for this fledgling industry: a court's priority of making an injured plaintiff whole while apportioning liability to those best able to bear the costs is generally not an approach that fosters robust technologic advancement. The legislature may be better positioned to balance interests and enact laws that protect the public without hindering technologic development. Many states have begun to legislate in the area of autonomous vehicles, and similar regulation of AI in health care might be beneficial.
An approach similar to that of the National Vaccine Injury Compensation Program (NVICP) may also be beneficial in this setting. In the 1980s, vaccine manufacturers, concerned about the impact of litigation, threatened to withdraw from the industry in favor of lower-liability endeavors [33]. The U.S. legislature, to ensure a steady supply of childhood vaccines, enacted the no-fault NVICP, whereby claims that meet certain specifications are handled in an expedited manner by special masters working under the U.S. Court of Federal Claims, who administer awards from a fund generated by a tax on vaccines [33]. This approach proved both protective of the vaccine industry and positively received by many plaintiffs, who generally obtained awards rapidly and with fewer evidentiary hurdles [33]. Robust development of AI in health care may be sufficiently important to merit similar protective legislation, and AI products could be similarly taxed to create a fund from which awards could be administered.
AI is undergoing rapid integration into radiology practice, driven by the appeal of improvements in diagnostic accuracy and cost-effectiveness. Although the legal implications of simple applications of AI as a radiology tool are straightforward overall, the legal ramifications of greater AI autonomy are thus far incompletely delineated. Current technologic hurdles impeding integration of advanced AI solutions into radiology practice will gradually be overcome. However, the accompanying legal hurdles and complexities are substantial and, depending on how they are handled, could leave technologic potential untapped.
References
1. U.S. National Park Service website. The legend of John Henry: Talcott, WV. www.nps.gov/neri/planyourvisit/the-legend-of-john-henry-talcott-wv.htm. Updated January 22, 2020. Accessed November 16, 2021
2. Asimov I. I, Robot. Fawcett Publications, 1950
3. Cameron J, Hurd GA. The Terminator. Orion Pictures, 1984
4. Walter M. If you think AI will never replace radiologists—you may want to think again. Radiology Business website. www.radiologybusiness.com. Published May 14, 2018. Accessed October 16, 2021
5. Sims SD. Presidential candidate Andrew Yang: on the impact of automation on the future. Medium website. stevedsims.medium.com. Published February 28, 2019. Accessed October 16, 2021
6. Reeder K, Lee H. Impact of artificial intelligence on US medical students' choice of radiology. Clin Imaging 2022; 81:67–71
7. Mezrich JL. Demystifying medico-legal challenges of artificial intelligence applications in molecular imaging and therapy. PET Clin 2022; 17:41–49
8. Itri JN, Tappouni RR, McEachern RO, Pesch AJ, Patel SH. Fundamentals of diagnostic error in imaging. RadioGraphics 2018; 38:1845–1865
9. Duckworth C, Chmiel FP, Burns DK, et al. Using explainable machine learning to characterise data drift and detect emergent health risks for emergency department admissions during COVID-19. Sci Rep 2021; 11:23017
10. Sullivan HR, Schweikart SJ. Are current tort liability doctrines adequate for addressing injury caused by AI? AMA J Ethics 2019; 21:E160–E166
11. Price WN 2nd, Gerke S, Cohen IG. Potential liability for physicians using artificial intelligence. JAMA 2019; 322:1765–1766
12. Mezrich JL, Siegel EL. Legal ramifications of computer-aided detection in mammography. J Am Coll Radiol 2015; 12:572–574
13. Keeton WP, ed. Prosser and Keeton on the law of torts, 5th ed. West Publishing, 1984
14. Eisenberg RL. Radiology and the law: malpractice and other issues. Springer-Verlag, 2004
15. Mezrich JL. Hiding in the hedges: tips to minimize your malpractice risks as a radiologist. AJR 2019; 213:1037–1041
16. Jha S. Can you sue an algorithm for malpractice? It depends. STAT website. www.statnews.com/2020/03/09/can-you-sue-artificial-intelligence-algorithm-for-malpractice/. Published March 9, 2020. Accessed December 3, 2021
17. Chung J, Zink A. Hey Watson, can I sue you for malpractice? Examining the liability of artificial intelligence in medicine. Asia Pac J Health L Ethics 2018; 11:51–80
18. Internal Revenue Service website. Classification of taxpayers for U.S. tax purposes. www.irs.gov/individuals/international-taxpayers/classification-of-taxpayers-for-us-tax-purposes. Accessed December 3, 2021
19. Clarke AC. 2001: A Space Odyssey. Metro-Goldwyn-Mayer and Stanley Kubrick Productions, 1968
20. Harned Z, Lungren MP, Rajpurkar P. Machine vision, medical AI, and malpractice. Harv J Law Technol Dig. jolt.law.harvard.edu/digest/machine-vision-medical-ai-and-malpractice. Published March 15, 2019. Accessed December 4, 2021
21. Riegel v Medtronic, Inc, 552 US 312 (2008)
22. Jorstad KT. Intersection of artificial intelligence and medicine: tort liability in the technological age. J Med Artif Intellig 2020; 3:1–28
23. U.S. FDA website. Overview of medical device classification and reclassification. www.fda.gov/about-fda/cdrh-transparency/overview-medical-device-classification-and-reclassification. Published December 19, 2017. Accessed December 4, 2021
24. 21 CFR §892.2060(b) (2020)
25. 21 CFR §892.2070(b) (2020)
26. U.S. FDA website. Artificial intelligence and machine learning in software as a medical device. www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device. Updated September 22, 2021. Accessed December 4, 2021
27. Schneider ME. Industry, clinician groups have different wish lists for AI/ML-enabled device labels. Regulatory Affairs Professionals Society website. www.raps.org/news-and-articles/news-articles/2021/11/industry-clinician-groups-have-different-wish-list. Published November 18, 2021. Accessed December 4, 2021
28. O'Connor M. ACR wants the FDA to enhance artificial intelligence device transparency: 4 suggestions. HealthImaging website. www.healthimaging.com/topics/ai-emerging-technologies/acr/acr-ai-device-transparency-fda. Published November 23, 2021. Accessed December 4, 2021
29. Shepardson D. GM settles lawsuit with motorcyclist hit by self-driving car. Reuters website. www.reuters.com/article/idUSL2N1T31YT. Published June 18, 2018. Accessed December 4, 2021
30. Siddiqui F. Uber reaches settlement with family of victim killed after being struck by one of its self-driving vehicles. Washington Post website. www.washingtonpost.com. Published March 29, 2018. Accessed December 4, 2021
31. Silverman C, Goldberg P, Wilson J, Goggins S. Torts of the future: autonomous vehicles: addressing the liability and regulatory implications of emerging technologies. U.S. Chamber of Commerce, Institute for Legal Reform website. www.ali.org/media/filer_public/6a/26/6a26ebc5-3dfa-4c60-b1ba-7e596819ef43/dc-656837-v1-torts_of_the_future_autonomous_emailable.pdf. Published May 2018. Accessed December 4, 2021
32. Gaube S, Suresh H, Raue M, et al. Do as AI say: susceptibility in deployment of clinical decision-aids. NPJ Digit Med 2021; 4:31
33. Mezrich J. Proving a claim under the National Vaccine Injury Compensation Program. In: American jurisprudence, 3rd series, vol. 23. Thomson Reuters, 1993