By Dr. Matthew Bennett. Copyright 2000.
Informed consent as a guiding principle lies close to the heart of modern professional ethics. Although invoked as a basic directive across many professional fields, informed consent as an articulated ethical prescription evolved within the context of American medicine, arising from a moral concern with the basic human right of self-determination. As such, the doctrine of informed consent rests upon some of the most fundamental assumptions about human rights. As a modern ethical formulation, informed consent was articulated partly in response to some of the more egregious human rights violations of the 20th century, and its history may be taken as an index of the development of modern attempts to codify and institutionalize basic human rights.
Concerns with informed consent were first registered within the American legal system in the landmark case Schloendorff v. Society of New York Hospital, in which Justice Benjamin Cardozo concluded “[E]very human being of adult years and sound mind has a right to determine what shall be done with his own body…” (1914, p. 92). This case resulted from a surgical procedure performed upon a patient who had previously refused the operation. Years later, the Nuremberg Code (1947) was composed in reaction to inhumane research practices committed by Nazi scientists during World War II. Three basic tenets of the Nuremberg Code treat the issue of informed consent: (1) voluntary consent is essential for human participants in research; (2) the human subject must be free to discontinue participation if desired; and (3) the principal investigator must be prepared to end the research procedures if there is probable cause to believe that continuation might result in injury, disability, or death of a human subject. Specifically, the code requires that subjects of human research “should have legal capacity to give consent; should be so situated as to be able to exercise free power of choice, without the intervention of any element of force, fraud, deceit, duress, over-reaching, or other ulterior form of constraint or coercion; and should have sufficient knowledge and comprehension of the subject matter involved to enable him to make an understanding and enlightened decision” (p. xxiv). The Nuremberg Code was composed by an American military tribunal, and its legal force has never been well established; however, the document went on to inform subsequent ethics formulations internationally. In the United States, the ethics of informed consent were debated primarily in the courts and in individual institutional review boards until the federal government began to codify them in 1974. The phrase “informed consent” itself was first used in Salgo v. Leland Stanford Jr. University Board of Trustees (1957).
An important shift in the legal application of informed consent occurred during the middle decades of the 20th century. Originally, the legal basis for enforcing informed consent requirements was battery: failure to conform to informed consent strictures amounted to unlawful touching of another individual. The Kansas Supreme Court applied negligence theory rather than battery theory in Natanson v. Kline (1960), as the plaintiff in the case had alleged only negligence. This decision also advanced informed consent theory by establishing that true informed consent required a “thorough-going self-determination” (p. 1104). Under the earlier standard established in Schloendorff v. Society of New York Hospital, the onus of soliciting relevant information fell to the patient; in a sense, only the consent element of today’s concept of informed consent had been established. It was left to the patient to spontaneously decide for or against a proposed medical procedure, and a physician (or researcher) was negligent only when carrying on with a procedure against the expressed wishes of the patient; the physician carried no responsibility to communicate the risks and benefits beforehand. Moreover, the prevailing standard held that the physician described potential risks and benefits according to professionally established norms; in other words, there was no requirement that physicians describe risks according to the needs of the individual patient. Canterbury v. Spence (1972) articulated a new standard, requiring disclosure tailored to the communication needs of each patient or subject: “A risk is ... material when a reasonable person, in what the physician knows or should know to be the patient's position, would be likely to attach significance to the risk or cluster of risks in deciding whether or not to undergo the proposed therapy” (p. 786). The Wisconsin Supreme Court affirmed this principle in Scaria v. St. Paul Fire & Marine Insurance Co. (1975), stating that physicians could not rely upon the “self-created custom of the profession” in establishing informed consent (p. 653). In Moore v. Regents of the University of California (1990), the informed consent standard was broadened to include possible conflicts of interest surrounding the proposed therapy or procedure. Blackmon (1998) notes that “the existence of a motivation for a medical procedure unrelated to the patient's health is a potential conflict of interest and a fact material to the patient's decision” (p. 377). This point underlines the reach of contemporary informed consent requirements, which extend to the physician’s or researcher’s motivation for conducting the proposed procedure.
The National Research Act was signed into law on July 12, 1974, and represented the first attempt to codify human subjects ethics with the force of law. The Act created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, whose deliberations resulted in the Belmont Report (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1975). This formulation specifies that “Respect for persons requires that subjects, to the degree that they are capable, be given the opportunity to choose what shall or shall not happen to them” (p. 2). The Belmont Report detailed three essential elements of acceptable conduct in research with human subjects: respect for persons (recognizing the dignity and autonomy of individuals, with special protection for those with diminished autonomy), beneficence (maximizing the anticipated benefits of research while minimizing possible harm), and justice (requiring the fair distribution of the benefits and burdens associated with research). The Department of Health and Human Services used the criteria established in the Belmont Report to frame rules for institutions conducting research with human subjects. The resulting set of federal regulations was codified as 45 CFR 46 (Protection of Human Subjects, 1983). In 1991 the existing federal regulations were adopted for universal application by all federal agencies and all researchers sponsored by those agencies. These regulations became known as the Common Rule and remain in place in revised form.
The evolution of the concept of informed consent from the Nuremberg Code to the Belmont Report evidenced an important ideological shift: while the Nuremberg Code stressed individual responsibility for moral choices in human experimentation, the new United States federal regulations shifted responsibility for such choices to institutional agencies (Monagle, 1998). The Common Rule sets out detailed guidelines for the formation of Institutional Review Boards (IRBs), which are to determine appropriate safeguards and procedures within any institution conducting research with human subjects, including the development of informed consent documents. The Food and Drug Administration (1998) publishes information sheets explaining the federal regulations relevant to the creation and conduct of IRBs.
The National Bioethics Advisory Commission (NBAC), created by Executive Order No. 12975 (1995), cited concern that the Common Rule does not extend to all members of the general public, since its jurisdiction applies only to research conducted or sponsored by federal agencies. In general, agencies conducting research with human subjects in the United States without federal sponsorship or supervision establish their own IRBs, which then generally follow the guidelines established in the Common Rule. However, there are currently no legal requirements to guide researchers in such instances except where state and local laws might apply. Additionally, professional organizations such as the American Psychological Association, the American Psychiatric Association, and the American Academy of Pediatrics have codified informed consent standards in their own codes of ethics, enforceable indirectly by state licensure boards. A few states have enacted informed consent statutes, and an attempt to set an international standard has been undertaken in successive versions of the Helsinki Declaration (World Medical Association, 1997).
The American Psychological Association (APA) has articulated principles of informed consent for psychologists (1992). The APA guidelines contain general ethical principles that relate to the doctrine of informed consent, such as the protection of the rights, dignity, and welfare of individuals. They also contain specific references to informed consent in therapy, research, and filming or recording. The APA principles contain language similar to other formulations of bioethics in federal regulations and in the ethical strictures of other disciplines that conduct research on or treatment of human beings. Like other ethical formulations, the APA code acknowledges variation in the ability to provide consent and discourages the use of undue influence to secure participation in research or treatment (see Standard 6.14).
Beauchamp and Childress (1994) have noted an additional evolutionary change in emphasis among medical and research codes and institutional regulations: originally, informed consent requirements were concerned primarily with avoiding harm to subjects, including the avoidance of unfairness and exploitation. More recently, informed consent requirements have become more concerned with the protection of autonomous choice, characterized by the authors as “a loosely defined goal that is often buried in vague discussions of protecting the welfare and rights of patients and research subjects” (p. 142). Taking up the currently accepted standards focusing on autonomous choice, Beauchamp and Childress describe two senses of informed consent. The first is true autonomous choice, reflected in the expressed volition of the potential subject. The second comprises the social rules of consent, which reflect institutional standards and formulaic ways of addressing informed consent, such as written forms or stylized language. The authors assert that following the institutional and legal formulations for obtaining informed consent does not necessarily satisfy the doctrine of autonomous choice.
Lidz, Appelbaum, and Meisel (1988) similarly highlight the distinction between obtaining “consent” by means of a form or a single verbal exchange on one hand and “consent” as a continuous dialogue between subject and researcher on the other. These authors term the former the “event model” of informed consent. They assert that the event model “often becomes an empty ritual in which patients are presented with complex information that they cannot understand and that has little impact on their decision making” (p. 1385). Lidz et al. therefore recommend the process model, which “integrates informed consent into the physician-patient relationship as a facet of all stages of medical decision making” (p. 1386). The process model of informed consent carries the disadvantage that it is less amenable to standardized documentation and institutional review, which, as noted above, are increasingly emphasized as mainstays of informed consent policymaking. Additionally, the process model is difficult or impossible to implement in brief or single-episode interventions, in which there is no continuous relationship between the subject and the researcher or physician.
A similarly bipolar model of interpreting informed consent policy can be derived from the history of legal precedents and decisions. In Natanson v. Kline (1960), mentioned above, the court stated that “[T]he physician's choice of plausible courses should not be called into question if it appears, all circumstances considered, that the physician was motivated only by the patient's best therapeutic interests and he proceeded as competent medical men would have done in a similar situation” (p. 1106). This view of informed consent carries the legal implication that the ultimate standard by which sufficient consent is to be determined is comparison with professional practice; this approach is termed the “reasonable physician” model and is generally understood as assuming that “the doctor knows best” in deciding how much of what information to disclose to patients or subjects. The alternate approach, the “reasonable patient” model, articulated here in Canterbury v. Spence (1972), shifts the standard to what a reasonable patient would want to know about a given procedure or therapy: “The decision to unveil the patient's condition ... is ofttimes a non-medical judgment and, if so, is a decision outside the ambit of the special standard. Where that is the situation, professional custom hardly furnishes the legal criterion for measuring the physician's responsibility to reasonably inform his patient of the options and the hazards as to treatment” (p. 785). Engelhardt (1996) notes that American courts have been favoring the reasonable patient model (which Engelhardt calls “the objective standard,” p. 313) over the reasonable physician model. The evolution of the “reasonable patient” model has been associated with a corresponding shift in the courts from the use of expert witnesses to the use of juries of peers in deciding lawsuits involving informed consent.
Etchells, Sharpe, Walsh, Williams, and Singer (1996) note that consent comprises three elements: disclosure, capacity, and voluntariness. They describe “disclosure” as relevant information provided by the clinician in such a way as to be comprehended by the patient. “Capacity” describes the patient's ability to understand the disclosed information and its reasonably foreseeable consequences. Appelbaum and Grisso (1988) assert that capacity includes the ability to evidence a choice, to understand relevant information, to appreciate a situation and its consequences, and to manipulate information rationally. “Voluntariness” specifies that the decision must be given “freely, without force, coercion or manipulation” (Etchells et al., 1996, p. 178). Engelhardt (1996) describes voluntariness simply as freedom, and identifies three senses in which freedom can be understood: being able to choose freely as a moral agent, being unrestrained by any commitment or authority that would otherwise preempt free choice, and being truly free from coercion. Etchells et al. (1996) also distinguish between explicitly and implicitly given informed consent; the former is characterized by formal consent declarations, such as standardized forms, and the latter by informed consent indicated through behavior (such as rolling up one’s sleeve to allow a venipuncture).
Agich (1998) described significant differences in the application of informed consent theory between clinical and research settings. In clinical practice, legal standards of informed consent are applied retrospectively; that is, a court may analyze an individual case of informed consent only following a legal complaint. In research settings, by comparison, policy requirements are reviewed for the duration of the research project by an IRB or similar authorizing agency. According to Agich, this difference of application reflects a fundamental difference in the relationship between the originator and the receiver of informed consent. In clinical settings, the physician or therapist assumes a caregiver role in which the patient’s health and safety are not only of paramount importance but in fact represent the ultimate goal. In research, the subject’s well-being, although held as critical in cases where human rights are respected, is of tangential importance relative to the goals of the research, which typically is of more immediate use to the researcher than to the subject. In such cases, ethical consideration of risks and benefits in research becomes increasingly important, and principles of informed consent become the cornerstones of the ethical application of science.
A history of informed consent

References
Agich, G.J. (1998). Human experimentation and clinical consent. In J.F. Monagle (Ed.), Health care ethics: Critical issues for the 21st century. Gaithersburg, MD: Aspen Publications.
American Psychological Association. (1992). Ethical principles of psychologists and code of conduct. American Psychologist, 47, 1597-1611.
Appelbaum, P.S., and Grisso, T. (1988). Assessing patients’ capacities to consent to treatment. New England Journal of Medicine, 319, 1635-1638.
Beauchamp, T.L., and Childress, J.F. (1994). Principles of biomedical ethics (4th ed.). New York: Oxford University Press.
Blackmon, W. (1998). The emerging convergence of the doctrine of informed consent and the judicial reinterpretation of the Employee Retirement Income Security Act. Journal of Legal Medicine, 19, 377-385.
Canterbury v. Spence, 464 F.2d 772 (D.C. Cir. 1972).
Engelhardt, H.T., (1996). The Foundations of Bioethics. New York: Oxford University Press.
Etchells, E., Sharpe, G., Walsh, P., Williams, J.R., and Singer, P.A. (1996). Bioethics for clinicians. Canadian Medical Association Journal, 155, 177-180.
Executive Order No. 12975, 60 Fed. Reg. 52063 (1995).
Food and Drug Administration. (1998). Information Sheets: Guidance for Institutional Review Boards and Clinical Investigators. Rockville, MD: Office of the Associate Commissioner for Health Affairs.
Lidz, C.W., Appelbaum, P.S., and Meisel, A. (1988). Two models of implementing informed consent. Archives of Internal Medicine, 148, 1385-1389.
Monagle, J.F. (1998). Health care ethics: critical issues for the 21st century. Gaithersburg, MD: Aspen Publications.
Moore v. Regents of the University of California, 271 Cal. Rptr. 146 (Cal. 1990).
Natanson v. Kline, 350 P.2d 1093 (Kan. 1960).
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1975). The Belmont Report: Ethical principles and guidelines for the protection of human subjects of research (DHEW Publication OS 78-0112). Washington, DC: US Government Printing Office.
Nuremberg Code (1947). In A. Mitscherlich and F. Mielke, Doctors of infamy: The story of the Nazi medical crimes. New York: Schuman, 1949.
Protection of Human Subjects, 45 C.F.R. § 46 (1983).
Salgo v. Leland Stanford Jr. University Board of Trustees, 154 Cal. App. 2d 560, 317 P.2d 170 (1957).
Scaria v. St. Paul Fire & Marine Insurance Co., 227 N.W.2d 647 (Wis. 1975).
Schloendorff v. Society of New York Hospital, 211 N.Y. 125, 105 N.E. 92 (1914).
World Medical Association. (1997). Declaration of Helsinki: Recommendations Guiding Physicians in Biomedical Research Involving Human Subjects. Reprinted in Journal of the American Medical Association, 277, 925-926.