IRBs often set an arbitrary grade-level requirement, such as sixth or eighth grade, at which consent forms are supposed to be written. A recent review1 of 114 Web sites of U.S. medical schools found readability standards between fifth and tenth grade, although the actual grade levels of their consent form templates averaged almost three grades higher than their recommended levels.
The assumption behind such recommendations is that subjects who can't understand a consent form written at a third-year college level will understand one written at an eighth-grade reading level. Sometimes this recommendation comes from data showing that average Americans read at an eighth-grade level; sometimes it comes from the intuitive belief that anything written at a lower grade level simply must be easier to understand. But this recommendation is flawed for at least two reasons. First, writing at an eighth-grade level is very hard; the Flesch Reading Ease Score2 describes materials at a sixth- to eighth-grade reading level as having 14 to 17 words per sentence and 139 to 147 syllables per 100 words, which translates into many one- or two-syllable words. Second, how should writers measure a consent form's grade level? Many IRB Web sites recommend readability software, often suggesting the Flesch-Kincaid formula in Microsoft Word. But Microsoft's version of that formula is flawed: any score at grade 12 or above is reported as grade 12, and the reported grade level is affected by the document's format.3 Although the Flesch-Kincaid formula scores up to grade 17, Microsoft's version is too unreliable and inaccurate to recommend, since researchers cannot use it to verify the grade level of a consent form.
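For readers who want to see what such a formula actually computes, here is a minimal sketch of the published Flesch-Kincaid grade-level formula (0.39 × average sentence length + 11.8 × average syllables per word − 15.59). The code is illustrative only; the naive vowel-group syllable counter is an assumption for this sketch, and commercial readability tools use dictionaries or more careful heuristics, which is one reason different programs report different grade levels for the same text.

```python
import re

def count_syllables(word: str) -> int:
    # Crude approximation: count runs of consecutive vowels.
    # Real readability software uses dictionaries or better heuristics.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """Published Flesch-Kincaid grade-level formula (uncapped)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

sample = ("You are being asked to participate in a research study. "
          "Please read this form carefully before you sign it.")
grade = flesch_kincaid_grade(sample)
print(f"Flesch-Kincaid grade level: {grade:.1f}")
print(f"As capped by Microsoft Word: {min(grade, 12.0):.1f}")
```

An uncapped implementation like this can report grades above 12; Microsoft Word's version cannot, which is why a grade-17 consent form and a grade-12 consent form can look identical in Word.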
Few studies have examined the impact of rewritten materials on reader comprehension. Table 1 summarizes nine studies comparing comprehension of higher-grade-level documents with versions rewritten to a lower grade level. The nine studies, which span more than 20 years, include informed consent forms as well as other rewritten materials.
Table 1. Summary of reading comprehension and reading grade level
One study in progress13 compares an investigator-developed consent form with one developed by focus groups for a U.S. Veterans Administration study on Gulf War Illnesses. Although the focus group made seven significant changes (see Table 2), those changes were not reflected in an independent statistical readability analysis of the two versions (using Prose: The Readability Analyst, Grammatik 6.0, and WStyle).
Although the two versions were statistically almost identical, the changes recommended by the focus group may still produce differences in understanding; whether they do may depend on whether the researchers' questionnaire is sensitive enough to detect such differences.
Subject education. Most study populations are college-educated people, who should have better reading and comprehension skills than those without a college education: more years of formal education bring better-developed abstract thinking skills, larger vocabularies, and more experience reading complicated text. For this reason, some consent form comprehension studies may show better understanding simply because their subjects are better educated. What are needed are comprehension studies that include a broader range of subjects, whose educational attainment matches U.S. census data. College-educated subjects may not find easy-to-read consent forms more comprehensible, but subjects with high school or junior high school educations might.
Measuring comprehension. Psychological principles require comprehension measures to be both valid (i.e., they measure what they are supposed to measure) and reliable (i.e., repeated testing produces similar scores). "Face validity," in which an instrument merely looks like it measures what it is supposed to measure, has no scientific value.
Of the nine studies, only two7,10 addressed the content validity of their comprehension measures, and did so in very vague terms. Coyne et al. stated only that "Content validity for the measure was high, as evidenced by the judgments of a panel of experts who reviewed the questionnaire"7 (p. 837), but did not include specifics of those judgments. Cardinal, who used three experts, reported that "Agreement was reached in each of these areas" (p. 296), but did not discuss specifics of that agreement.10 These two papers were also the only ones to address the reliability of their comprehension measures. Without documented validity or reliability data for their comprehension measures, the other studies are scientifically questionable, since they rest only on "face validity."
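To illustrate what a reported reliability figure might look like, here is a minimal sketch of the Kuder-Richardson 20 coefficient, one standard reliability statistic for tests scored right/wrong. The data here are hypothetical, purely for illustration.

```python
import numpy as np

def kr20(responses: np.ndarray) -> float:
    """Kuder-Richardson 20 reliability for dichotomous (0/1) test items.
    responses: a subjects-by-items matrix of 0/1 scores
    (assumes total scores are not all identical)."""
    k = responses.shape[1]
    p = responses.mean(axis=0)                      # proportion answering each item correctly
    total_var = responses.sum(axis=1).var(ddof=1)   # variance of subjects' total scores
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / total_var)

# Hypothetical data: 30 subjects answering 20 items at random.
rng = np.random.default_rng(0)
random_scores = rng.integers(0, 2, size=(30, 20))
print(f"KR-20 for random answering: {kr20(random_scores):.2f}")  # near zero = unreliable
```

Reporting a figure like this, along with how expert judgments of content validity were collected, is precisely what the reviewed studies omit.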
Table 2. Readability of investigator-developed vs. focus group-developed consent form
Different researchers use different methods of measuring comprehension, so findings cannot be compared across studies. Researchers have used true-false questions,11 multiple-choice questions,9,10 or have asked subjects to paraphrase documents and answer questions.4,8
Both true-false and multiple-choice questions are flawed because subjects can get a percentage of answers correct by guessing. Because a true-false test is simply a multiple-choice test with only two possible answers, subjects can get 50% correct by guessing. For multiple-choice questions, subjects can get 20% correct with five possible answers, 25% correct with four, and 33% correct with three. Such methods for measuring subject comprehension may not be sufficiently sensitive to detect true understanding.
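The guessing baseline is simple arithmetic, as the following sketch shows: the expected chance score on an n-item test is n divided by the number of alternatives per question.

```python
def chance_score(n_items: int, k_alternatives: int) -> float:
    """Expected number of correct answers from pure guessing."""
    return n_items / k_alternatives

for k in (2, 3, 4, 5):
    expected = chance_score(20, k)
    print(f"{k} alternatives: {expected:.1f} of 20 correct ({expected / 20:.0%}) by chance")
```

On a 20-item test, this reproduces the figures above: 50% by chance with two alternatives, 33% with three, 25% with four, and 20% with five.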
Since multiple-choice questions include the correct answer among the incorrect alternatives, they measure recognition of the consent form's content, not recall. Consent researchers seem unaware of this important distinction. As a result, comprehension scores may be higher on a multiple-choice test than they would be if subjects were simply asked to tell researchers what they remembered and understood about the consent process. While these issues are not discussed in the informed consent research, they are well known in the psychological testing literature (which closely parallels the testing of consent comprehension) and can be found in undergraduate textbooks such as Anne Anastasi's renowned Psychological Testing.
Minimally acceptable understanding? One study using multiple-choice questions9 found a statistically significant difference in understanding between a Low Reading Level consent form (grade 6) and a High Reading Level consent form (grade 16). But the actual difference in comprehension amounted to little more than one additional correct answer. Subjects with less than a high school education answered 12.88 questions correctly; those who attended college answered 13.95; those who graduated from college answered 14.31. On the test's 21 multiple-choice questions (the number of alternatives was not given), subjects thus answered 61% to 68% correctly. Does that demonstrate "comprehension?" In an academic setting, that would be a "D" grade.
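Those percentages follow directly from the reported means; a quick worked check using the study's own numbers:

```python
# Mean correct answers (of 21 questions) by education level, as reported in the study.
for group, mean_correct in [("less than high school", 12.88),
                            ("attended college", 13.95),
                            ("graduated college", 14.31)]:
    print(f"{group}: {mean_correct / 21:.0%} correct")
# Prints 61%, 66%, 68% -- all in "D"-grade territory.
```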
In another study,8 which measured comprehension of legal contracts comparable in complexity to an average consent form, subjects correctly answered 65% of the questions about the plain-language version versus 51% about the original legal contract. This 14% difference was statistically significant, but does 65% correct demonstrate "comprehension?" The authors suggest that because legal documents are complex, or don't fit with people's understanding of the law (or what they've seen on television), plain-language legal contracts may not lead to major changes in comprehension. The same might be said of consent forms.
A study asking subjects to paraphrase jury instructions4 found about 40% to 54% correct responses and concluded that conceptual difficulty accounted for much of the variation in comprehension scores. Sentence length had no effect on comprehension, but educational attainment did, with the best comprehension demonstrated by the subjects with the most years of education. A second study, using jury instructions rewritten to remove problematic constructions, found 43% correct answers with the modified instructions versus 32% with the originals. While this 11% difference is statistically significant, the question remains: does 43% correct really demonstrate understanding?
A vaccine information pamphlet study5 found that subjects better understood (by 15%) a university-designed pamphlet at a sixth-grade reading level compared to the CDC's tenth-grade version. Comprehension was measured with nine questions; subjects correctly answered 72% (six and a half of nine questions) on the university pamphlet and 56% (five of nine questions) on the CDC version. Is a statistically significant difference of 1.5 answers on nine questions meaningful?
Plus, the authors reported that the lengthy (18,117-word) CDC version took subjects about 14 minutes to read, which works out to a reading speed of 1,294 words per minute. The university pamphlet was 332 words long; subjects took an average of about 4-1/2 minutes to read it, an average reading speed of 74 words per minute. At that speed, it would have taken subjects about 4 hours to read the very long CDC pamphlet. Subjects could not have read the entire CDC form in 14 minutes, suggesting that poorer comprehension of the CDC version may simply reflect the fact that subjects didn't read the entire pamphlet.
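A quick check of that arithmetic, using only the figures reported above:

```python
# Reading-speed arithmetic from the vaccine pamphlet study (figures as reported).
cdc_words, cdc_minutes = 18117, 14
univ_words, univ_wpm = 332, 74

implied_cdc_wpm = cdc_words / cdc_minutes    # ~1,294 words per minute
univ_minutes = univ_words / univ_wpm         # ~4.5 minutes for the 332-word pamphlet
projected_hours = cdc_words / univ_wpm / 60  # ~4.1 hours to read the CDC version

print(f"Implied CDC reading speed: {implied_cdc_wpm:,.0f} wpm")
print(f"University pamphlet reading time: {univ_minutes:.1f} minutes")
print(f"CDC pamphlet at {univ_wpm} wpm: {projected_hours:.1f} hours")
```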
Table 3. Readability factors in informed consent
A study of information about an HIV vaccine trial6 found that on a 20-item multiple-choice comprehension test (with three alternatives), subjects correctly answered 83% (about 17) of the questions on a simplified version, but 70% (about 14) on the standard version. Subjects could have gotten six to seven answers (about 33%) correct by guessing alone. The standard version, developed by the National Institute of Allergy and Infectious Diseases, was 22 pages (4,059 words) long; the simplified version was 2,521 words. This study differed from the others in that the vaccine materials were read out loud to subjects, who could follow along if they wanted. Results may be confounded because some subjects may have only had the materials read to them while others heard them and read along; two inputs might be better than one.
Only one study10 suggested a minimum level of understanding, assuming that scores of 70% or above on the cognitive test demonstrated minimum comprehension. Using that 70% cutoff, virtually none of the plain-language consent forms or legal documents in the above studies demonstrated even minimal understanding; the simplified HIV vaccine materials6 were the lone exception.
The answer to that recurring question (does a given percentage correct demonstrate comprehension?) depends on how "comprehension" is measured. Most of the studies had subjects answer true-false or multiple-choice questions, a task that subjects with reading problems might find difficult; if they can't read and understand the questions, how well can they answer them? Comprehension might also differ depending on whether subjects are given a paper-and-pencil "test" or are asked the questions verbally, so that they can respond orally without having to read the test questions. Without comprehension criteria decided upon before a study is done, there is no benchmark for low, average, or above-average comprehension.
Consent form length. Some studies used short consent forms, perhaps only 300 words. But clinical trial consent forms typically range from about 1,000 to 3,000 words, averaging about 2,000 words,14 before new required HIPAA elements are added. With HIPAA, future consent forms may run from 2,500 to 4,000 words or longer (a recent consent form that came to our IRB was 5,260 words long). At such lengths, any gains from a lower reading level may be offset by information overload.
The U.S. Department of Health and Human Services15 has developed model language for researchers to use when explaining how they share protected health information. This document notes that "The authorization must be written in plain language." But the authorization itself is not in plain language (see sidebar).
Table 3 compares five basic readability factors for the HHS "Optional Elements," as well as 20 HIPAA notices (averaging 900 words) from consent forms submitted to North Memorial Health Care's IRB and from online HIPAA notice templates and examples.
HHS Guidance on Info Sharing
Since most sponsors want to be HIPAA compliant, they will probably use the HHS sample language, which means that subjects will be given even more written information they can't easily understand. In its audits, the FDA will review HIPAA privacy notices that are embedded in the consent form (both covered by a single subject signature); it will not review HIPAA privacy notices that are an addendum to the consent form (requiring two subject signatures).
Information overload. FDA requires eight basic elements and six "when appropriate" elements for informed consent, and HIPAA regulations may add five to 12 more. Thus, current clinical trial consent forms may contain up to 26 elements that must be explained, more topics than typical subjects can read, remember, and understand.
But readability research doesn't address all of the FDA-required elements of informed consent, often asking only 10 or 20 questions about the research project. Thus, there's no way to identify which FDA-required elements are understood completely, partially, or not at all. So much for the statement "I have read and understand this statement of informed consent...," usually found just before the subject's signature on the consent form; subjects may read consent forms, but they apparently don't understand them.
As part of a study on the desired features of multimedia consent forms, Jimison and her colleagues16 asked 29 patients with potential cognitive impairments (depression, schizophrenia, and breast cancer) to read a sample consent form and to suggest improvements to that form. These patients said that the consent form was too long, with too many complex words and confusing details. Table 4 summarizes their nine recommendations for an improved consent form. Jimison et al. found that eight IRB members and 15 researchers made similar recommendations.
Table 4. Suggested improvements to consent forms
Since only one of the nine recommendations, using lay language, is measured by readability formulas, it's fair to say that consent form "readability" is neither the sole problem nor the sole solution to improved understanding. While most current consent forms appear to have been merely typed rather than designed, future consent forms could better exploit the important (but unused) design features of the word processing programs used to write them.
Unless they're willing to do their own research, there is little that IRBs can do to actually improve subject understanding, since understanding consent forms may have less to do with reading grade levels and specific words than with the sheer amount of information that potential subjects are expected to read and understand. IRBs can continue to insist that consent forms be written at junior high reading levels, but there is no evidence that such rewritten forms do much to improve reader understanding. What is needed is more research on the consent process itself and on the strategies that actually improve subject understanding. For now, there is no proven method for writing a clinical trial consent form that significantly improves subject understanding; given the nature and amount of information that must be communicated to subjects, perhaps that's an impossible task.
1. M.K. Paasche-Orlow, H.A. Taylor, F.L. Brancati, "Readability Standards for Informed-Consent Forms as Compared with Actual Readability," New England Journal of Medicine, 348 (8) 721-726 (2003).
2. R. Flesch, The Art of Readable Writing (MacMillan, New York, 1949).
3. M. Hochhauser, "Flesch-Kincaid in Microsoft Word is Flawed," online letter (October 7, 2003), CA: A Cancer Journal for Clinicians, http://caonline.amcancersource.org/cgi/letters?lookup=by_date&days=365.
4. R.P. Charrow and V.R. Charrow, "Making Legal Language Understandable: A Psycholinguistic Study of Jury Instructions," Columbia Law Review, 79, 1306-1374 (1979).
5. T.C. Davis, J.A. Bocchini, D. Fredrickson et al., "Parent Comprehension of Polio Vaccine Information Pamphlets," Pediatrics, 97 (6) 804-810 (1996).
6. D.A. Murphy, Z.H. O'Keefe, A.H. Kaufman, "Improving Comprehension and Recall of Information for an HIV Vaccine Trial Among Women at Risk for HIV: Reading Level Simplification and Inclusion of Pictures to Illustrate Key Concepts," AIDS Education and Prevention, 11 (5) 388-399 (1999).
7. C.A. Coyne, R. Xu, P. Raich et al., "Randomized, Controlled Trial of an Easy-to-Read Informed Consent Statement for Clinical Trial Participation: A Study of the Eastern Cooperative Oncology Group," Journal of Clinical Oncology, 21 (5) 836-842 (2003).
8. M.E.J. Masson and M.A. Waldron, "Comprehension of Legal Contracts by Non-Experts: Effectiveness of Plain Language Redrafting," Applied Cognitive Psychology, 8, 67-85 (1994).
9. D.R. Young, D.T. Hooker, F.E. Freeberg, "Informed Consent Documents: Increasing Comprehension by Reducing Reading Level," IRB: A Review of Human Subjects Research, 12 (3) 1-5 (1990).
10. B.J. Cardinal, "(Un)Informed Consent in Exercise and Sport Science Research? A Comparison of Forms Written for Two Reading Levels," Research Quarterly for Exercise and Sport, 71 (3) 295-301 (2000).
11. T.C. Davis, R.F. Holcombe, H.J. Berkel et al.,"Informed Consent for Clinical Trials: A Comparative Study of Standard Versus Simplified Forms," Journal of the National Cancer Institute, 90 (9) 668-674 (1998).
12. H.A. Taub, M.T. Baker, J.F. Sturr, "Informed Consent Research: Effects of Readability, Patient Age, and Education," Journal of the American Geriatrics Society, 34, 601-606 (1986).
13. P. Peduzzi, P. Guarino, S.T. Donta, "Research on Informed Consent: Investigator-Developed Versus Focus Group-Developed Consent Documents, a VA Cooperative Study," Controlled Clinical Trials, 23, 184-197 (2002).
14. M. Hochhauser, "Guest Editorial: Information Overload is the Enemy of Informed Consent," ARENA Newsletter, XIII (4) 6 (2000).
15. HHS. Sample Authorization Language for Research Uses and Disclosures of Individually Identifiable Health Information By a Covered Health Care Provider (2003), http://www1.od.nih.gov/osp/ospp/hipaa/authorization/pdf.
16. H. Jimison, P.P. Sher, R. Appleyard, Y. LeVernois, "The Use of Multimedia in the Informed Consent Process," Journal of the American Medical Informatics Association, 5 (3) 245-256 (1998).