In Lewis Carroll’s classic Through the Looking-Glass, the following dialogue takes place between Alice and Humpty Dumpty: "When I use a word," Humpty Dumpty said, in rather a scornful tone, "it means just what I choose it to mean—neither more nor less." "The question is," said Alice, "whether you can make words mean so many different things." "The question is," said Humpty Dumpty, "which is to be master—that's all."
Humpty’s take on who controls the meaning of words still rings true today in many fields and especially in education, where the terms “scientifically based” and “evidence-based” are frequently used indiscriminately. In the most basic terms, scientifically based research means there is reliable evidence that the program or practice works (Whitehurst, 2001), while evidence-based is commonly defined as the combination of “professional wisdom” (that is, based on personal experience) and the best empirical evidence. Far too often, these terms are applied inappropriately and at the sole discretion of publishers, school districts, academics, advertisers, and a host of others who lack the knowledge necessary to discern among scientific evidence, personal preferences, and hype. Nowhere are the disastrous effects of this type of mislabeling and misunderstanding more profound than in the teaching of reading.
The long, torturous battle about reading instruction began in earnest with the publication of Ken Goodman’s Reading: A Psycholinguistic Guessing Game (1967) and Frank Smith’s Understanding Reading: A Psycholinguistic Analysis of Reading and Learning to Read (1971). Both were seminal in moving Whole Language philosophy from academic circles into classrooms. Misappropriating the mantle “scientifically based,” proponents of Whole Language theorized that learning to read occurs naturally in the same way that learning to speak develops naturally. Years of research studies and cognitive science research (Chall, 1967; National Reading Panel, 2000; Wolf, 2018; Shaywitz and Shaywitz, 2020) that preceded and followed Goodman and Smith’s publications have conclusively proven that, contrary to the assertions of Whole Language advocates, skilled readers rely more heavily on decoding skills (knowledge of letter-sound correspondences) than context clues when learning new words. The alphabetic principle must be explicitly taught, not simply “discovered” as prescribed by Whole Language devotees. Despite a preponderance of evidence discrediting it, in the 1980s Whole Language became a widely used instructional model for teaching reading in the United States and other English-speaking countries.
Whole Language continues to be refuted by research studies that clearly and unequivocally identify scientifically based instructional practices as the most effective method for teaching reading, and by the poor performance of students who have been subjected to Whole Language instruction (Rayner, Foorman, Perfetti, Pesetsky & Seidenberg, 2000; Moats, 2000; National Reading Panel, 2000; Moats, 2007; Goswami and Bryant, 2016; Gough, Ehri, & Treiman, 2017). In the most generous terms, the activities called for by the Whole Language approach can be used to make reading more fun and interesting, but they are not a substitute for reading instruction that systematically and explicitly teaches decoding skills. The response of the proponents of Whole Language to this barrage of criticism would have made Humpty Dumpty proud: Whole Language simply morphed into Balanced Literacy, and to this day Whole Language lives on in classrooms across the United States disguised as Balanced Literacy. This reincarnation was accomplished in large part by describing Balanced Literacy programs as “scientifically based” or “evidence-based.”
As Louisa Moats explains in Whole Language High Jinks: How to Tell When “Scientifically-Based” Reading Instruction Isn’t (2007), the term “scientifically based” has been hijacked by some reading programs that are not in fact based on scientific research. In the introduction to Moats’ guide, Chester Finn states, “…a recognized reading expert explains how educators, parents, and concerned citizens can spot ineffective reading programs that may hide under the ‘scientifically-based’ banner. Although the term ‘whole language’ is not commonly used today, programs based on its premises remain popular. These approaches may pay lip service to reading science, but they fail to incorporate the content and instructional methods proven to work best with students learning to read. Some districts openly shun research-based practices, while others fail to provide clear, consistent leadership for principals and teachers, who are left to reinvent reading instruction, school by school. The purpose of this guide is to help educators and parents spot programs that truly are research based—and those that are not.” Mark Seidenberg (2012) confirms this disregard for science, stating, “There is an enormous disconnect between science and educational practice. We occupy two different worlds. I believe this is an enormous waste. Many people on the education side dismiss this research as completely irrelevant to their mission. Teachers aren’t exposed to this research as part of their training.”
Not much has changed since Moats’ informative paper (2007) was published. In July 2020, Education Week published a report entitled “The Most Popular Reading Programs Aren't Backed by Science.” This report confirms that despite a solid understanding of what constitutes effective reading instruction, the difficulty of determining whether a reading program deserves the labels scientifically based or evidence-based persists. Education Week’s review (2020) of the top five most popular reading programs revealed that, contrary to the marketing tools that often accompany these programs, “… analysis of the materials found many instances in which these programs diverge from evidence-based practices for teaching reading or supporting struggling students.” This report also references Mark Seidenberg’s (2017) Language at the Speed of Sight: How We Read, Why So Many Can't, and What Can Be Done About It, in which he states, "[These reading programs] are put out by large publishers that aren't very forthcoming.”
Given the significant resources that publishers invest in marketing their reading programs, how can parents and educators determine whether a particular program or practice is scientifically based? Humpty Dumpty maintained, “When I use a word, it means just what I choose it to mean—neither more nor less." Should publishers be able to do the same? Authoritative sources answer this question with a resounding “no.”
The U.S. Department of Education (Smith, 2003) offers valuable guidance that clarifies the meanings of “scientifically based” and “evidence-based.” According to provisions contained in the No Child Left Behind legislation (2002), “To say that an instructional program or practice is grounded in scientifically based research means there is reliable evidence that the program or practice works.” Whitehurst (2001) clarifies the concept of “reliable evidence” by offering a hierarchy of the quality of evidence obtained by different research methodologies:
- Randomized controlled trials
- Quasi-experimental, including pre- and post- data
- Correlational studies with statistical controls
- Correlational studies without statistical controls
- Case studies
All these research methodologies can produce “reliable evidence.” However, randomized controlled trials produce the highest quality of evidence, and the quality decreases for each methodology lower on the list. Determining whether instructional programs or practices are grounded in these types of research and, therefore, warrant the label scientifically based is not a simple matter.
Grounded in years of scientifically based research on reading instruction, the report from the National Reading Panel (2000) unequivocally established that phonemic awareness, phonics, fluency, vocabulary, and comprehension are skills critical to early reading success. Reading programs that fully incorporate these five elements into their materials and methods are termed “scientifically based” reading programs (Moats, 2007). Balanced Literacy programs, despite claiming to be scientifically based, may contain these elements but do not fully incorporate them. In addition, these programs frequently deviate from evidence-based practice in the instructional methods used to deliver them (Education Week, 2020).
Further complicating the issue, the terms “scientifically based” and “evidence-based” are often used synonymously; in reality, there is a significant difference between the two terms. Whitehurst (2001) describes “evidence-based” as the combination of “professional wisdom” that is based on personal experience and the best empirical evidence. This definition leaves “evidence-based” open to interpretation. For example, using the term “craft knowledge” instead of “personal experience,” Murphy (Education Week, 2019) argues, “Scientific evidence is not the only source of knowledge nor is it the source of knowledge that always holds high ground in decision making.” He proposes that the craft knowledge of teachers should be given equal standing. Given that craft knowledge is acquired through individual experience and preparation, the validity of that assertion must be questioned, particularly when it is applied to reading instruction.
There remains a significant disconnect between the preparation teachers need to be successful and the preparation they receive in their pre-service and graduate education courses. Year after year, The National Council on Teacher Quality (2020) cites colleges and universities for their substandard preparation of teachers. The International Dyslexia Association (2018) bolsters this conclusion by citing research findings confirming “…that many teachers, even those with experience and credentials have limited knowledge about phonemic awareness and phonics and their importance for students at risk for reading problems.” Without that knowledge, teachers’ craft knowledge cannot be equated with teaching practices that have been scientifically validated. Considering this, it is understandable that while teachers make hundreds of instructional decisions every day, classroom teachers rarely make decisions about what curriculum to use. In an Education Week survey (2020), 65% of teachers said that their district selected their primary reading programs and materials, while 27% said that the decision was up to their school.
Ideally, educators in decision-making roles at the district and building levels have the ability to see through marketing hype and determine which reading programs and practices are actually scientifically based. In order for these programs to be effectively utilized, teachers and administrators must have sufficient knowledge of the “empirical evidence” that Whitehurst referenced. Only then can they apply their personal knowledge of the unique needs of their students to make truly evidence-based decisions about the reading programs and practices that they employ.
Like Humpty Dumpty, Whole Language and Balanced Literacy advocates have tried to make “scientifically based” and “evidence-based” mean what they choose them to mean, and others continue to use the same tactic. The principle of caveat emptor must be applied whenever educational programs and practices tout that they are scientifically or evidence-based; our students depend on it.