Imagine the following situation:
“Welcome to today’s parent-teacher conference. Please, sit down. John got an A in Reading, an A in Math, and a C in Dramatic Arts. He did well on the STAR9879-8K7 standard test series. You should be very happy; John is on grade level when we compare him against the other students in his class and country,” Mr. Green, the fourth-grade teacher, explained in a monotone voice. John’s parents both nodded in agreement, said “Thank you,” and walked out the door.
If you are a parent in North America and have ever attended your child’s parent-teacher night, you are probably very familiar with this story. Most Western schools base their report-card grades on results from a standardized testing system. These standardized tests assume that all students are the same and therefore use the same measuring tool to ascertain how much each student knows. This is similar to trying to measure how much water is in a cup using a meter stick: it may work sometimes, but a ruler was not made to measure volume. This raises the question: when and how did we start assuming that all students are the same and thus can, and should, be measured by the same ruler? Let us start with the definition of a standardized test, as it is the tool most schools presently use to assess their students, and from there look at the history of standardized education.
A standardized test is any form of test that (1) requires all test takers to answer the same questions, or a selection of questions from a common bank of questions, in the same way, and that (2) is scored in a “standard” or consistent manner, which makes it possible to compare the relative performance of individual students or groups of students (Concepts, 2013, para. 1).
Why do we need standardized testing? With the dawn of the Industrial Revolution, many children were no longer needed to work on farms, and more of them began to enroll in school. This started the shift from one-room schoolhouses with mixed-age classes to a factory model in which students were sorted by age group into bigger buildings. With this shift came the idea of standardizing the education each child received (Jacobs & Association for Supervision and Curriculum Development, 2010, p. 1). This standardization meant that schools needed systems that could produce quick, tangible results for assessing students’ knowledge through a factory method, so that teachers and school boards could report to parents and governing bodies. Hence the dawn of the standardized lesson, in which every student learns the same thing and is assessed in the same way (West, 2012).
Now imagine this alternative sequence of events.
“Welcome to tonight’s student-led conferences. Today your fourth-grade son John will sit down with you and talk about the Units of Inquiry, the skills, and the concepts we have covered this term,” Mr. Green explains with excitement. John’s parents turn and look at each other with blank stares of bewilderment; they have no idea what the teacher is talking about, as this is nothing like the school experiences they remember from their own childhoods.
With the onset of the information and technology age, the amount of information available in the world has grown at an astounding rate, and with it has come the need to change the way we educate our children. The concept of students as empty vessels to be filled with knowledge and then measured is gradually becoming obsolete. The factory-model classroom is rapidly disappearing (Culbertson & Jalongo, 1999). Many progressive, private elementary schools are no longer focusing on the delivery of a knowledge-based curriculum. They are instead trying to be places where the teacher acts as a facilitator and the programs become more interactive and student-driven, with teaching and learning viewed through the lenses of collaboration, concepts, and skills (Hancock, 1997).