Our strong view is that many current assessments are inadequate, ineffective and outdated methods of assessing student progress in reading. They are not evidence-based: they do not assess the skills that decades of research have shown are fundamental to reading acquisition. They are in fact symptomatic of an approach to literacy instruction and remediation that is failing Australian students. As you will see below, this view is shared by Australia’s leading reading researchers.

Unquestionably, continuous assessment of reading progress is crucial for all students who are learning to read. Yet assessment practice has remained static for decades, still resting on the scientifically unsupported “three cueing systems” model of word recognition. According to world-renowned reading researcher Dr. Louisa Moats:

“Contextual guessing strategies are supported by the cueing systems model of word recognition which has no basis in reading science. According to this theory, students are said to use grapho-phonic cues, semantic or meaning cues, and syntax or contextual cues to recognize words. In practice, the emphasis is on anything but the links between speech sounds and spelling. Unfortunately, balanced literacy students are learning strategies that poor readers rely on, not what good readers know” (p. 20).

Moats, L. (2007). Whole language high jinks: How to tell when “Scientifically-based reading instruction” isn’t. Available from https://www.ldaustralia.org/client/documents/Teaching%20Reading%20is%20Rocket%20Science%20-%20Moats.pdf

According to leading researchers, the ‘three cueing systems’ model of word recognition on which current assessments are based severely under-emphasises the evaluation and explicit teaching of decoding ability: the ability to read unfamiliar words by using letter-sound relationships to blend speech sounds together and teach oneself new words. Many assessments have been heavily criticised by reading scientists for embodying a ‘Whole Language’ approach to literacy instruction. As we are sure you are aware, Australia’s leading reading researchers argue that a ‘Whole Language’ approach to the teaching of reading is largely responsible for the 350,000 students in Australia who can only read at a minimal level, or lower, despite the billions of dollars that have been spent on trying to improve literacy outcomes (See http://www.fivefromfive.org.au/ and http://www.readingdoctor.com.au/s/Jennifer-Buckingham-Why-Jaydon-Cant-Read.pdf).

With first-line reading assessment tools ignoring the fundamental role of decoding ability (and its sub-skills), we are setting another generation of South Australian learners up for potential failure. We are promoting a system that continues to ignore the most important skills in learning to read, as well as the importance of training teachers in scientifically-based reading assessment, instruction and remediation.

Research has confirmed and reconfirmed the skills that are essential for reading development, yet many assessments do not measure them and therefore give teachers no specific information about crucial aspects of reading development such as phonological awareness, letter-sound knowledge, decoding, reading fluency and language comprehension. Instead, they assess a student’s ability to recognise words through contextual guessing. To compound the problem, these assessments are costly to administer. We cannot continue to ask taxpayers to fund expensive and ineffective forms of reading assessment.

We directly consulted with some of Australia’s leading researchers regarding their views.


Eminent researcher Dr Jennifer Buckingham, research fellow at The Centre for Independent Studies, wrote:

As an advocate of evidence based practice, I am not in favour of using running records for assessment or ‘PM’ reading levels as a measure of student progress in reading. Assessment for beginning and struggling readers should ideally provide objective and standardised measures of the fundamental skills for early reading acquisition identified in scientific reading research – phonemic awareness, decoding accuracy, reading fluency, vocabulary and comprehension. Running records, which are based on connected, familiar texts, do not isolate specific deficits in the fundamental reading skills. It is difficult to diagnose reading difficulties and provide appropriate intervention without this information. There are numerous tests that provide more valid, sensitive and reliable measures of reading ability.


Dr. Kerry Hempenstal, Associate Professor in Psychology, RMIT, Educational Psychologist and leading Australian reading scientist, stated the following in an audio recording he prepared for us:

Assessment should provide information to guide evidence based instruction. Unfortunately, Running Records can’t provide that because it is based on a faulty view of reading development. In a similar vein, coming out of the discredited whole language approach, PM readers are inconsistent with what we know about reading and its development, and how it is best taught.

Unfortunately, the PM readers provide what is often called ‘inconsiderate texts’, which prompts guessing and memorising, rather than the important decoding capacity. A better school based assessment alternative would be curriculum based measurement beginning at school entry, and focusing on the big 5 components of reading development, known to be especially important. They are, phonemic awareness, phonics, fluency, vocabulary and comprehension.


Professor Kevin Wheldall, Emeritus Professor and Director of MULTILIT Research Unit, Macquarie University wrote:

Running Records

  • Developed by Marie Clay of Reading Recovery fame and closely wedded to the model of reading instruction she espouses. So if you are not using that model, why use Running Records?
  • More of a ‘diagnostic’ test than a measure of progress, but again within Marie Clay’s model.
  • Dubious psychometric credibility (reliability and validity) and somewhat subjective. Interrater reliability might be suspect too.
  • Not highly regarded by non-Reading Recovery reading experts.

PM Readers

  • Not really appropriate for the early stages of learning to read if synthetic phonics is deployed. Decodable readers would be preferable. Too many letter sounds introduced too soon and not in best order.
  • Used as a measure of progress, so-called PM levels are dubious in my view. A measure needs to ensure that levels are ranked in order of difficulty and to have equal intervals between levels i.e. the ‘distance’ in terms of difficulty between level 3 and 4 should be the same distance as between level 4 and 5. In Reading Recovery levels, for example, some supposedly higher level books proved easier to read than lower level books and the intervals between levels were not the same! PM levels not empirically determined as far as I know.
  • Might be suitable for practice once kids have learned basics of phonological decoding.


These comments deliver an indisputable message: these assessments are not in the best interests of South Australian students. The evidence is overwhelming. According to Australia’s leading reading academics, there is an unacceptable gap between reading research and both these assessments and the Whole Language methodology on which they are based.

So what can replace current assessments? It is the scope and sequence of phonics and advanced word study skills, out of context, that truly lets you know what a child knows and does not know (Tolman, 2016). We need to use tools with the greatest potential for success in identifying learners with reading difficulties and which skills are in deficit. A comprehensive reading evaluation should not only provide information regarding a child’s skill levels with respect to others of the same age or grade; it should also tell us what children know and what they are ready to learn. A well-designed reading skill assessment battery should include measures of the crucial aspects of reading development: phonemic awareness, phonics (letter-sound knowledge, decoding and encoding), reading fluency, vocabulary skills and language comprehension.

In summary, many assessments are not based on evidence and are not in the best interests of our students. Australia is lagging behind other nations in adopting scientifically-based approaches to literacy instruction and remediation. For example, the ‘Primary National Strategy’ document from England (2006) states:

“… attention should be focused on decoding words rather than the use of unreliable strategies such as looking at the illustrations, rereading the sentence, saying the first sound or guessing what might ‘fit’. Although these strategies might result in intelligent guesses, none of them is sufficiently reliable and they can hinder the acquisition and application of phonic knowledge and skills, prolonging the word recognition process and lessening children’s overall understanding. Children who routinely adopt alternative cues for reading unknown words, instead of learning to decode them, later find themselves stranded when texts become more demanding and meanings less predictable.”


Many current assessments need to be replaced, and we feel that such a change presents an incredible opportunity for our state to provide leadership in implementing evidence-based reading instruction and intervention for our students. While the US and the UK have adopted the recommendations from their national inquiries into the teaching of literacy, none of our states have yet implemented the recommendations from our own, according to a keynote lecture delivered at the 2015 Speech Pathology Australia Conference by Professor Pamela Snow.

In a recent blog post, Professor Snow states:

It is disappointing that teacher educators are not coming out in force calling for a change of tack to providing more evidence-based content in teacher pre-service education. Non-evidence-based approaches such as the much-loved (by teachers and teacher educators) three cueing strategy rarely (never?) rate a mention in the commentaries around poor NAPLAN performance, and nor does a need for implementation of the recommendations of the 2005 National Inquiry into the Teaching of Literacy.


The first recommendation from the 2004 Australian National Inquiry into the Teaching of Literacy was:

  1. The Committee recommends that teachers be equipped with teaching strategies based on findings from rigorous, evidence-based research that are shown to be effective in enhancing the literacy development of all children.

Replacing ineffective assessments with rigorous, evidence-based tools for reading assessment is an important step towards evidence-based reading instruction and remediation. We have an opportunity to help our teachers enhance the literacy development of South Australian students by ending the waste of time and resources on an ineffective, expensive and outdated approach to reading assessment.



Calling for New Assessments in SA