1 Robust Vocabulary Instruction

As outlined in Chap. 1, the importance of vocabulary knowledge to the reading process cannot be overstated. Vocabulary knowledge during the preschool years is a strong predictor of future reading success (National Early Literacy Panel, 2008), and vocabulary knowledge during the school years has strong links with both word recognition and reading comprehension (Hiebert & Kamil, 2005). For example, a student’s ability to efficiently recognise words, particularly exception words such as ‘yacht’, is facilitated by their vocabulary knowledge (Dawson & Ricketts, 2017). Furthermore, students with better vocabulary skills perform better on tests of reading comprehension across the primary school years. Not surprisingly, the National Reading Panel (2000) stressed the importance of vocabulary instruction for reading success; building teacher capacity in explicit and systematic vocabulary instruction therefore contributes to a language-rich teaching and learning environment.

Research shows that the ability to acquire and use spoken vocabulary is key to improving and sustaining reading comprehension. Vocabulary size, that is, the number and variety of words that students know, is a significant predictor of reading comprehension in the middle and secondary years of schooling, and of broader academic and vocational success (Beck, McKeown, & Kucan, 2013; Clarke, Truelove, Hulme, & Snowling, 2014). Students who lack adequate vocabulary have difficulty getting meaning from what they read and, as a result, may read less because they find reading difficult. Weak word recognition skills (including phonemic awareness, phonics, and fluency) also contribute to the gap between how much good and poor readers read. As a result, poorer readers learn fewer words because they are not reading widely enough to encounter new vocabulary.

Given this reciprocal relationship between reading and vocabulary growth, and the difficulties faced by struggling readers, particularly in relation to exposure, explicit instruction in vocabulary is considered one important intervention approach. Explicit or robust vocabulary teaching provides explanations of word meanings across varied contexts, as well as multiple opportunities to explore and apply the words, which, in turn, can add substantially to the vocabulary growth of all students (Beck, McKeown, & Kucan, 2008). This teaching helps students grow as readers and thinkers in both fiction and non-fiction, develops a deeper understanding of words and concepts students are partially aware of, nurtures understanding of new concepts, increases reading comprehension, and enhances both oral and written communication skills (Allen, 1999). For this reason, Robust Vocabulary Instruction was conducted at the Tier 1 level of classroom support to facilitate vocabulary knowledge for all students.

1.1 Robust Vocabulary Instruction Overview

What it means to ‘know’ a word is not a simple notion. Word learning is incremental, that is, understanding a word is usually partial at first and grows with repeated exposures. Dale and O’Rourke (1986) conceptualised word learning as being along a continuum, ranging from never having seen or heard the word before, to having a deep knowledge of the word and its different meanings, and the ability to use the word confidently and accurately in speaking and writing contexts.

As outlined by Beck et al. (2013), research findings point to the need to create classrooms that support and encourage sophisticated word usage through a rich oral language environment characterised by:

  • Multiple encounters (modelling and practice) in a variety of contexts;

  • Rich and extensive opportunities to practise using new words that promote deep processing and more complex levels of understanding;

  • Ample structured reviews to revisit learned words within and across lessons;

  • Numerous opportunities to learn and reinforce vocabulary through wide independent reading;

  • Nurturing an appreciation for words and how they are used; and

  • Explicit teaching of word meanings using clear, consistent, and understandable language.

The current project adopted the vocabulary instruction model proposed by Beck et al. (2013), in which vocabulary is classified into three tiers according to a word’s frequency of use, complexity, and meaning. This classification of words into ‘tiers’ is based on the premise that not all words have equal importance when it comes to recommended instructional time.

Tier 1 words, or basic words, are words that usually develop without help, even if slowly. These words are seldom explicitly taught. Words such as ‘dog’, ‘red’, and ‘big’ would be classified as Tier 1 words, for example. Tier 2 words, or interesting words, are very important because of the role they play in literacy. Tier 2 words are the words that characterise written text—but are not so common in everyday conversation. What this means is that learners are less likely to be exposed to these words during everyday conversations. The opportunities to learn Tier 2 words come mainly from interaction with text. Because gaining meaning from written context is more difficult than gaining meaning from oral contexts, learners are less likely to learn Tier 2 words on their own in comparison with the words of everyday oral language. For example, words such as ‘fortunate’, ‘ordinary’, ‘wonderful’, and ‘plead’ would be classified as Tier 2 words. Tier 3 words are generally subject or domain-specific and as such do not have high use. These words are best learned if/when specific need arises. ‘Isotope’, ‘conduit’, and ‘beaker’ are examples of Tier 3 words.

In choosing words for instruction, McGregor and Duff (2015) suggested asking the following questions:

  • Is this word more likely to occur in written language than in spoken language?

  • Would this word occur across various subject areas?

  • Can the word be explained in student-friendly terms? A student-friendly explanation describes a word’s meaning in everyday, connected language.

Words selected for explicit instruction should be drawn from the curriculum, from texts or books read in class, or from assessment materials. The words targeted in the Reading Success project were Tier 2 words drawn from Curriculum into the Classroom (C2C) texts, selected jointly by the speech pathologist and the classroom teacher. These C2C materials included texts or books that would be read in class throughout the school term.

Intervention All school staff (across all year levels), including the leadership team, attended a 1-hour professional development (PD) session on Robust Vocabulary Instruction at the beginning of the school year. This PD covered the relationship between vocabulary and reading comprehension, Beck’s tiers of vocabulary, word selection, student-friendly definitions, and the steps in Robust Vocabulary Instruction. The Reading Success project then focused on Robust Vocabulary Instruction being implemented within the Year 5 cohort. As part of this project, two Year 5 classrooms were ‘control classes’, meaning the teachers received the whole-school training at the beginning of the school year but chose not to actively implement the programme in their classrooms. The speech pathologist and the class teacher implemented Robust Vocabulary Instruction within two ‘intervention classes’ each week.

The intervention was provided at the whole-class level and included adhering to the nine steps for introducing a new word (see Appendix for a lesson plan example). Each week, six words were introduced. In the first four weeks, the speech pathologist introduced 3–4 words and the teacher observed. The teacher then completed the remaining 2–3 introductions and the follow-up for all words throughout the week, with one follow-up session being demonstrated by the speech pathologist. Follow-up activities aimed at providing the students with opportunities to: (1) use the Robust word in multiple contexts, (2) create links with other words, (3) use the Robust word in a sentence to show its meaning, and (4) ensure repeated exposure to the word. For example, the teacher may (1) ask the student to think of a person or job that relates to the word; (2) ask the students to think of a word that has the opposite meaning; (3) provide an object or topic and ask the student to make a sentence related to the topic that contains the Robust word; and (4) ask the student to describe the Robust word so that other students can guess what it is. For more examples of activities, please refer to the two books authored by Beck and colleagues (Beck et al., 2008, 2013). From week 5 onwards, the teachers were responsible for introducing all vocabulary words and completing the follow-up activities for the words during the week. To support fidelity of implementation across classrooms, an observation checklist was developed and completed by the speech pathologist in each classroom during the implementation of the programme.

Assessments All students participated in pre- and post-testing of 12 Tier 2 words, in the first week (week 1) and final week (week 10) of each school term. The words included in the testing comprised both Tier 2 words that would be targeted within the Robust Vocabulary Instruction that term (as described above) and control words that would not be explicitly taught. The Tier 2 words included in the pre- and post-testing for Term 1 are shown in Table 5.1. Students were provided with a sheet of paper that included the target word, and space for writing their own definition of the target word and a sentence containing that word. Students were then given the following instructions:

Table 5.1 The list of 12 Tier 2 words that were used for pre- and post-assessment

Please write your name at the top of the sheet.

I will read each word and then I would like you to write what you think the word means in the first box. The meaning could be just one word that means the same as the word I read, or it could be a few words that explain the meaning. I would then like you to write a sentence that uses the word. (read example). This is not a spelling test and it doesn’t matter if you spell words incorrectly in your sentences. Just have a go. If I read the next word and you haven’t finished your sentence, you can come back to it later. You might not know some of these words, so I would just like you to do your best and have a go.

Student responses for each of the 12 words were scored from 0 to 2 according to the procedure outlined in Table 5.2. A total pre- and post-test score was then tallied for each student.

Table 5.2 Score guide for robust vocabulary pre- and post-assessments

1.2 Intervention Results

Pre- and post-assessment results were available for 70 students, 36 of whom attended the intervention classes and 34 the control classes. As explained above, each student was asked to provide the meaning of 12 words (six target words that were explicitly taught and six control words that were not explicitly taught) and to provide a sentence containing each word. Each response was scored on a scale of 0–2, which means the maximum score for this task was 24 (12 for the target words and 12 for the control words). Table 5.3 provides an overview of the results. To determine whether the differences between the two groups were clinically significant, i.e. observable in the classroom, effect sizes were calculated and reported using Cohen’s d (Cohen, 1988). Following Cohen’s guidelines, d = 0.2 is considered a ‘small’ effect size, 0.5 represents a ‘medium’ effect size, and 0.8 represents a ‘large’ effect size. As shown in Table 5.3, although both groups showed a significant improvement on the target words following the school term, the intervention classes showed a much larger improvement, with a very large effect size. Only the intervention classes showed a significant improvement on the control words, with a large effect size.
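For readers who wish to run a similar analysis, a minimal Python sketch of the Cohen’s d calculation is shown below; the gain scores are hypothetical placeholders and do not reproduce the project data.

import numpy as np

def cohens_d(group_a, group_b):
    # Cohen's d for two independent groups, using the pooled standard deviation.
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                        / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Hypothetical pre-to-post gain scores (out of 12) on the target words
intervention_gains = [5, 6, 4, 7, 5, 6]   # illustrative values only
control_gains = [2, 1, 3, 2, 1, 2]
print(f"Cohen's d = {cohens_d(intervention_gains, control_gains):.2f}")

The resulting value can then be interpreted against the 0.2, 0.5, and 0.8 benchmarks described above.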

Table 5.3 Group performance on target and control words prior to and following the intervention

1.3 Discussion

The results from this small-scale investigation showed that a Robust Vocabulary Approach, using an integrated service delivery model in which speech pathologists and teachers work collaboratively to support vocabulary instruction, was effective in enhancing student performance on a vocabulary task in which students were asked to demonstrate their understanding of Tier 2 words. Not only did the students in the intervention classrooms show better performance than their peers in the control classes on the target words post-intervention, they also demonstrated better performance on the control words. These results suggest that robust vocabulary instruction may indeed ‘kindle a lifelong fascination with words’ (Beck et al., 2013).

2 Orthographic Knowledge and Phonological Processing Instruction

Based on the students’ performance on the YARC reading accuracy subtest (Snowling et al., 2012), as outlined in Chap. 2 (step 3), combined with their performance on tasks tapping the skills needed for efficient word recognition, 12 students from the Year 4 cohort were invited to participate in an intervention programme aimed at enhancing orthographic knowledge as well as phonological processing skills. These students all demonstrated specific word recognition difficulties, as shown by a YARC reading accuracy standard score < 85, accompanied by significant word reading difficulties on the CC-2 (Castles, Coltheart, Larsen, Jones, Saunders, & McArthur, 2009; motif.org.au) and low scores on the letter-sound knowledge test (LeST; Larsen, Kohnen, Nickels, & McArthur, 2015; motif.org.au). All students performed within normal limits in reading comprehension (SS ≥ 85 on the YARC) and/or language comprehension (SS ≥ 7 on the CELF-4, Understanding Spoken Paragraphs). These 12 students were then randomly allocated to an intervention or a control group, so that the intervention group received the intervention first, while the control group continued with their usual classroom instruction. As shown in Table 5.4, there were no significant group differences on any of the pre-intervention measures (all p’s > 0.301). At this stage, we administered an additional measure of phonological processing, the Lindamood Auditory Conceptualisation Test (LAC; Lindamood & Lindamood, 2004), which measures the student’s ability to discriminate speech sounds (phonemes) and to analyse the number and order of sounds in spoken patterns. Students are asked to demonstrate their knowledge by matching the number and colour of small blocks to the number and patterns of sounds (e.g. show me /d/ /d/: two blocks, same colour; show me /m/ /ch/ /ch/: three blocks, first block a different colour to the next two blocks). According to the LAC manual, the minimum recommended score at the start of Year 5 is 86. All students scored below this minimum, and there were no differences in performance between the intervention and the control group.

Table 5.4 Student performance prior to the intervention

2.1 Intervention Overview

Students completed a six-week programme, comprising two sessions per week: one 30-minute individual session and one 60-minute group session (three students). Each session covered two components: (1) phonological processing and (2) orthographic knowledge. Although all students completed similar activities (as described below), the specific phonemes or phoneme combinations that were targeted in both components of the intervention were based on the students’ performance on the LeST. The two components of the intervention will now be discussed in more detail below.

Phonological processing This programme explicitly targets students’ phonological processing skills and was firmly based on Gillon and Dodd’s (1995) work, in which ten students with significant reading difficulties participated. The programme itself is based on the Lindamood Phoneme Sequencing Programme, now in its fourth edition (LiPS; Lindamood & Lindamood, 2011), which systematically teaches students to segment, manipulate, and blend the speech sounds (phonemes) in syllables and words. We used Gillon and Dodd’s adapted version of this programme, using traditional letter names to teach the students to encode sounds in syllables (using simple syllable sets, simple syllable chains, and complex syllables). We used the metalinguistic approach recommended in the LiPS programme, which drew the students’ attention to changes in syllables by explicitly describing these during the activities. The sessions involved reading and writing (of real and nonsense words) to ensure a transfer of segmentation and blending skills to reading and writing. Finally, as per the process outlined in the LiPS manual, the sessions included the teaching of some basic spelling rules (magic /e/; the /c/ in reading; two vowels go walking). Figure 5.1 provides an overview of the six-week programme.

Fig. 5.1 Overview of the six-week programme targeting phonological processing skills

Progress tracking sheets were used to monitor progress and to ensure students only moved on to the next level (simple syllables, CVC, varied, shifts) when they achieved 70–80% success at a given level. Once a student was proficient with the coloured blocks, i.e. could quickly and accurately work through all levels using the coloured blocks and confidently describe the changes that were made, the student worked with letter tiles only. Use of the progress tracking sheets allowed for an individualised approach.
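As a minimal illustration of how such a mastery criterion might be operationalised, the hypothetical Python helper below flags when a student’s success rate at the current level reaches a cut-off (80% is assumed here; the programme used a 70–80% range).

def mastery_reached(correct, attempted, threshold=0.8):
    # True when the success rate at the current level meets the mastery criterion.
    return attempted > 0 and correct / attempted >= threshold

# Example: 9 correct out of 12 items is 75%, below the assumed 80% cut-off,
# so the student stays at the current level.
print(mastery_reached(9, 12))   # False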

Orthographic knowledge Students’ orthographic knowledge was targeted using the commercially available Reading Doctor App Letter Sounds 2 Pro (www.readingdoctor.com). The programme includes 70 of the most common letter-sound patterns and suffixes but was customised for each student based on their LeST results. The students were given access to the App twice a week for approximately 10–15 min, under the supervision of the speech pathologist. As outlined on the Reading Doctor website:

Children are taught meaningful associations between the way that letters look and the speech sounds they typically represent through a unique system of visual, auditory and articulatory (speech sound) memory aids, or mnemonics. The teaching system in the app automatically identifies what a child knows, what the child does not know, and which letter sound patterns the child confuses. Letter Sounds™ 2 Pro teaches children to discriminate between confusing patterns, and strengthens weaknesses in letter-sound understanding.

2.2 Intervention Results

All students were re-assessed after the six weeks of intervention on measures of orthographic knowledge (LeST), single word reading (CC-2), and reading accuracy (YARC RA). Repeated measures t-tests were used to calculate changes in performance from pre- to post-intervention. As shown in Table 5.5, the students in the intervention group demonstrated significantly greater gains (p < 0.05) on measures of letter-sound knowledge and single word reading (regular and non-words). No significant group differences were found on measures of irregular word reading or reading accuracy.
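A minimal Python sketch of this type of repeated measures comparison is shown below, using SciPy’s paired t-test; the pre- and post-intervention LeST scores are hypothetical placeholders rather than project data.

from scipy import stats

# Hypothetical LeST scores (letter-sound items correct) for six students,
# before and after the six-week programme (illustrative values only).
pre_scores = [28, 31, 25, 30, 27, 29]
post_scores = [36, 38, 33, 37, 34, 35]

# Repeated measures (paired) t-test on the pre- vs post-intervention scores.
t_stat, p_value = stats.ttest_rel(pre_scores, post_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")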

Table 5.5 Mean group performance before and after intervention (SD in brackets)

To determine whether the differences between the two groups were clinically significant, i.e. observable in the classroom, effect sizes were calculated and reported using Cohen’s d (Cohen, 1988). Following Cohen’s guidelines, d = 0.2 is considered a ‘small’ effect size, 0.5 represents a ‘medium’ effect size, and 0.8 represents a ‘large’ effect size. As shown in Fig. 5.2, although both groups demonstrated progress over time, the intervention group made significantly more progress than the control group on measures of orthographic knowledge, regular word reading, and non-word reading.

Fig. 5.2 Effect size comparison (Cohen’s d) of the progress made in the intervention versus control group following six weeks of intervention

2.3 Discussion

The six-week intervention programme was effective in boosting students’ performance in important print-related skills that underlie successful word recognition, i.e. orthographic knowledge and decoding of regular and nonsense words. Unfortunately, no generalisation was observed to the students’ reading accuracy performance on the YARC. The most likely explanation is that the students need more time to apply their skills to sentence-level reading, which was not addressed in the intervention itself (i.e. the focus was on single words). Moreover, although most of the students made significant progress in single word reading (improving by more than 1 z-score), closer inspection showed that many of them still scored significantly below expectations (i.e. z-score < −1).

These results raise important questions regarding the timing of the intervention (Year 5). Research suggests that intervention targeting word recognition difficulties is more effective during the earlier school grades than during the later years (see Wanzek & Vaughn, 2007). Early detection of reading difficulties is possible as long as sensitive and specific assessment tasks are used (see Chap. 3). By using an RtI approach, students showing early signs of dyslexia (i.e. those who fail to make satisfactory progress in word recognition despite high-quality classroom instruction) can then receive timely intervention. Another important issue to consider is the duration of the intervention. Although little is known about the exact dosage needed to effect more significant changes in reading accuracy, research suggests that more extensive intervention is needed than the six weeks we provided as part of the Reading Success project (see Al Otaiba, Gillespie Rouse, & Baker, 2018). Finally, it is not clear what the active ingredients were, as the intervention contained a combination of phonological processing and orthographic knowledge tasks, using both examiner- and computer-assisted instructional methods.

3 Expository Text Structure Intervention

Based on the students’ performance on the YARC reading comprehension subtest (Snowling et al., 2012), followed by investigation of their reading accuracy performance on the YARC, and their performance on the Understanding Spoken Paragraphs subtest of the CELF-4 (Semel, Wiig, & Secord, 2006), as outlined in Chap. 2 (steps 1 and 2), eight students from the Year 4 cohort were invited to participate in an intervention programme aimed at enhancing their expository text structure knowledge. Six of these students demonstrated specific comprehension difficulties, as shown by the discrepancy between performance on the YARC reading comprehension and reading accuracy subtests (standard scores), accompanied by poor performance (i.e. standard score < 7) on the Understanding Spoken Paragraphs subtest of the CELF-4. The reading results for these students are shown in Table 5.6.

Table 5.6 Student performance prior to (end of Year 4) and following the intervention (mid Year 5)

3.1 Intervention Overview

Students completed a six-week programme focusing on oral language in an expository context. The programme was adapted from Clarke, Snowling, Truelove, and Hulme (2010), who used a randomised control study design to investigate the effectiveness of three interventions aimed at improving the reading comprehension performance of 8- to 9-year-old students with specific reading comprehension deficits: text comprehension training, oral language training, and both trainings combined. All students received 30 h of intervention over 20 weeks (three 30-min sessions per week; two in pairs, one individually) implemented by a trained research assistant. Results from Clarke et al.’s (2010) study showed that the oral language groups made the greatest gains in reading comprehension following intervention. We adapted the intervention in the following ways:

  • Duration and dosage. The intervention lasted six weeks (two 60-minute group sessions per week), for a total of 12 h of intervention per student. All sessions were in groups of four.

  • Agent. The intervention was delivered by the speech pathologist.

  • Focus. Our focus was on expository text, as opposed to narrative.

Similar to the Clarke et al. (2010) study, all sessions contained a range of evidence-based techniques, including “comprehension monitoring, cooperative learning, graphic organizers for story structure training, question answering and generating, summarisation, and multiple-strategy teaching” (p. 1108). Each session had the same structure and contained four components: vocabulary, graphic organiser, reciprocal teaching, and spoken expository, with figurative language included when applicable (see Table 5.7). All sessions adhered to the following principles: (a) rich interaction and high-quality contextualised discussion; (b) integrated opportunities for relating material to personal experiences; and (c) exploration of vocabulary and spoken expository through varied games and activities, as well as worksheets. For further details see also Clarke et al. (2014).

Table 5.7 Session overview of expository language intervention programme (adapted from Clarke et al., 2010)

The following five types of expository discourse were targeted during the intervention: (1) description, (2) procedure/sequence, (3) comparison, (4) cause and effect, and (5) problem and solution. Each type of discourse had an accompanying graphic organiser, which was printed on A3 paper and used during the session. Graphic organisers were sourced online, for example, by performing a Google search or by visiting the www.readwritethink.org website. All five types of expository discourse were briefly introduced during the first session, and students were informed they would focus on a different type each week. The topic or content of the expository passages was matched to the topics that were covered in the classroom during those six weeks. For example, Unit 3.1 focused on ‘The Riddle of the Black Panther: The Search’ (Education Services Australia). As a consequence, the first expository session focused on the Black Panther (What’s a Black Panther, Really? National Geographic, 2015). Other sessions included comparing rugby union to soccer, why native goannas are dying (invaders), and flying foxes.

In week 6, the content of the previous five weeks was revised by introducing a topic (in this case ‘flying foxes’) and asking students which types of expository discourse they could produce for the same topic, using the graphic organisers as prompts.

3.2 Intervention Results

All students were re-assessed after the intervention on Form A of the YARC to investigate their reading comprehension performance (standard score and age equivalence). Repeated measures t-tests were used to calculate changes in performance from pre- (i.e. at the end of Year 4) to post-intervention. As shown in Table 5.6, the students in the intervention group demonstrated larger gains in reading comprehension than the control group, who participated in the regular classroom activities, as shown by the effect size (Hedges’ g, whereby a small effect [cannot be discerned by the naked eye] = 0.2, a medium effect = 0.5, and a large effect [can be seen by the naked eye] = 0.8). Although the progress in reading comprehension made by the intervention group was not statistically significant (p > 0.05), the intervention group’s performance showed a large effect size for both their standard score and their age equivalence (i.e. > 1 standard deviation change) compared to the control group, who showed < 0.4 standard deviation change. These results indicate the experiment was underpowered (very small sample size), but also suggest the students in the intervention group made noticeable gains in reading comprehension following the intervention.
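The bias-corrected effect size reported here can be computed as in the Python sketch below, in which the gain scores are hypothetical placeholders; the small-sample correction factor is what distinguishes Hedges’ g from Cohen’s d.

import numpy as np

def hedges_g(group_a, group_b):
    # Hedges' g: Cohen's d with a small-sample bias correction applied.
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    n1, n2 = len(a), len(b)
    pooled_sd = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1))
                        / (n1 + n2 - 2))
    d = (a.mean() - b.mean()) / pooled_sd
    correction = 1 - 3 / (4 * (n1 + n2) - 9)   # approximate bias correction
    return d * correction

# Hypothetical standard-score gains in reading comprehension (illustrative only)
intervention_gains = [12, 9, 14, 10]
control_gains = [3, 5, 2, 4]
print(f"Hedges' g = {hedges_g(intervention_gains, control_gains):.2f}")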

3.3 Discussion

The results from this pilot study showed the potential effectiveness of a short intensive intervention aimed at enhancing students’ expository structure knowledge. Although replication is needed with larger numbers, our results are in line with those from previous studies (e.g. Clarke et al., 2010).

4 Supplementary Whole-Class Oral Language and Emergent Literacy Intervention

In response to the high literacy needs of the middle-years students, the school implemented a supplementary whole-class oral language and emergent literacy intervention in the foundation year of schooling. This programme was based on Read It Again-PreK (Justice & McGinty, 2010), which is freely available online (see reference list), but was adapted, with permission, for the local context. The adapted version is called Read It Again—FoundationQ! (Department of Education, Training and Employment, 2013) and can be downloaded for free from the same website (see reference list).

4.1 Intervention Overview

Read It Again—FoundationQ! is a scientifically based oral language programme designed to develop and strengthen students’ early foundations in four key areas of language and literacy—vocabulary, narrative, phonological awareness, and print knowledge:

Read It Again - FoundationQ! is designed to systematically build students’ language and literacy abilities in four areas. The scope of instruction encompasses:

  • Vocabulary - receptive and expressive repertoire of words

  • Narrative - ability to understand and produce extended discourse that describes real or fictional events occurring in the past, the present, or the future

  • Phonological awareness - sensitivity to the phonological - or sound - structure of language

  • Print knowledge - interest in print, knowledge of the names and distinctive features of various print units (e.g. alphabet letters, words), and the way in which different prints may be combined in written language. (Department of Education, 2013, p. 3)

Read It Again is firmly based on current research regarding how adults can support children’s language and literacy development using systematic and explicit instruction presented in highly meaningful literacy events such as storybook reading. A key feature of Read It Again is the repeated use of children’s storybooks as a way to enhance language and literacy development. Studies indicate that repeated book reading influences both story-related vocabulary and story-related comprehension and that the average effect size for the relationship between repeated book reading and outcomes is larger when a book is read four or more times (Trivette, Simkus, Dunst, & Hamby, 2012; Zucker, Cabell, Justice, Pentimonti, & Kaderavek, 2013).

FoundationQ! is aligned to the Australian Curriculum for the foundation (first) year of schooling and can be delivered as either a Tier 1 (differentiated) or Tier 2 (focused) teaching strategy within a response-to-intervention model. In this project, FoundationQ! was delivered as a Tier 1 strategy across all Prep (foundation year of schooling) classrooms. As explained in Chap. 1, differentiating instruction is a critical feature of a response-to-intervention model and involves actively planning for student differences to ensure that every student is engaged, participating, and learning successfully (Goddard, Goddard, & Tschannen-Moran, 2007; Tomlinson, 2000).

Read It Again—FoundationQ! incorporates six differentiation strategies (three Too Easy strategies and three Too Hard strategies) that educators can use to scaffold students’ performance on similar tasks or activities. To enhance teachers’ use of the differentiated instructional strategies, a capacity-building model that included coaching by the speech pathologist was implemented. The provision of coaching by speech pathologists is a natural extension of their consultative and collaborative services within a response-to-intervention framework and has been found to be effective when combined with in-service workshops (Milburn, Weitzman, Greenberg, & Girolametto, 2014). The professional development programme combined a 1.5-hour workshop, which explained the intent, content, and structure of the programme and provided opportunities to identify and apply the Too Easy and Too Hard differentiation strategies, with individual coaching sessions incorporating demonstration lessons, scheduled observations, and instructional feedback.

To increase teachers’ awareness of individual student needs and to assist teachers in differentiating instruction, Read It Again—FoundationQ! includes a student progress checklist (see the website for a copy), which measures individual students’ progress against the learning objectives in each of the four domains specific to FoundationQ!. The checklists are administered at three separate points (after week 2, week 12, and week 21) during the 30-week intervention period. Development of skills is rated by teachers as:

  • Acquiring: student never or occasionally demonstrates the skill

  • Building: student often demonstrates the skill, but is not yet consistent and/or requires assistance, or

  • Competent: student consistently demonstrates the skill

4.2 Intervention Results

To obtain preliminary data regarding the effectiveness of this Tier 1 intervention initiative in reducing the overall number of students requiring additional support in oral and written language, we compared the performance on the YARC of the Year 4 cohort (n = 78), who had not participated in the Read It Again—FoundationQ! programme (see Chap. 3 for specific cohort results), with that of the Year 2 cohort (n = 69; see Chap. 3 for specific cohort results when these students were in Year 1). On paper, these cohorts were similar, with the Year 4 cohort comprising 6% Indigenous students and 31% students with English as a second language (school ICSEA score 1005), and the Year 2 cohort comprising 6% Indigenous students and 35% ESL students (ICSEA, 2013).

As shown in Fig. 5.3, a significantly higher percentage of students in Year 2 performed within normal limits (i.e. SS ≥ 85) on the YARC reading comprehension (91.5% vs. 57.1%). Similar results were seen for reading accuracy and reading rate.
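A simple way to test such a difference in proportions is a chi-square test on the 2 x 2 table of counts, as in the Python sketch below; the counts are approximations reconstructed from the reported percentages and cohort sizes, so they are illustrative rather than exact.

import numpy as np
from scipy.stats import chi2_contingency

# Approximate counts of students within/below normal limits on YARC reading
# comprehension, reconstructed from the reported percentages (illustrative).
year2_within, year2_n = 63, 69   # ~91.5% of the Year 2 cohort
year4_within, year4_n = 45, 78   # ~57.1% of the Year 4 cohort

table = np.array([[year2_within, year2_n - year2_within],
                  [year4_within, year4_n - year4_within]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")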

Fig. 5.3 Cohort (Year 2 vs. Year 4) performance on the YARC pre- and post-implementation of Read It Again (RIA): percentage of students performing within normal limits in reading comprehension, reading accuracy, and reading rate

4.3 Discussion

Implementation of Read It Again—FoundationQ! appears to have been successful in lifting the literacy success rates of students attending the school. These findings provide preliminary evidence of the effectiveness of implementing, in Australian classrooms, this programme that was developed in the US (Justice & McGinty, 2010) but adapted for the Australian context (see also Lennox, Westerveld, & Trembath, 2018). Further research is now needed to determine the effectiveness of these types of interventions for children with lower oral language ability (see Gillon et al., 2019).

5 Chapter Summary

This chapter presented the results from four evidence-based intervention initiatives. In the reading to learn cohort, Robust Vocabulary Instruction was provided at the whole-class level, with results showing larger gains in vocabulary knowledge in the intervention classes compared to the control classes. The other two intervention initiatives for this cohort involved students who had been identified with specific areas of weakness in spoken and/or written language skills impacting their reading comprehension performance. Expository structure intervention was provided to a group of students with specific comprehension difficulties, whereas a group of students with specific word recognition difficulties participated in an orthographic knowledge and phonological processing intervention programme. Results from these interventions were positive but modest, highlighting the importance of early identification of reading difficulties to enable more timely intervention. In the learning to read cohort, Read It Again—FoundationQ! was implemented at the whole-school level in all Prep classes. Although our research design did not allow for firm conclusions regarding the effectiveness of this initiative, cohort mapping results showed a significant improvement in reading performance between the cohort assessed prior to and the cohort assessed following the implementation of Read It Again—FoundationQ! within the school. Implementation of the five-step assessment framework introduced in this book will now be needed to identify those students who need additional support, as our previous research has shown that implementation of this type of whole-class supplementary oral language and emergent literacy intervention alone may not be sufficient for long-term reading success (Lathouras, Westerveld, & Trembath, 2019).