
Summer 2006

Innovation in education

Innovation in the classroom

What constitutes ‘innovation’? How do we know it when we see it and how would we capitalise on it? Patrick Griffin looks at innovation in the classroom.

‘Innovation is a process of making improvements by introducing a new idea, method or device and its successful exploitation such that a change is brought about. The change in turn creates a new dimension of performance for persons or an organisation.’ This definition has been compiled from a number of sources.

So, innovation has at least two parts: it must be new and it must change something. The term ‘new’ can be interpreted as either recent or novel. If we use only ‘recent’, then a recently published textbook could be an innovation. If we use ‘novel’, then it should have no precedent in practice. Whether or not an innovation is ‘novel’ might depend on who is using it. The innovation might lead to radical or small changes in outcomes, processes or structures, but it usually emerges as the solution to a problem.

In the school context, innovation may be linked to performance and growth through improvements in efficiency, productivity, quality, competitive positioning, market share, etc.

There are numerous examples of innovations, and in education we are constantly exposed to new ideas, products and procedures that aim to improve learning. In Victoria in the 1970s the Federal Government offered teachers an ‘innovation grant’ if they could offer a new idea in teaching. Teachers applied for grants of a few hundred or a few thousand dollars, and the purpose was to encourage lateral thinking in teaching and learning. But what was the problem that this set out to solve? Education had been centrally controlled, syllabus and examination driven. For decades, teaching focused on the Merit, Intermediate, Leaving and Matriculation certificates and was directed at getting students to pass these exams. The innovations commission and the grants were established to loosen these controls on teaching and learning.

In Hong Kong during the late 1990s, a Quality Education Fund of more than 5 billion Hong Kong dollars was established. Its purpose was similar: teachers and educational organisations were encouraged to think ‘outside the square’ and introduce something new into the system, to loosen the constraints of the examination system and the tight control it exerted on teaching and learning.

There is a consistent theme: an innovation is usually the solution to a problem. A problem creates the need for a solution, a change or a new way of thinking. Innovation is typically understood as the introduction of something new and useful, but it is distinct from problem solving.

Innovation is not an invention or the creation of new tools or the novel compilation of existing tools. Is it the same as improvement? The literature blurs the concept of innovation with many other terms. Change and creativity are also words that may often be substituted for innovation. What is it about an innovation that makes it different from all these other things? In business, for example, an innovation is not an innovation until someone successfully implements it. In education, should an innovation be treated the same way? That is, a new approach to teaching should not be considered an innovation until students actually learn better because of its implementation.

A problem unresolved?

Teachers are confronted with a wide range of reading tests, and deciding which one to use, or how to use the results, is often confusing. The Catholic Education Office, Melbourne, established a project in 2004 to identify a series of standardised tests that teachers could use to map student development onto the Victorian Essential Learning Standards (VELS) and to monitor student progress as starting points for learning.

What was the solution?

The tests were the Tests of Reading Comprehension (TORCH), the Achievement Improvement Monitor (AIM), the Developmental Assessment Resource for Teachers (DART) and the Prose Reading Observation, Behaviour and Evaluation of Comprehension (PROBE). They were evaluated to see whether they provided similar information and could be used interchangeably to report student progress. In addition, the teachers learned how to use the tests to target instruction for all students and to identify the level of intervention needed, even for the best-performing students.

More than 70 teachers administered the tests to 1642 students. At both the beginning and end of an 8-month period, teachers were asked to administer two reading assessments: a Year 3 AIM reading test and either a TORCH, PROBE or DART test.

It was hard work for the teachers. They had to mark the tests and record, for every student, whether the answer to each question was correct. In addition, every question in the four tests was examined to identify the cognitive skill required to get the right answer. PROBE questions were already sorted into six classifications—literal, reorganisation, inference, vocabulary, evaluative and reaction—rather than classified by the skill required for each individual question. In some cases it was difficult to distinguish between the classifications and to identify or confirm the skill required. Following this analysis, a common Developmental Progression of Reading Comprehension Skills was produced. The progression had eight levels, and students were placed at one of the levels according to their test scores.
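The idea of placing students at one of eight levels according to their scores can be sketched as a set of cut-off points on a common scale. The sketch below is illustrative only; the cut-off values are invented, not those used in the project.

```python
# Illustrative sketch: a developmental progression as score thresholds.
# The seven cut-off values (separating eight levels) are hypothetical.

def place_on_progression(scale_score, cutoffs=(-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0)):
    """Return a level from 1 to 8 for a score on a common scale.

    Each boundary the score meets or exceeds moves the student up one level.
    """
    level = 1
    for boundary in cutoffs:
        if scale_score >= boundary:
            level += 1
    return level
```

A student scoring below the lowest cut-off sits at level 1; one above the highest sits at level 8. In practice the boundaries would be set from the analysis of all four tests, not chosen by hand.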

What was new?

The teachers involved in the project realised that it was not enough to give just a score or a level on a developmental scale. While PROBE provided some advice and is commended for this, the scores from the DART, TORCH and AIM reading tests were more stable for progress reporting. Combining four commercial tests onto a single scale gave the schools considerable freedom in monitoring, and using the scale to target instruction was both a ‘novel’ and a ‘recent’ use of such scales.

What was the research base?

The problem of what to use in monitoring reading development with standardised tests was addressed using a technical procedure called ‘test equating’, based on item response modelling. This mapped the tests onto a single reading scale and enabled the identification of levels of reading development. The levels allowed the teachers to target instruction at the point where students were ‘ready to learn’. The teachers had, in effect, identified Vygotsky’s zone of proximal development. New classroom management procedures were needed, and these were shared at project discussion meetings among the school professional learning team leaders. The team leaders shared all the solution strategies with their team members back at school. They then documented the strategies and resources used, and these were tabulated and recorded at the Catholic Education Office, Melbourne. The move from that point to policy is a short step. The five steps are shown in the figure below.
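The core of test equating can be sketched in a few lines. Under the Rasch model (one form of item response modelling), the chance of a correct answer depends on the gap between a student's ability and an item's difficulty, and items common to two test forms let us place both forms on one scale. The numbers and item difficulties below are invented for illustration; real projects estimate them from student responses with specialised software.

```python
# A minimal sketch of the idea behind 'test equating' with the Rasch model.
# All values are hypothetical, chosen only to show the mechanics.

import math

def rasch_probability(ability, difficulty):
    """Probability of a correct answer under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def mean_shift(anchor_form_a, anchor_form_b):
    """Mean-mean linking: the constant that places form B's item
    difficulties on form A's scale, using items common to both forms."""
    mean_a = sum(anchor_form_a) / len(anchor_form_a)
    mean_b = sum(anchor_form_b) / len(anchor_form_b)
    return mean_a - mean_b

# Hypothetical difficulties of three anchor items, as estimated on each form.
form_a = [-0.5, 0.2, 1.1]
form_b = [-0.9, -0.2, 0.7]

shift = mean_shift(form_a, form_b)
# Adding the shift expresses any form-B difficulty on form A's scale,
# so scores from both forms can be reported on one common scale.
equated = [d + shift for d in form_b]
```

Once every test's items sit on the one scale, a student's score on any of the tests can be read as a position on that scale, which is what makes the common developmental progression possible.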

Figure 1: Linking testing to targeted teaching resources and policy

Did it work?

The criteria used to decide whether this was successful were as follows:

  1. Did the teachers learn how to monitor reading development?
  2. Did the teachers develop skills in targeted intervention?
  3. Did this intervention improve reading skills?
  4. Were the skills improved for all students regardless of their level of development initially?
  5. Was the innovation possible to scale up for other teachers and schools?
  6. Did professional learning teams work cohesively and share their successful strategies?

The answer to each of these has been a resounding ‘yes’.


Forster, M, Mendelovits, J & Masters, G (1994). Developmental Assessment Resource for Teachers, Australian Council for Educational Research, Camberwell, Victoria.

Mossensen, J, Hill, P & Masters, G (2003). Tests of Reading Comprehension (TORCH), Australian Council for Educational Research, Camberwell, Victoria.

Parkin, C & Parkin, C (1999). PROBE Reading Assessment 1—Understanding Purpose, Use and Interpretation, Australian Council for Educational Research, Camberwell, Victoria.

Victorian Curriculum and Assessment Authority (2004). AIM test for Year 3, VCAA, Melbourne.

Vygotsky, LS (1978). Mind in Society: The development of higher psychological processes, edited by M Cole, V John-Steiner, S Scribner & E Souberman, Harvard University Press, Cambridge.

Patrick Griffin is the associate dean of Innovation and Development in the Faculty of Education at the University of Melbourne and the director of the Assessment Research Centre.