Working in and on the system

In 1993, Myron Tribus proclaimed that the job of the manager had changed:

People work in a system. The job of a manager is to work on the system, to improve it, continuously, with their help.
Myron Tribus, 1993, “Quality Management in Education”, Journal for Quality and Participation, Jan–Feb, p. 5. Available at http://www.qla.com.au/Papers/5.

People work in a system

What did Tribus mean?

System

Firstly, we need to understand what he meant by system. Dr Deming defined a system to be:

A system is a network of interdependent components that work together to try to accomplish the aim of the system.
W. Edwards Deming, 1994, The New Economics: For industry, government and education, MIT, Massachusetts, p. 50.

Because Tribus is referring to managers, we understand him to be speaking of organisations. Organisations are systems comprising interdependent components working together towards some aim. A school is a system. A classroom is a system. A school district or region is a system.

A way of thinking about systems, in this context, is to think about how all the elements work together, as a whole, to get things done. How do school policies, procedures, facilities, committees, teams, classrooms, parents, leaders, teachers and students, for example, all work together to achieve the purpose and vision for the school?

Manager

Secondly, we need to understand whom Tribus is referring to in saying the job of the manager has changed.

Management is the ability to organise resources and coordinate the execution of tasks necessary to reach a goal in a timely and cost effective manner.
Kovacs and King, 2015, Improving Learning: A how-to guide for school improvement, QLA, Canberra, p. 387.

Managers therefore are those seeking to reach goals, by working with tasks, resources, systems and processes. Under this definition, it’s hard to identify individuals who are not managers. Everybody in a school is organising resources and coordinating tasks to achieve goals, even students! For this conversation, however, let us limit our discussion to adults. Principals, teachers and support staff are all working with their colleagues and students to achieve the goals of the school and classroom.

Working in and on

Thirdly, Tribus makes the distinction between working in the system and working on the system.

Working in the system is doing the daily work of the system.

For a teacher, this usually means managing the daily routines of learning and teaching in the classroom: planning, programming, instruction, assessment, reporting and so on. For school leaders this includes: meeting with parents, providing support to school staff, attending meetings, managing the budget, responding to emails and phone calls, and so on. This is all the daily work – working in the system.

Working on the system is improvement work.

Working on the system comprises two types of activities: improvement projects and innovation projects. Both involve making changes to the existing system.

Improvement projects focus on making the existing system more efficient and/or effective.

This is achieved by improving how the elements of the system work together, usually by making changes to the processes and methods by which the work is done. Refining the enrolment or reporting process in a school would be examples of improvement projects. Improvement projects build on existing approaches to make the existing system work better.

Innovation projects are about creation of new systems, processes, products and services by the organisation.

In a school context, innovation projects are about new technologies, new programs and system reforms. Replacing parent-teacher interviews with student-led conferences would be an example of an innovation project. Innovation projects are about new approaches that prepare or position the organisation for the future.

Given this, Tribus is telling us that all managers within an organisation have an obligation to contribute to improvement efforts. But there is a subtle twist in the last three words of his proclamation: with their help.

…with their help

Finally, Tribus is explicit that managers should not unilaterally impose changes upon those working in the system. All managers need to be involved in projects that work on the system, but these projects need to engage those working within the system. After all, it is those doing the daily work of the system who know most about how it is done and how it could be improved.

Teachers work in a system

Students know best the barriers to their learning; teachers know best what gets in the way of their teaching.

Students learn in a system

Within a school context, Tribus is saying that all adults need to be engaged working on the system to bring about improvement. They need to be participating in improvement and innovation projects, as project leaders in their own areas or as team members on others’ projects. Students also need to play an active role, contributing to improving their school and classroom.

 

Read more about school improvement in Improving Learning: A how-to guide for school improvement.

Watch a video of high school students in South Australia working on the system.

Watch a video of a year 2 class from Victoria working together to improve the classroom system.

 

Five Whys – Identifying root causes and motivation

The five whys tool was developed within the Toyota Motor Corporation as a means to identify the underlying causes of problems. When root causes are identified and addressed, the problem can be fixed and stay fixed.

Five whys can also be used to explore personal motivations.

The process is very simple: the issue under investigation is identified and noted. Examples: “Why are students disengaged from their learning?”, “Why are we proposing to hold the meeting?”, “Why are enrolments dropping?”, “Why do we come to school?”, “Why are teachers not listening?”.

“Why?” is then asked five times (the number of repetitions is not immutable, but in most cases five repetitions have been found to be sufficient).

In this example (Figure 1), year 8 students consider why they study mathematics.

Figure 1. Five Whys: Why do we study mathematics?
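If you wish to capture a five-whys chain digitally, a record as simple as a list of question and answer pairs is enough. The following is a minimal sketch in Python; the problem and answers are invented for illustration and are not taken from Figure 1.

```python
# A five-whys chain captured as a list of (question, answer) pairs.
# The chain below is invented for illustration.

problem = "Students are disengaged in mathematics lessons."

whys = [
    ("Why are students disengaged?", "They find the lessons boring."),
    ("Why do they find the lessons boring?", "The work feels irrelevant to them."),
    ("Why does the work feel irrelevant?", "They cannot see how it applies to their lives."),
    ("Why can't they see how it applies?", "Tasks are rarely connected to real contexts."),
    ("Why are tasks rarely connected to real contexts?", "Planning rarely draws on authentic examples."),
]

print(f"Problem: {problem}")
for i, (question, answer) in enumerate(whys, start=1):
    print(f"Why #{i}: {question}")
    print(f"   -> {answer}")

# The final answer is treated as the candidate root cause to act upon.
print(f"Candidate root cause: {whys[-1][1]}")
```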

Some years ago a teacher from a secondary school in Victoria told us the following story.

A class was constantly disrupted by the inappropriate behaviour of a student. Instead of responding in the usual manner by removing the child from the classroom, the teacher took the student to one side and applied the five whys tool to investigate the cause of the behaviour.

The student revealed that he found it difficult to make friends with others in the classroom, and that the behaviour was a means of getting attention and connecting with others.

The teacher worked to help the student learn strategies to develop relationships with others. This was a far more productive and long-lasting solution than would have been achieved by reacting to the symptom and removing the student from the classroom.

 

Watch a video clip of a year 2 student exploring why they come to school.

Watch a video clip of a year 4 student exploring the reasons her class comes to school.

Watch a case study video from a year 8 English class that includes the use of Five Whys to explore ‘Why do we study English?’

Purchase Tool Time for Education, which provides details of many improvement tools for schools and classrooms.

Read more about the quality improvement approach in our book IMPROVING LEARNING: A how-to guide for school improvement. 

Learning like a guided walk

I recently had the pleasure of completing a guided walk along the Milford Track – one of the Great Walks in New Zealand. The track passes through some of the most beautiful and pristine wilderness in the world.

During the walk, I was reflecting upon the characteristics of the guided walk that made it so pleasurable. Here are my reflections…

A clear path

The 33.5 miles of track from Glade Wharf to Sandfly Point is clearly laid out and very well maintained. Throughout the walk it was crystal clear where we were meant to go; if we stuck to the track there was little chance of getting lost.

With signposts

The track is clearly and comprehensively sign-posted. Every mile there is a numbered milepost indicating progress.

The track is clear, well maintained and there are regular mileposts.

Periodically there are signs indicating distances or estimated times to key landmarks along the route. These signs, along with the mileposts, enabled each of us to track progress and monitor the pace of our walk.

Regular signposts enabled us to monitor our progress

Other signs warn of potential hazards ahead, including areas of possible flooding or avalanche.

Potential hazards are sign-posted

Taken together, these signposts ensured we knew where we were, how far we had come and still had to go, points of interest, and areas where extra care might be required.

Walking at our own pace

We were encouraged to walk the track at our own pace and to take time to explore the locations we found interesting.

We took time to explore locations of interest to us, this being the Clinton River West Arm

We were also encouraged to explore some of the side tracks that had particular points of interest. This was not compulsory. The side trip to Sutherland Falls, the highest falls in New Zealand, was truly remarkable.

The base of Sutherland Falls, the highest waterfall in New Zealand. The water falls 580 m.

Walking alone, or with others

In all there were about forty of us completing this walk together.

At times I walked alone. I like to do so; it gives me time to think. There were several occasions where it felt like I was the only person on the track. I could see no-one behind or ahead of me, and I felt I had the place to myself.

At times I felt I had the track all to myself

At other times I walked and chatted with my niece, Helen, who had invited me to do the walk with her.

Occasionally, I walked and chatted with small groups of others, some of whom had travelled across the globe to walk this track.

Everyone was free to choose with whom they walked.

A team of professional guides

A team of four guides accompanied us on the walk. They worked extremely well as a team. I was particularly impressed with the way they acknowledged and drew upon their individual strengths while working together to build their individual and collective capability.

Getting to know us

Each of the guides was friendly and welcoming. They each took time to speak with each of us and get to know a bit about us. They genuinely cared about each walker and were keen to ensure everyone had the best experience possible while under their guidance.

As the walk progressed, they learned about our walking style, preferences and limitations. Which of us were the quick walkers, guaranteed to reach each milestone first? Which of us were likely to find parts of the walk particularly challenging?

Mackinnon Memorial at Mackinnon Pass. The climb up and down the pass was challenging for most of us.

Through getting to know us, the guides were able to plan and execute personalised support, where it was required.

Knowing the track

The guides knew the track intimately. Collectively they had walked the track many hundreds of times.

The guides highlighted points of interest and significance along the way. They proved very knowledgeable about the flora and fauna, and took the trouble to point out and help us interpret what we were seeing. We were encouraged to be inquisitive and draw upon their knowledge and experience.

Two of our guides discuss implications of the latest weather forecast

They also knew how we might respond to the track. They knew where the going was easy. They knew where it was most demanding. They knew where people might experience difficulty. They also knew the hazards and had strategies to minimise the associated risks.

Helping us be prepared

Each evening one of the guides briefed us on the outlook for the following day. The briefing informed us of the terrain ahead, distances involved, weather forecast, points of interest and any potential areas requiring particular care. This enabled us to plan ahead and be prepared to meet the challenges that lay before us.

The briefings also celebrated our achievements that day.

Briefings each evening celebrated our daily achievements and prepared us for the challenges of the following day.

Providing support, as required

At all times there was a guide at the front of the group. This guide checked the path was clear of hazards.

There was also a guide bringing up the rear, ensuring nobody was left behind. This guide provided encouragement and practical support to those walkers finding the terrain a challenge.

The other two guides walked within the group. When we encountered a hazard along the track, there was always at least one guide there to help us through safely. This occurred on three occasions: the first when the track was submerged in flood waters and twice where the track had been obliterated by avalanches.

Guides were always on hand to help us through hazardous sections of track, in this case the site of a recent avalanche.

Celebrating Achievement

Having walked more than 33 miles over four days, we arrived at our destination, Milford Sound. Our final briefing was more of a celebration, each of us receiving a certificate during a simple ceremony, then proceeding to enjoy a meal together.

The following morning, we were treated to a brief cruise through the sound before we each set off on the next stages of our respective journeys.

Dawn on Mitre Peak, Milford Sound

Learning can be like this guided walk

Schooling can be like this guided walk.

A clear path

The curriculum provides the learning path. Tools such as the Capacity Matrix and Gantt Chart put curriculum in the hands of the learners and provide signposts to support learners to plan and monitor their progress. Areas where special care may be required can also be highlighted.

Learning at their own pace

Once the path is clear, learners can be encouraged to progress at their own pace.

Learners can also take time to explore areas of particular interest to them, adding these to their capacity matrix and recording details of their learning.

Learning alone, or with others

Students can choose when they prefer to work alone, and when they may wish to work with others. Teams and groupings are by choice, not direction.

A team of teachers

Teachers work together as a team: acknowledging each other’s strengths and working to build their individual and collective capability. They are collectively responsible for the safety and progress of the learners.

Teachers take time to get to know the learners under their guidance: the learners’ aspirations, preferences and limitations.

Teachers know the curriculum intimately. They know where it is straightforward and where many students have difficulty. They encourage curiosity, enquiry and exploration.

Teachers equip learners with skills and tools to plan and be prepared to make the most of the learning opportunities.

Teachers provide personalised support, helping everyone who requires assistance through all sections of the track. They pay particular attention to supporting learners through sections of curriculum that most people find challenging.

Celebrating achievement

Students and teachers acknowledge and celebrate achievements along the way and in ways that are meaningful to everyone.

 

Read more about Capacity Matrices.

Watch a video showing how year 7 students learn in this way.

Watch a video showing year 10 students learning in this way.

Purchase our book, IMPROVING LEARNING: A how-to guide for school improvement, and read more.

PE, PDSA and Student Voice

We have previously discussed the power of the Plan-Do-Study-Act (PDSA) cycle in bringing about collaborative, sustainable improvement. We have also emphasised the importance of allowing students to play a key role – giving students a ‘real voice’ – in improving their school and classroom. In this blog, we share another example. This time, how the PDSA process was used by a teacher and students to improve learning and engagement in their physical education (PE) classroom. (You can also view this as a QLA case study video.)

Teacher, Chris, with her PE class PDSA storyboard

Chris is a leading specialist teacher at a Victorian primary school. She observed the school’s Year 6 students becoming increasingly disengaged during their weekly PE lessons. PE teachers were stressed and student behaviour was worsening. No one was enjoying PE!

Chris decided it was time to set students and teachers to work to improve PE by applying the PDSA cycle.

As we have seen previously:

PDSA is a highly effective improvement approach, based upon a cycle of theory, prediction, observation, and reflection.

It involves applying a structured process to achieve sustainable improvement.

A nine-step PDSA process

This includes:

  • defining the opportunity for improvement by agreeing the purpose and establishing a shared vision of excellence
  • focusing improvement efforts on a system or process (rather than blaming individuals)
  • identifying root causes not symptoms
  • developing and applying a theory for improvement
  • reflecting on the outcomes achieved to agree a new ‘best method’ or further improvement needed.

Here’s how…

Chris applied the PDSA process with her students. They documented a comprehensive storyboard to capture their agreements, the data collected, and to reflect their progress in applying the PDSA process.

Here’s what they did:

  1. Students and teachers discussed and agreed the opportunity for improvement – to improve their PE classes.
  2. They studied the current situation – what did PE currently look like, feel like, and what was going on? They agreed: students were disengaged, disinterested and not putting in their best efforts; some students were disrupting the class, preventing others from enjoying PE; and teachers were frustrated.

    PDSA storyboard extract: brainstorm of the current situation in PE
  3. They collected data to measure the extent of the dissatisfaction with PE. A correlation chart was used to measure student perception. The data revealed low levels of student enjoyment (fun) and learning in the PE classroom.

    PDSA storyboard extract: collecting data using a correlation chart – how much are students enjoying and learning in PE?

  4. Students then brainstormed and themed the drivers and barriers associated with motivation and participation in their PE classroom. They used sticky notes and an affinity diagram to facilitate this. The major barriers they identified were: ‘inappropriate behaviour’, ‘boring classes’, ‘lack of student choice’, ‘the weather’ and ‘wasting time’.

    PDSA storyboard extract: affinity diagram of the barriers to student motivation and participation in PE
  5. These barriers were analysed to agree the root causes using an interrelationship digraph. (They knew that by working on the root causes of their problem they would realise the greatest return on their improvement efforts; a sketch of the arithmetic behind a digraph follows this list.) For the PE students this revealed ‘lack of choice’ as the major or root cause. A lack of choice by students in their PE lessons was seen as a major barrier to participation and motivation. It was impacting upon the other causes and driving the observed problems with behaviour and performance in their classroom.

    PDSA storyboard extract: interrelationship digraph analysing the root causes of a lack of student motivation and participation in PE
  6. A bone diagram was used with students to further explore the current situation, and to agree a vision of excellence for PE – what they wanted PE to be like. The resulting vision showed that students believed the key characteristics of a great PE lesson were: student choice, a clear purpose and process for each session, appropriate behaviour, more minor games, a mix of skills, effective use of time, students understanding what was expected of them, and knowing whether they were improving.

    PDSA storyboard extract: bone diagram agreeing a vision of excellence for PE
  7. They brainstormed possible solutions, which included: ‘kids teaching kids’, students ‘choosing activities’ and ‘writing their own report’, agreeing a student ‘code of behaviour’, clarifying expectations (quality criteria: ‘know what a good throw, jump looks like’), and students ‘making up games’.

    PDSA storyboard extract: brainstorm of possible solutions to improve PE
  8. These solutions helped them to develop a ‘theory for improvement’ comprising the following key strategies:
  • multi-voting to agree the focus of each lesson
  • agreeing the lesson format – flowcharting the teaching and learning process
  • appointing student skill coaches and documenting skill cards to help the coaches do their job
  • students undertaking peer evaluation together with their teacher/coach. They developed capacity matrices for key areas of learning to help them to do this. They also documented quality criteria describing how to execute essential skills with a high degree of excellence (e.g. how to do an overhand throw). Students used the capacity matrices and quality criteria as the basis for reflection and evaluating their progress in PE
  • agreeing a code of behaviour
  • everyone reflecting and giving feedback after each lesson.
PDSA storyboard extract: agreed strategies to improve PE
PE – capacity matrix for gymnastics
PE – quality criteria for an overhand throw
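For readers curious about the mechanics of step 5, here is a minimal sketch in Python of the arithmetic behind an interrelationship digraph. The barriers and arrows are invented for illustration and do not reproduce the class’s actual analysis; the convention applied is that the factor with the most outgoing arrows is a driver (root cause) and the factor with the most incoming arrows is an outcome (symptom).

```python
# arrows[a] lists the barriers that barrier a drives (its outgoing arrows).
# These relationships are invented for illustration only.
arrows = {
    "lack of choice": ["boring classes", "inappropriate behaviour", "wasting time"],
    "boring classes": ["inappropriate behaviour", "wasting time"],
    "inappropriate behaviour": ["wasting time"],
    "wasting time": [],
    "the weather": ["boring classes"],
}

# Count outgoing and incoming arrows for each barrier.
outgoing = {barrier: len(targets) for barrier, targets in arrows.items()}
incoming = {barrier: 0 for barrier in arrows}
for targets in arrows.values():
    for target in targets:
        incoming[target] += 1

# Most outgoing arrows -> driver (root cause); most incoming -> outcome (symptom).
driver = max(outgoing, key=outgoing.get)
outcome = max(incoming, key=incoming.get)
print(f"Likely root cause (driver): {driver}")
print(f"Likely symptom (outcome):   {outcome}")
```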

The outcome?

The PE classes applied the agreed strategies and new processes, and a few weeks later reflected on the effectiveness of the improvements they had made (the ‘study’ phase of the PDSA process).

  • Behaviour and engagement improved. Students were motivated and learning.
  • Students ‘owned’ and were running the PE lessons with minimal guidance from PE teachers! They were responsible for their learning.
  • PE lessons had a productive ‘buzz’! Students were excited. Teachers were happy.

The processes they had developed together were adopted as the new way for PE lessons.

Chris described the PDSA-based collaborative process as having an amazing impact.

Applying the PDSA process, working ‘with the kids’ and not ‘doing to the kids’, brought about significant positive change to PE lessons – improving the way teachers were teaching and students were learning – to great effect!

Learn more…

Download the detailed 9-step PDSA poster.

Purchase IMPROVING LEARNING: A how-to guide for school improvement, to learn more about the quality improvement philosophy and methods.

Purchase our learning and improvement guide: PDSA Improvement Cycle.

Watch a video of PDSA applied to year one writing.

Watch a video of PDSA applied within a multi-age primary classroom.

Watch a video about student teams applying PDSA to school improvement.

Understanding Variation 4 – Stop Tampering!

This is the final in a series of four blog posts to introduce the underpinning concepts related to variation in systems. In the first post we discussed common cause and special cause variation. The second explored the concept of system stability. The third explained system capability. This final post discusses tampering – making changes to systems without understanding variation. Tampering makes things worse! This is an edited extract from our book, Improving Learning.

Stop tampering

Let us begin with a definition of tampering.

Tampering: making changes to a system in the absence of an understanding of the nature and impact of variation affecting the system

The most common forms of tampering are:

  1. overreacting to evidence of special cause variation
  2. overreacting to individual data points that are subject only to common cause variation (usually because these data are deemed to be unacceptable)
  3. chopping the tails of the distribution (working on the individuals at the extreme ends of the distribution without addressing the system itself)
  4. failing to address root causes.

Tampering with a system will not lead to improvement.

Let us look more closely at each of these forms of tampering and their impact.

Tampering by overreacting to special cause variation

Consider the true story of the young teacher who observed a student in the class struggling under the high expectations of her parents. The teacher thought that the student’s parents were placing too much pressure on the child to achieve high grades, which the teacher believed to be beyond the student. The young and inexperienced teacher wrote a letter to the parents suggesting they lower their expectations and lessen the pressure on their daughter. Receipt of this letter did not please the parents, who demanded to see the school Principal. Following this event, the Principal required all correspondence from teachers to parents to come via her office. Faculty heads within the school, not wanting to have teachers in their faculties make the same mistake, required that correspondence come through them before going to the Principal.

The end result was a more cumbersome communication process for everyone, which required more work from more people and introduced additional delays. The principal overreacted to a special cause. A more appropriate response would have been for the principal to work one-on-one with the young teacher to help them learn from the situation.

Making changes to a system in response to an isolated event is nearly always tampering.

A more mundane example of this type of tampering is when a single person reports that they are cold and the thermostat in the room is changed to increase the temperature. This action usually results in others becoming hot and another adjustment being made. If any individual in the room can make changes to the thermostat setting, the temperature will fluctuate wildly, variation will be increased and more people will become uncomfortable, either too hot or too cold.
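A small simulation makes the cost of this behaviour visible. The sketch below, in Python with invented numbers, compares a stable process left alone with one whose setting is adjusted to compensate for every individual reading, in the spirit of Deming’s funnel experiment; overcorrecting in this way roughly doubles the variance.

```python
import random

random.seed(1)

TARGET = 21.0  # desired room temperature in degrees C (invented for illustration)
NOISE = 1.0    # common cause variation in each temperature reading

def run(n, tamper):
    """Simulate n readings; if tamper, shift the setpoint by each reading's error."""
    setpoint = TARGET
    readings = []
    for _ in range(n):
        reading = setpoint + random.gauss(0, NOISE)
        readings.append(reading)
        if tamper:
            # Overreact: compensate for the entire deviation of a single reading.
            setpoint -= reading - TARGET
    return readings

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

left_alone = run(10_000, tamper=False)
tampered = run(10_000, tamper=True)

print(f"Variance when left alone: {variance(left_alone):.2f}")  # close to 1.0
print(f"Variance when tampered:   {variance(tampered):.2f}")    # close to 2.0
```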

Most people can think of other examples where systems or processes have been changed inappropriately in response to isolated cases.

The appropriate response to evidence of special cause variation is to seek to understand the specific causes at play and have situations dealt with on a case-by-case basis, without necessarily changing the system.

Occasionally, investigation of a special cause may reveal a breakthrough. The breakthrough may be so significant that changes to the system are called for in order to capitalise on the possibilities. This is, however, rare and easily identified when it is the case.

Tampering by overreacting to individual data points

Another common form of tampering comes from overreacting to individual data points. Such tampering is very common and very costly.

Figure 1 presents a dot plot of mean Year 3 school results, measured across five key learning areas by NAPLAN in 2009. These results are from an Australian jurisdiction and include government-run schools and non-government schools. For the purpose of the argument that follows, these data are representative of results from any set of schools, at any level, anywhere.

Figure 1. Dot plot of school mean scores

The first thing to notice is that there is variation in the school mean scores. (Normal probability plots suggest the data appear to be normally distributed, as one would expect.) The system is stable and is not subject to special causes (outliers).

The policy response to variation such as this is frequently a form of tampering. Underperforming schools are identified at the lower ends of the distribution and are subjected to expectations of improvement, with punishments and rewards attached.

This response fails to take into account the fact that data points within the natural limits of variation are only subject to common cause variation.

To single out individual schools (classes, students, principals or teachers) fails to address the common causes and fails to improve the system in any way.

When this approach is extended to all low-performing elements, it becomes an even more systematic problem: attempting to chop the tail of the distribution.

Tampering by chopping the tails of the distribution

Working on the individuals performing most poorly in a system is sometimes known as trying to chop the tail of the distribution. This is also tampering.

There are three main reasons why this is bad policy, all of which have to do with not understanding the nature and impact of variation within the system.

Firstly, it is not uncommon to base interventions on mean scores. Yet it is well known within the education community that there is much greater variation within schools than there is between schools. Similarly, there is much greater variation within classes than between classes within the same school. Averages mask variation.

Consider two schools. School A (Figure 2) is performing at the lower end of the distribution for reading scores — with a mean reading score of approximately 390. School B (Figure 3) has a mean reading score approximately 30 points higher.

Figure 2. Histogram of Year 3 student reading scores (School A)
Figure 3. Histogram of Year 3 student reading scores (School B)

The proportion of students in each school that is performing below any defined acceptable level is fairly similar. School A, for example, has 12 students with results below 350. School B has seven. In some systems, resources are allocated based on mean scores. Those with mean scores beyond a defined threshold are entitled to resources not available to those with mean scores within certain limits. If School A and School B were in such a system and the resourcing threshold was set at 400, for example, School B could be denied resources made available to School A, simply because its mean score is above some defined cut-off point.

Where schools or classes are identified to be in need of intervention based on mean scores, the nature and impact of the variation behind these mean scores is masked and ignored. If the 12 students in School A receive support, why is it denied to the seven equally deserving students in School B?
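The arithmetic behind this inequity is easy to demonstrate. The sketch below, in Python with simulated scores that loosely echo Figures 2 and 3 (the real distributions are not reproduced), shows how a threshold applied to school means can fund one school while denying another with a comparable number of students in need.

```python
import random

random.seed(42)

# Simulated reading scores for two hypothetical schools (illustration only).
school_a = [random.gauss(390, 50) for _ in range(100)]  # lower mean score
school_b = [random.gauss(420, 50) for _ in range(100)]  # mean roughly 30 points higher

MEAN_THRESHOLD = 400   # resourcing cut-off applied to the school mean
SUPPORT_LEVEL = 350    # score below which an individual student needs support

for name, scores in (("School A", school_a), ("School B", school_b)):
    mean = sum(scores) / len(scores)
    students_in_need = sum(score < SUPPORT_LEVEL for score in scores)
    funded = "funded" if mean < MEAN_THRESHOLD else "not funded"
    print(f"{name}: mean {mean:.0f} ({funded}), "
          f"{students_in_need} students below {SUPPORT_LEVEL}")
```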

Secondly, the distribution of these results fails to show evidence of special cause variation. The variation that is observed among the performance of these schools is caused by a myriad of common causes that affect all schools in the system.

Singling out underperforming schools for special treatment each year does nothing to address the causes common to every school in the system, and fails to improve the system as a whole.

Even if the intervention is successful for the selected schools, the common causal system will ensure that, in time, the distribution is restored, with schools once again occupying similar places at the lower end of the curve. The system will not be improved by this approach.

Thirdly, this approach consumes scarce resources that could be used to examine the cause and effect relationships operating within the system as a whole and to take action to improve the performance of the system as a whole.

In education, working on the individuals performing most poorly in a system is a disturbingly common approach to improvement. It never works. A near identical strategy is used within classes to identify students who require remediation. The “bottom” — underachieving — kids are given a special program; they are singled out. Sometimes the “top” — gifted and talented — kids are also singled out for an extension program.

This is not to say that we should not intervene when a school is struggling or when a student is falling behind. Nor are we suggesting that students and schools who are progressing well should not be challenged to achieve even more. It is appropriate to provide this support and extension to those who need it. The problem is that doing so does not improve the system. Such actions, when they become as entrenched as they currently are, are merely part of the current system.

It should be noted that focussing upon poor performers also shifts the blame away from those responsible for the system as a whole and onto the poor performers.

The mantra becomes one of “if only we could fix these schools/students/families”. The responsibility lies not with the poor performers, but with those responsible for managing the system: senior leaders and administrators. It is a convenient, but costly diversion to shift the blame in this way.

If targeting the tails of the distribution is the primary strategy for improvement, it is tampering and it will fail. Unless action is taken to improve the system as a whole, the data will be the same again next year, only the names will have changed. Over time, targeting the tails of the distribution also increases the variation in the system.

This sort of tampering is not restricted to schools and school systems. It is very common, and equally ineffective, in corporate and government organisations. It is quite common that the top performers are rewarded with large bonuses, while poor performers are identified and fired or transferred. Sales teams compete against each other for reward and to avoid humiliation. Such approaches do not improve the system; they tamper with it.

Tampering by failing to address root causes

Tampering commonly occurs when systems or processes are changed in response to a common cause of variation that is not a root cause or dominant driver of performance.

People tamper with a system when they make changes to it in response to symptoms rather than causes.

It is easy to miss root causes by failing to ask those who really know. Who can identify the biggest barriers to learning in a classroom? The students.

Teachers can be quick to identify student work ethic as a problem in high schools. It is a rare teacher who identifies boring and meaningless lessons as a possible cause. Work ethic is a symptom, not a cause. It is not productive to tackle the issue of work ethic directly. One must find the causes of good and poor work ethics and address these in order to bring about a change in behaviour.

There has been a concerted effort in recent years in Australia to decrease class sizes, particularly in primary schools. Teachers are pleased because it appears to reduce their workload and provides more time to attend to each student. Students like it because they may receive more teacher attention. Parents are pleased because they expect their child to receive more individualised attention. Unions are happy because it means more teachers and therefore more members. Politicians back it because parents, teachers and unions love it. Unfortunately, the evidence indicates that these changes in class size have very little impact on student learning. (See John Hattie, 2009, Visible Learning, Routledge, London and New York.) This policy is an example of tampering on a grand and expensive scale. Class size is not a root cause of performance in student learning.

Managers need to study the cause and effect relationships within their system and be confident that they are addressing the true root causes. Symptoms are not causes.

Every time changes are made to a system in the absence of an understanding of the cause and effect relationships affecting that system, it is tampering.

Tampering will not improve the system; it has the opposite effect.

Read about four types of measures, and why you need them.

Read about common cause and special cause variation.

Read about system stability.

Read about system capability.

Read more in our comprehensive resource: IMPROVING LEARNING – A how-to guide for school improvement.

Purchase our learning and improvement guide Using data to improve.

 

Understanding Variation 3 – System Capability

This is the third in a series of four blog posts to introduce the underpinning concepts related to variation in systems. In the first post we discussed common cause and special cause variation. The second explored the concept of system stability. In this post, we explore whether a system is capable of consistently meeting expectations. This is an edited extract from our book, Improving Learning.

System Capability

Just because a system is stable does not mean that it is producing satisfactory results. For example, if a school demonstrates average growth in spelling of about 70 points from Years 3 to 5, is this acceptable? Should parents be satisfied with school scores that range from 350 to 500? These are questions of system capability.

Capability relates to the degree to which a system consistently delivers results that are acceptable — within specification, and thus within acceptable limits of variation.

Capability: the degree to which a system consistently delivers results that are within acceptable limits of variation.

Note that stability relates to the natural variation that is exhibited by a system; capability relates to the acceptable limits of variation for a system. Stability is defined by system performance. Capability is defined by stakeholder needs and expectations.

It is not uncommon to find systems that are both stable and incapable; systems that consistently and predictably produce results that are beyond acceptable limits of variation and are therefore unsatisfactory. No doubt, you can think of many examples.

Cries for school systems to “raise the bar” or “close the gap” are evidence that stakeholders believe school systems to be incapable (in this statistical sense) because the results they are producing are not within acceptable limits of variation. The results are totally predictable and the system is stable, but the results are unsatisfactory: the system is incapable.

In Australia, NAPLAN defines standards for student performance. National minimum standards are defined to reflect a “basic level of knowledge and understanding needed to function in that year level”.

Proficiency standards, which are set higher than the national minimum standards, “refer to what is expected of a student at the year level”. Depending on the year level and learning area, between two per cent and 14 per cent of students fail to reach national minimum standards. By definition, then, the Australian education system is incapable. It fails to consistently produce performance that is within acceptable limits of variation, because a known proportion of students fails to meet minimum standards, let alone perform at or better than the expected proficiency.

Figure 1 shows the spelling results for 161 Year 9 students at an Australian high school, as measured by NAPLAN. These results fall between the lower and upper control limits, which have been found to be 297 and 794 respectively. Careful analysis failed to reveal evidence of special cause variation. This system appears to be stable. The national minimum standard for Year 9 students is 478. In this set of data, there are 33 students performing below this standard. Thus we can conclude that the system which produced these spelling results is stable but incapable.

Figure 1. Histogram of Year 9 student NAPLAN scores in spelling, indicating a system that is stable but incapable.
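To make the stability and capability checks concrete, here is a minimal sketch in Python. It uses simulated scores with roughly the mean and spread implied by the limits quoted above; the school’s actual results are not reproduced.

```python
import random
import statistics

random.seed(0)

# Simulated spelling scores standing in for the 161 Year 9 results described above.
scores = [random.gauss(545, 83) for _ in range(161)]

mean = statistics.mean(scores)
sd = statistics.stdev(scores)

# Natural limits of variation: mean plus or minus three standard deviations.
lower_limit = mean - 3 * sd
upper_limit = mean + 3 * sd

# Stability check: points beyond the natural limits suggest special causes.
special_causes = [s for s in scores if s < lower_limit or s > upper_limit]

# Capability check: compare results against what stakeholders require,
# not against the system's own natural limits.
NATIONAL_MINIMUM = 478
below_standard = sum(s < NATIONAL_MINIMUM for s in scores)

print(f"Natural limits: {lower_limit:.0f} to {upper_limit:.0f}")
print(f"Possible special causes: {len(special_causes)}")
print(f"Students below the national minimum standard: {below_standard}")
```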

 

Taking effective action: Responding appropriately to system variation.

With an understanding of the concepts of common cause and special cause variation, responding to system data becomes more effective. The flowchart in Figure 2 summarises an appropriate response to system data.

Figure 2. Flowchart: responding appropriately to system variation
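In code form, the decision logic reads roughly as follows. This is a hedged reconstruction from the concepts in this series, not a transcription of Figure 2.

```python
def respond_to_system_data(has_special_causes: bool, is_capable: bool) -> str:
    """A rough sketch of an appropriate response to system data."""
    if has_special_causes:
        # Deal with special causes case by case; do not redesign the
        # system in response to isolated events.
        return "Investigate and address each special cause individually."
    if not is_capable:
        # Stable but incapable: only improving the system itself will help.
        return "Improve the system: address the common (root) causes."
    # Stable and capable: leave it alone and keep monitoring.
    return "Monitor the system; avoid tampering."

print(respond_to_system_data(has_special_causes=False, is_capable=False))
```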

In the next post, we describe what can happen when we don’t respond appropriately to variation – tampering! Making things worse.

Read about four types of measures, and why you need them.

Read about common cause and special cause variation.

Read about system stability.

Read more in our comprehensive resource: IMPROVING LEARNING – A how-to guide for school improvement.

Purchase our learning and improvement guide Using data to improve.

Understanding Variation 2 – System stability

This is the second in a series of four blog posts to introduce the underpinning concepts related to variation in systems. In the first post we discussed common cause and special cause variation. In this post, we explain that a stable system is predictable. This is an edited extract from our book, Improving Learning.

System stability

System stability relates to the degree to which the performance of any system is predictable — that the next data point will fall randomly within the natural limits of variation.

A formal definition can provide a useful starting point for exploring this important concept.

A system is said to be stable when the observations fall randomly within the natural limits of variation for that system and conform to a defined distribution, frequently a normal distribution.

All systems exhibit variation in all four types of measures: results, perceptions, processes, and inputs.

Variation within groups

Consider, for example, the student results in Figure 1, which show the reading scores for 103 students attending an Australian high school. Each student was tested as part of NAPLAN when they were in Year 7.

Figure 1. Histogram of Year 7 individual student NAPLAN reading scores from an Australian high school.

The histogram shows the variation in student performance, from which we can see:

  • the mean score is approximately 510 points; and
  • the data seem to be roughly normally distributed, as there is a stronger cluster of scores around the mean score, and the curve appears roughly bell-shaped.

A stable system produces predictable results within the natural limits of variation for that system.

If we use the histogram to study the variation in student NAPLAN results, we can assume that, if there were additional students in that group, their results would very likely fall within the distribution shown.

Furthermore, if nothing is done to change a stable system, it is rational to predict that future NAPLAN reading performances will be similar, both in the mean or average performance and in the range of variation evident in the results.

Figure 2. Histogram of Year 7 individual student NAPLAN grammar scores from an Australian high school.

The histogram in Figure 2 shows the grammar scores of the same group of Year 7 students. Here the mean score is about 500 points.

Notice here the presence of a single student with a score of approximately 100. This data point appears to be an outlier: it is noticeably different to the other data points.

One could reasonably assume that this data point represents something out of the ordinary, that the causes that led to this result are different to those experienced by the remainder of the system.

Given that this data point is so different to the others, investigation is called for, and is likely to reveal a specific reason, an assignable cause. Where specific causes can be identified, they are called special causes or assignable causes. In this instance, investigation revealed that this student had scored about 200 points below expectation due to illness on the day of the test.

These examples of system stability, within groups, relate to measures of students’ learning at a particular point in time.

Variation between groups

The variation that is evident between groups is often of great interest. For example, we may be interested in variation between classes of the same grade or year, or between schools in different districts or states.

In these instances, the focus is no longer on variation within a set of data points, but upon differences in variation that is evident between groups (multiple sets) of data points.

Consider, for example, the sets of histograms presented in Figures 3 and 4. Both come from the same primary school, and both represent the growth in students’ scores in key learning areas, as measured by NAPLAN, over the two-year period from Year 3 to Year 5.

Figure 3. Histograms of student growth in literacy and numeracy, Years 3 to 5, 2008–2010, NAPLAN individual student scores from an Australian primary school.
Figure 4. Histograms of student growth in literacy and numeracy, Years 3 to 5, 2009–2011, NAPLAN individual student scores from an Australian primary school.

The first group of students (Figure 3) was initially tested in 2008, when the students were in Year 3. This same group of students was tested again in 2010, when they were in Year 5. The histograms show the difference, or growth, in scores over that two-year period.

The second group of students (Figure 4) is a year younger. These students were tested in 2009, when they were in Year 3, and again in 2011, when they were in Year 5. (Due to differences in the testing of writing between 2009 and 2011, growth in this area was not evaluated for the second group).

Has one group of students performed better than the other? These data look very similar. Analysis fails to show any significant difference between these two groups of students in either the mean score or variation for each of the four learning areas.

With two different groups of students, this school produced essentially the same results in terms of student growth over the two two-year periods. The data are practically the same for each group, only the names of the students are different.

Consider the scores for grammar and punctuation, for example. For both groups of students, the system produced a mean score of about 95 and a range from approximately -70 to 250. It appears the system produced consistent results with a mean growth of approximately 95 points and natural limits of variation plus or minus approximately 160. The story is similar for the other three learning areas.

We can reasonably predict that, unless something changes significantly at this school, the next group of students will again produce almost identical results.

The system is thus said to be stable — the points fall predictably between the natural limits of variation for the system.

Variation over time

Outliers, trends and unusual observations in time-series data can indicate the presence of special cause variation. Where these exist, the system is not stable.

So far we have used the histogram to help us to study variation in a system. We can also study system variation by plotting data as a time series using a run chart (line graph) or control chart as in Figure 5. Here the class total of correct spelling words per week is plotted over weekly intervals.

Figure 5. Control Chart of weekly class spelling total.

Notice the dips in the number of correct words at weeks nine and twenty-nine. One could reasonably seek explanations and learn that it was, for example, the week of the school camp, or an outbreak of influenza that led to student absences, resulting in lower numbers of correct words. These would be examples of special cause variation.

Instances of special cause variation in time series data can be revealed by patterns or trends in the data, including:

  • a series of consecutive data points that sequentially improve or deteriorate; and
  • an uncommonly high number of data points above or below the average.

If there is an unexplained pattern in the data, this is evidence of special cause variation and investigation is justified. Such a system is said to be unstable.
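These checks are mechanical enough to automate. The sketch below, in Python, flags the three patterns just described: a point beyond the natural limits, a long run on one side of the average, and a sustained trend. The thresholds used (three standard deviations, runs of eight, trends of six) are common control-chart conventions rather than rules prescribed here, and the weekly totals are invented to echo the shape of Figure 5.

```python
import statistics

def special_cause_signals(data, run_length=8, trend_length=6):
    """Flag simple special cause patterns in a time series."""
    mean = statistics.mean(data)
    sd = statistics.stdev(data)
    signals = []

    # Rule 1: a point beyond the natural limits of variation (3 sigma).
    for i, x in enumerate(data):
        if abs(x - mean) > 3 * sd:
            signals.append((i, "point beyond the natural limits"))

    # Rule 2: a long run of consecutive points on one side of the average.
    for i in range(len(data) - run_length + 1):
        window = data[i:i + run_length]
        if all(x > mean for x in window) or all(x < mean for x in window):
            signals.append((i, f"run of {run_length} points on one side of the mean"))
            break

    # Rule 3: consecutive points that sequentially improve or deteriorate.
    for i in range(len(data) - trend_length + 1):
        w = data[i:i + trend_length]
        if all(a < b for a, b in zip(w, w[1:])) or all(a > b for a, b in zip(w, w[1:])):
            signals.append((i, f"trend of {trend_length} consecutive points"))
            break

    return signals

# Invented weekly class spelling totals, with a dip echoing Figure 5.
weekly_totals = [412, 405, 418, 410, 409, 415, 407, 411, 320, 414,
                 406, 413, 409, 417, 408, 412, 405, 410, 411, 409]
print(special_cause_signals(weekly_totals))  # flags the dip in week nine
```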

If special cause variation is absent, or the presence of any special causes has been explained, future performance can be predicted with confidence: it will fall within the natural limits of variation for that process. Such a system is said to be stable.

Where unusual data points or trends have not been explained, any predictions of future performance will be less reliable. In such cases, the system is said to be unstable. Confident prediction is not possible for an unstable system.

In the next post in this series, we explain the notion of system capability. These concepts, stability and capability, along with an understanding of common cause and special cause variation, discussed in the previous post, are fundamental to preventing tampering with systems. Tampering is a common practice in school education systems (and elsewhere), and usually makes things worse!

 

Read about four types of measures, and why you need them.

Read about common cause and special cause variation.

Read more in our comprehensive resource: IMPROVING LEARNING – A how-to guide for school improvement.

Purchase our learning and improvement guide Using data to improve.

 

Understanding Variation 1 – common and special cause variation

This is the first in a series of four blog posts to introduce the underpinning concepts related to variation in systems. In this post we discuss common cause and special cause variation.

These concepts provide a foundation for understanding and responding to variation in systems. In particular these concepts are fundamental to understanding the notions of stability, capability and avoiding tampering; each of which will be discussed in subsequent posts.

We also discuss simple tools that allow us to ‘see the variation’ in systems and processes. Understanding and applying these concepts and tools helps us to respond appropriately to data to continually improve, rather than risk making things worse!

Variation is everywhere

Variation is evident in all systems. No two real things are identical.

Consider, for example, the standard AA size battery. AA size batteries are 50 mm long and 14 mm in diameter, as defined by international standards. They all look the same and are perfectly interchangeable. Yet each individual battery cannot be exactly 50 mm long and exactly 14 mm in diameter. Most people don’t care that one battery is 50.013 mm long and another is 49.957 mm long; both will fit perfectly well in their flashlight or remote control. To detect these differences – the variation – precise measuring equipment is required.

A factory will produce batteries with a length that has a calculable mean, an observable spread, and a clustering of lengths around the mean, all determined by the manufacturing process.

While two observed things are never identical, we can think of them as being identical when our measurement system is unable to detect difference, or when any differences are of no practical significance.

Sometimes variation is more evident. The average height of an Australian 13-year-old boy is approximately 156 cm. Very few 13-year-old boys are precisely 156 cm tall, but nearly all will be within about three cm of this average height. This phenomenon is known as the natural limits of variation. In this case, a typical 13-year-old boy’s height falls naturally within a range of heights centred at 156 cm, varying up to about three cm above and below this value.

All processes and systems exhibit natural variation. In both these examples, a battery’s dimensions and the height of a 13-year-old boy, the characteristic being measured is different from observation to observation. Yet, as a set of observations they conform to a defined distribution, in this case the normal distribution.

The factors that cause this variation, from observation to observation, come from the system. In the case of AA batteries, it is the system of manufacturing; variation in the height of 13-year-old boys comes from genetic, societal and environmental factors. In both cases, it is the system that produces natural variation.

In a similar manner, systems produce variation in perceptions and performance. Figure 1 shows the perceptions of teachers in a school regarding the degree of engagement of their students. The variation in perceptions is evident.

Figure 1. Consensogram of perceptions of student engagement.

It is the system that produces natural variation. To understand this variation, it is necessary to understand the system. No examination of individual examples can explain the system.

Common cause variation

Variation observed in any system comes from diverse and multitudinous possible causes.

The fishbone diagram can be used to document the many possible causes of variation. The fishbone diagram in Figure 2, for example, lists possible causes of variation in student achievement.

Figure 2. Fishbone Diagram of Possible Causes of Variation in Student Achievement

Each of the causes affects every student to a greater or lesser degree. Students respond to each cause in different ways, so the impact is different for each student. For example, some students may be sensitive to background noise while others are not. Some students may struggle to balance family responsibilities, work and school, while for others this is not an issue. All students will be affected to some degree by their prior learning and their attitude towards the subject matter. The key point, however, is that every student may be affected to some degree by every cause. It is how all of the causes come together for each individual student that results in the variation in student achievement observed across the class. Causes that affect every observation, to greater or lesser degrees, are called common causes.

Common cause variation is the variation inherent in a system. It is always present. It is the net result of many influences, most of which are unknown.

In general, it is the combination of the common causes of variation coming together uniquely for each observation that results in the distribution in the set of data points. That is, the set of observations conform to a defined distribution. Not surprisingly, this distribution is frequently a normal distribution.
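This coming together of many small causes can be demonstrated with a short simulation. In the Python sketch below, each of thirty invented common causes gives every simulated student’s score a small random push; summing them produces the familiar bell-shaped pile-up around the mean.

```python
import random

random.seed(7)

N_STUDENTS = 1000
N_CAUSES = 30  # many small common causes, each nudging a score up or down

def simulated_score(base=500.0, effect=10.0):
    """Each common cause contributes a small random push; their sum is the score."""
    return base + sum(random.uniform(-effect, effect) for _ in range(N_CAUSES))

scores = [simulated_score() for _ in range(N_STUDENTS)]

# Crude text histogram: the bell shape emerges from summing many small
# influences (the central limit theorem at work).
low, high, bins = 350, 650, 12
width = (high - low) / bins
for b in range(bins):
    lo = low + b * width
    hi = lo + width
    count = sum(lo <= s < hi for s in scores)
    print(f"{lo:4.0f}-{hi:4.0f} | {'#' * (count // 10)}")
```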

Figure 3 shows a histogram or frequency chart of the variation in year 7 students’ reading test scores from an Australian school, as measured by a national standard test. You can see the natural spread of variation in this measure of the students’ reading performance.

Figure 3. Histogram of Reading Results Year 7

For any single data point — for example, a single student’s test result — it is not possible to identify any specific cause that led to the result achieved. Importantly, it is not worth trying to identify any such single cause.

The system of common causes determines the behaviour and performance of the system. These causes include the actions and interactions among the elements of the system, as well as features of the structure of the system and those of the containing systems.

Special cause variation

The other type of variation is special cause variation.

When a cause can be identified as having an outstanding and isolated effect — such as a student being late to school on the morning of an assessment — this is called special cause variation or assignable cause variation. A specific reason can be assigned to the observed variation.

Special cause variation is variation that is unusual and unpredictable. It can be the result of a unique event or circumstance, which can be attributed to some knowable influence. It is also known as assignable cause variation.

Special causes of variation are identifiable events or situations that produce specific results that are out of the ordinary. These out of the ordinary results may be single points of data beyond the natural limits of variation of the system, or they may be observable patterns or trends in the data.
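
In practice, the natural limits of a stable system are conventionally taken as the mean plus or minus three standard deviations (Shewhart’s control limits). The Python sketch below, using invented scores, flags any observation beyond those limits as a candidate special cause; a proper control chart estimates the limits more carefully, so treat this as a first approximation only.

    import statistics

    # Invented test scores; one unusually low result lurks in the data.
    scores = [72, 68, 75, 70, 74, 69, 71, 73, 67, 76, 70, 31, 72, 74]

    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    lower, upper = mean - 3 * sd, mean + 3 * sd

    print(f"Natural limits: {lower:.1f} to {upper:.1f}")
    for score in scores:
        if not lower <= score <= upper:
            print(f"Score {score} is outside the natural limits: investigate.")

With these invented numbers, the score of 31 falls below the lower natural limit and is flagged for investigation.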

Figure 4 shows a histogram or frequency chart of the variation in year 7 students’ grammar test scores from the same Australian school, as measured by a national standardised test. You can see the natural spread of variation in this measure of the students’ grammar achievement. You can also see one student’s result sitting significantly below the vast majority of scores. That single observation suggests a special cause of variation and is worthy of investigation.

Figure 4. Histogram of Grammar Results Year 7

Where there is evidence of special cause variation in a set of data, it is always worth investigating. The impact of a special cause may be detrimental, in which case it may be appropriate to seek to prevent occurrence of this cause within the system. The impact of a special cause may also be beneficial, in which case it may be worth pursuing how this cause can be harnessed to improve system performance.

Special causes provide opportunities to learn. The lesson might be as mundane as “that batch of electrolyte was contaminated”, or it might be as exciting as the discovery of penicillin, or a new strategy for learning.

In summary:

  • Variation is evident in all observations – from physical dimensions to student behaviour and academic achievement.
  • Most observed variation is due to common causes – those causes that affect every observation, to differing degrees.
  • Sometimes there are specific and identifiable causes of variation – these are known as special causes.

These two key concepts – common and special cause variation – are fundamental to responding to system variation appropriately. An understanding of these concepts is critical to effecting demonstrable and sustainable improvement. They underpin an understanding of system stability, capability and tampering, each of which will be discussed in future blog posts. Where these concepts are not understood, attempts to improve performance frequently make things worse.

Download a Fishbone Diagram template.

Read about Four types of measures, and why you need them.

Read more in our comprehensive resource, IMPROVING LEARNING: A how-to guide for school improvement.

Purchase our learning and improvement guide: Using data to improve.

What is your school’s learning theory?

What is your school’s theory of teaching and learning?

Some schools waste time trying to control and manage the actions and behaviours of individuals. They would do better examining the underpinning theory, systems and processes driving those actions and behaviours. Reflecting deeply on, and defining (making explicit), the beliefs upon which current approaches to learning and teaching are based can lead to greater focus, alignment and return on efforts to improve.

Fundamental to improving learning is to agree (define) the theory guiding our teaching and learning.

The following anthropological model, adapted from the work of Marvin Weisbord, can help us understand why this is so. It describes a hierarchy of influences on organisational behaviour. The model is consistent with Deming’s teachings on how systems drive performance and behaviour, and on the need to develop theory to drive improvement.

Weisbord’s anthropological model illustrating an organisational hierarchy of theory driving action and behaviour

Weisbord’s model illustrates the relationship between beliefs, philosophy (theory), systems, processes, choices and action. An organisation’s systems and processes reflect and reinforce its values, beliefs and philosophy. These systems and structures dictate the processes and methods, and shape the dilemmas and choices faced by individuals within the organisation. The choices made by individuals, in turn, produce the actions and behaviours we observe.

Let’s look at an example to illustrate. Say we believe students are inherently lazy, that they have little desire to improve, and need to be motivated to learn. We will then develop systems and processes in our school and classrooms in an attempt to extrinsically motivate them. Our systems and processes will usually be based upon incentives and rewards, fear and punishment. If, however, we believe we are born with an innate desire to learn and to better ourselves, and that the motivation to learn comes from within, then we will design very different systems of learning in our classrooms. These systems usually focus upon building ownership of learning, and working with students to identify and remove the barriers to their intrinsic motivation and learning.

Defining a theory and designing systems and processes can be a deliberate and thoughtful action or it can occur through natural evolution – the choice is ours.

We can make a conscious choice to define and make explicit our values and beliefs regarding teaching and learning. An operational definition is used to achieve and document a shared understanding of the precise meaning of a concept. Operational definitions provide clarity to groups of individuals for the purposes of discussion and action.

It follows that once we have defined our theory of teaching and learning, we can design structures, systems, processes and methods that are aligned to it and naturally promote the actions and behaviours we desire.

Of course, we draw upon evidence-based research to craft our theory. We can then work together over time testing, reinforming and reaffirming this theory, and improving systems and processes to produce the performance and behavioural outcomes we wish to see.

How to…

Our work with schools in defining a learning and teaching philosophy has typically followed the process summarised in the flowchart below. All staff are invited to be involved in agreeing the philosophy, which usually takes place through one or more workshops.

Flowchart of a process to create a school learning theory

Step 1.  Agree a research or evidence-base to inform the philosophy

The first step is to agree and draw upon a research or evidence-base to inform the philosophy. Education systems in Australia have, over time, adopted different pedagogical models. Schools have adopted many different models, all purporting to reflect the latest research and providing the theory necessary to guide excellent teaching practice. The Quality Teaching model, the National School Improvement Tool, the e5 Instructional Model, and the International Baccalaureate are examples of pedagogical models currently in use. Explore the preferred model/s with all staff before defining your philosophy, to agree which one or more resonate and align with the needs of your learning community. Of course, if there is a model that adequately describes the philosophy of teaching and learning that your school community wishes to adopt, the job is made easier – just agree to use it!

Step 2.  Brainstorm ideas

Something we tend to overlook is the ‘prior knowledge’ of our teachers. Every educator will have developed a theory – based upon their understanding and experience – as to the greatest influences on learning in their classroom. Ask staff also to reflect upon their own teaching and learning values and beliefs. We have found it helpful to express the learning and teaching philosophy as a set of documented principles.

To define the philosophy, ask staff to brainstorm their key learning and teaching beliefs, concepts and principles. This can be achieved by every staff member providing their input to the process by writing down their individual ideas as statements on sticky notes – one statement per sticky note.

Step 3.  Collate the ideas using an Affinity Diagram

The staff input can then be collated by creating an Affinity Diagram with the sticky notes. Headings are applied to the Affinity Diagram reflecting the agreed major themes (as in the figure below).

Affinity Diagram – theming ideas for a learning theory
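
If a digital record of the affinity diagram is useful, the collation step amounts to grouping statements under their agreed themes. A minimal Python sketch follows; the statements and themes are invented for illustration.

    from collections import defaultdict

    # Each sticky note is a statement; the team assigns it a theme.
    # All statements and themes below are invented for illustration.
    notes = [
        ("Students learn best when they set their own goals", "Ownership"),
        ("Timely feedback accelerates learning", "Feedback"),
        ("Learners should track their own progress", "Ownership"),
        ("Feedback should feed forward into next steps", "Feedback"),
    ]

    themes = defaultdict(list)
    for statement, theme in notes:
        themes[theme].append(statement)

    for theme, statements in themes.items():
        print(f"{theme} ({len(statements)} notes)")
        for statement in statements:
            print(f"  - {statement}")

Printing the groups gives a simple record of the themes and the number of notes behind each.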

Step 4.  Agree theory statements

These themes can be documented as a set of agreed statements (principles). For example, the following are the principles of learning and teaching agreed by Knox Park Primary School in Melbourne, Victoria.

Knox Park Primary School, Victoria Learning and Teaching Philosophy

Here is another example of an agreed learning and teaching philosophy: the Learning Model developed by the Leander Independent School District in Texas, USA.

Leander Independent School District, Texas, USA Learning Model

The theory as a foundation for continual improvement

The school’s theory of learning and teaching, or principles, is then used as an ongoing reference to develop, review and continually improve consistency in policy and practice across the school. Each principle is subject to ongoing exploration through reflection and dialogue to develop deeper and shared understanding, and to inform the development of agreed learning systems and processes – the school’s pedagogical framework.

Naturally, the philosophy is dynamic. Like any theory or hypothesis, to be relevant and effective in an ongoing way, it will need to be regularly reviewed, reaffirmed or reinformed by further research and our experiences of applying it over time.

A final note

John Hattie’s research (Teachers Make a Difference: What is the research evidence? Australian Council for Educational Research, October 2003) revealed greater variation between classrooms within an Australian school than between Australian schools. Defining the theory that will guide teaching and learning across your school is a way to reduce this variation.

To learn more…

Purchase a copy of Improving Learning: A how-to guide for school improvement.

Plan-Do-Study-Act

Creating a theory for improvement

Continual improvement is derived, in large measure, from the efforts of individuals and teams working together to bring about improvement. For example, many schools have introduced professional learning teams (PLTs). PLTs usually involve teams of teachers working together on agreed improvement projects aimed at improving classroom learning and teaching practice.

Sadly, ‘how’ we work on these improvement efforts is frequently left to chance. The result is valuable time and effort wasted as sub-optimal solutions are derived. So how can we make the most of these rich opportunities to improve?

The answer lies in applying a scientific approach to our improvement efforts – a structured learning and improvement process. Many know this as action learning or action research. We call it PDSA: the Plan-Do-Study-Act improvement cycle.

The history of PDSA

The PDSA cycle is attributed to the work of Walter Shewhart, a statistician working with the Bell Telephone Laboratories in New York during the 1930s (although its roots can be traced back further, to John Dewey’s profound writings on education in the late 1800s).

Shewhart was the first to conceptualise the three steps of manufacturing – specification, production and inspection – as a circle, rather than a straight line. He observed that when seeking to control or improve quality, there must be reflection upon the outcomes achieved (inspection) and adjustments made to the specifications and production process.

He proposed the move from this:

Figure 1. The linear process of specification, production and inspection

To this:

Figure 2. The cycle of specification, production and inspection

You may notice similarities with the traditional teaching methods of plan, teach, and assess.

Figure 3. The linear approach to Plan, Teach, Assess

In recent times there has been a focus in schools on “assessment for learning” (in contrast to “assessment of learning”). It parallels Shewhart’s observation of the need to close the loop in manufacturing.

Shewhart went on to identify the three steps of manufacturing as corresponding to the three steps of the dynamic scientific process of acquiring knowledge: making a hypothesis (or theory), carrying out an experiment, and testing the hypothesis (see Figure 4).

Figure 4. The three steps of acquiring knowledge

Source: Adapted from Walter Shewhart, 1986, Statistical Method from the Viewpoint of Quality Control, Dover, New York, p. 45.

With these thoughts, Shewhart planted the seeds for W. Edwards Deming to develop the Plan-Do-Check-Act cycle, which Deming published as the Shewhart cycle in 1982. Deming taught the cycle to the Japanese from 1950; they adopted it and renamed it the Deming Cycle.

The PDSA Cycle

Deming published the cycle in The New Economics in 1993 as the Plan–Do–Study–Act (PDSA) cycle (see Figure 5). He changed “check” to “study” to more accurately describe the action taken during this step. PDSA is the name by which the cycle has become widely known in recent times.

Figure 5. The Deming Cycle

Source: W. Edwards Deming, 1993, The New Economics: For industry, government, education, MIT, Cambridge.

The Plan-Do-Study-Act cycle is a structured process for improvement based on a cycle of theory, prediction, observation, and reflection.
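
One way to appreciate the discipline the cycle imposes is to see it as a loop in which every change is paired with a prediction, and the study step compares the outcome with that prediction. The Python sketch below is purely schematic; the function names and the toy scenario are ours, not part of Deming’s formulation.

    # Schematic PDSA loop. Each cycle: plan a change and predict its
    # outcome, trial the change, study outcome against prediction, then
    # act on what was learned.

    def run_pdsa(plan, do, study, act, cycles):
        for _ in range(cycles):
            change, prediction = plan()            # Plan: theory and prediction
            outcome = do(change)                   # Do: trial the change
            learned = study(prediction, outcome)   # Study: compare with prediction
            act(change, learned)                   # Act: adopt, adapt or abandon

    # Toy demonstration: trialling extra practice time each cycle.
    practice_minutes = 10

    def plan():
        return "add one minute of practice", practice_minutes + 1

    def do(change):
        global practice_minutes
        practice_minutes += 1
        return practice_minutes

    def study(prediction, outcome):
        return outcome == prediction

    def act(change, matched):
        print(f"{change}: prediction {'met' if matched else 'missed'}")

    run_pdsa(plan, do, study, act, cycles=3)

In practice each step is a team conversation rather than a function call; the point is the pairing of every change with an explicit prediction.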

There are, of course, many variants of the improvement process, under many and varied names. In essence, the concepts are the same.

There is a strong tendency for people to want to race through the “plan” stage and get straight into the “do” stage. Schools in particular find it difficult to make time for the reflective step of “study”. Many individuals and teams just want to get into the action and be seen to be making changes, rather than reflecting on whether the change has been an improvement, or just a change.

A detailed and structured process

Where an improvement opportunity is significantly complex, a comprehensive application of the PDSA process is necessary.

Our work in industry, government and education over the past two decades has shown the nine-step PDSA process, illustrated in Figure 6, to be particularly effective. This process has been compared with dozens of alternative PDSA models and refined throughout that period.

Figure 6. A nine-step PDSA process

In developing such a process, there is a balance to be struck between the technical considerations of having a robust process that will deal with diverse contexts and issues, and the simplicity that makes the improvement process accessible and practical for busy people. Over the years, we have continually sought to simplify the model to make it more accessible. For nearly a decade, the nine steps have remained constant, but the specific actions and tools comprising each step have been progressively refined.

The process has been designed to ensure it meets the criteria necessary to achieve sustainable improvement, namely:

  • Be clear about mutually agreed purpose
  • Establish a shared vision of excellence
  • Focus upon improving systems, processes and methods (rather than blaming individuals or just doing things)
  • Identify the root causes of dissatisfaction, not the symptoms
  • Carefully consider the systemic factors driving and restraining improvement, including interaction effects within the system and with containing systems
  • Identify strengths to build upon as well as deficiencies to be addressed
  • Identify the clients of the improvement efforts and understand their needs and expectations
  • Achieve a balance in addressing the competing, and sometimes contradictory, needs and expectations of stakeholders in improvement efforts
  • Be clear about the theory for improvement, and use this to predict outcomes
  • Reflect on the outcomes of improvement efforts, in the context of the selected theory for improvement, in order to refine the theory for improvement
  • Use operational definitions to ensure clarity of understanding and measurement
  • Not copy others’ practices without adequate reflection about their proper implementation in a new context — adapt not adopt.

These requirements are reflected in the nine-step PDSA improvement process shown in Figure 6.

To provide clear guidance, we have developed a comprehensive PDSA chart (Figure 7). The PDSA improvement process is framed as a series of questions to be answered by the improvement team (or individual). These questions address the considerations necessary to achieve sustainable improvement as detailed above. The process also refers the user to specific quality learning tools that can be used to address the questions, promoting collaboration and evidence-based decision-making.

Figure 7. A detailed nine-step PDSA cycle

This is not a perfect process for improvement — there is no such thing. It is a process for improvement that can be adapted (not adopted), applied, studied, and improved. It can be used as a starting point for others, like you, who may wish to create a process of their own.

There are enormous benefits to applying a standard improvement process: an agreed improvement process that everybody follows. This can be standard across the school or whole district. Everyone can use the same approach, from students to superintendent. The benefits, apart from maximising the return on effort, time and resources, include having a common and widely used model, language, set of concepts, and agreed tools. It also establishes an agreed process that can itself be reviewed and improved, with the contribution of everybody in the organisation.


Watch a video of PDSA applied to year one writing.

Watch a video of PDSA applied within a multi-age primary classroom.

Read or watch a video about student teams applying PDSA to school improvement.

Download the detailed nine-step PDSA chart.

Purchase IMPROVING LEARNING: A how-to guide for school improvement to read more about the quality improvement philosophy and methods.

Purchase our Learning and Improvement Guide: PDSA Improvement Cycle.