PE, PDSA and Student Voice

We have previously discussed the power of the Plan-Do-Study-Act (PDSA) cycle in bringing about collaborative, sustainable improvement. We have also emphasised the importance of allowing students to play a key role – giving students a ‘real voice’ – in improving their school and classroom. In this blog, we share another example. This time, how the PDSA process was used by a teacher and students to improve learning and engagement in their physical education (PE) classroom. (You can also view this as a QLA case study video.)

Teacher, Chris, with her PE class PDSA storyboard

Chris is a leading specialist teacher at a Victorian primary school. She observed the school’s Year 6 students becoming increasingly disengaged during their weekly PE lessons. PE teachers were stressed and student behaviour was worsening. No one was enjoying PE!

Chris decided it was time to set students and teachers to work to improve PE by applying the PDSA cycle.

As we have seen previously:

PDSA is a highly effective improvement approach, based upon a cycle of theory, prediction, observation, and reflection.

It involves applying a structured process to achieve sustainable improvement.

A nine-step PDSA process

This includes:

  • defining the opportunity for improvement by agreeing the purpose and establishing a shared vision of excellence
  • focusing improvement efforts on a system or process (rather than blaming individuals)
  • identifying root causes not symptoms
  • developing and applying a theory for improvement
  • reflecting on the outcomes achieved to agree a new ‘best method’ or further improvement needed.

Here’s how…

Chris applied the PDSA process with her students. They documented a comprehensive storyboard to capture their agreements, the data collected, and to reflect their progress in applying the PDSA process.

Here’s what they did:

  1. Students and teachers discussed and agreed the opportunity for improvement – to improve their PE classes.
  2. They studied the current situation – what did PE currently look like, feel like, and what was going on? They agreed: students were disengaged, disinterested and not putting in their best efforts; some students were disrupting the class, preventing others from enjoying PE; and teachers were frustrated.

    PDSA storyboard extract: brainstorm of the current situation in PE
  3. They collected data to measure the extent of the dissatisfaction with PE. A correlation chart was used to measure student perception. The data revealed low levels of student enjoyment (fun) and learning in the PE classroom.

    PDSA storyboard extract: collecting data using a correlation chart – how much are students enjoying and learning in PE?

  4. Students then brainstormed and themed the drivers and barriers associated with motivation and participation in their PE classroom. They used sticky notes and an affinity diagram to facilitate this. The major barriers they identified were: ‘inappropriate behaviour’, ‘boring classes’, ‘lack of student choice’, ‘the weather’ and ‘wasting time’.

    PDSA storyboard extract: affinity diagram of the barriers to student motivation and participation in PE
  5. These barriers were analysed using an interrelationship digraph to agree the root causes. (They knew that by working on the root causes of their problem they would realise the greatest return on their improvement efforts.) For the PE students, this analysis revealed ‘lack of choice’ as the root cause: a lack of student choice in their PE lessons was seen as a major barrier to participation and motivation, impacting upon the other causes and driving the observed problems with behaviour and performance in their classroom. (A sketch of the counting rule behind this analysis appears after this list.)

    PDSA storyboard extract: interrelationship digraph analysing the root causes of a lack of student motivation and participation in PE
  6. A bone diagram was used with students to further explore the current situation, and to agree a vision of excellence for PE – what they wanted PE to be like. The resulting vision showed that students believed the key characteristics of a great PE lesson were: student choice; a clear purpose and process for each session; appropriate behaviour; more minor games; a mix of skills; effective use of time; students understanding what was expected; and knowing whether they were improving.

    PDSA storyboard extract: bone diagram agreeing a vision of excellence for PE
  7. They brainstormed possible solutions, which included: ‘kids teaching kids’, students ‘choosing activities’ and ‘writing their own report’, agreeing a student ‘code of behaviour’, clarifying expectations (quality criteria: ‘know what a good throw, jump looks like’), and students ‘making up games’.

    PDSA storyboard extract: brainstorm of possible solutions to improve PE
  8. These solutions helped them to develop a ‘theory for improvement’ comprising the following key strategies:
  • multi-voting to agree the focus of each lesson
  • agreeing the lesson format – flowcharting the teaching and learning process
  • appointing student skill coaches and documenting skill cards to help the coaches do their job
  • students undertaking peer evaluation together with their teacher/coach. They developed capacity matrices for key areas of learning to help them to do this. They also documented quality criteria describing how to execute essential skills with a high degree of excellence (e.g. how to do an overhand throw). Students used the capacity matrices and quality criteria as the basis for reflection and evaluating their progress in PE
  • agreeing a code of behaviour
  • everyone reflecting and giving feedback after each lesson.
PDSA storyboard extract: agreed strategies to improve PE
PE – capacity matrix for gymnastics
PE – quality criteria for an overhand throw
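To make the root cause analysis in step 5 concrete, here is a minimal sketch of the counting rule behind an interrelationship digraph, written in Python with made-up factors and arrows (illustrative only, not the class's actual data). Each arrow records that one factor drives another; the factor with the most outgoing arrows is treated as the likely root cause.

```python
# Minimal sketch of the interrelationship digraph counting rule.
# The factors and arrows below are illustrative, not the class's actual data.
influences = {
    "lack of student choice": ["boring classes", "inappropriate behaviour", "wasting time"],
    "boring classes": ["inappropriate behaviour", "wasting time"],
    "inappropriate behaviour": ["wasting time"],
    "the weather": ["boring classes"],
    "wasting time": [],
}

# Out-degree = how many other factors this one drives (a likely cause);
# in-degree = how many factors drive it (a likely effect or symptom).
out_degree = {factor: len(targets) for factor, targets in influences.items()}
in_degree = {factor: 0 for factor in influences}
for targets in influences.values():
    for target in targets:
        in_degree[target] += 1

root_cause = max(out_degree, key=out_degree.get)
print(f"Likely root cause (most outgoing arrows): {root_cause}")
for factor in influences:
    print(f"  {factor}: drives {out_degree[factor]}, driven by {in_degree[factor]}")
```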

The outcome?

The PE classes applied the agreed strategies and new processes and, a few weeks later, reflected on the effectiveness of the improvements they had made (the ‘study’ phase of the PDSA process).

  • Behaviour and engagement improved. Students were motivated and learning.
  • Students ‘owned’ and were running the PE lessons with minimal guidance from PE teachers! They were responsible for their learning.
  • PE lessons had a productive ‘buzz’! Students were excited. Teachers were happy.

The processes they had developed together were adopted as the new way for PE lessons.

Chris described the PDSA based collaborative process as having an amazing impact.

Applying the PDSA process – working ‘with the kids’, not ‘doing to the kids’ – brought about significant positive change to PE lessons, improving the way teachers were teaching and students were learning, to great effect!

Learn more…

Download the detailed 9-step PDSA poster.

Purchase IMPROVING LEARNING: A how-to guide for schools, to learn more about the quality improvement philosophy and methods.

Purchase our learning and improvement guide: PDSA Improvement Cycle.

Watch a video of PDSA applied to Year 1 writing.

Watch a video of PDSA applied within a multi-age primary classroom.

Watch a video about student teams applying PDSA to school improvement.

Understanding Variation 4 – Stop Tampering!

This is the final in a series of four blog posts to introduce the underpinning concepts related to variation in systems. In the first post we discussed common cause and special cause variation. The second explored the concept of system stability. The third explained system capability. This final post discusses tampering – making changes to systems without understanding variation. Tampering makes things worse! This is an edited extract from our book, Improving Learning.

Stop tampering

Let us begin with a definition of tampering.

Tampering: making changes to a system in the absence of an understanding of the nature and impact of variation affecting the system.

The most common forms of tampering are:

  1. overreacting to evidence of special cause variation
  2. overreacting to individual data points that are subject only to common cause variation (usually because these data are deemed to be unacceptable)
  3. chopping the tails of the distribution (working on the individuals at the extreme ends of the distribution without addressing the system itself)
  4. failing to address root causes.

Tampering with a system will not lead to improvement.

Let us look more closely at each of these forms of tampering and their impact.

Tampering by overreacting to special cause variation

Consider the true story of the young teacher who observed a student in the class struggling under the high expectations of her parents. The teacher thought that the student’s parents were placing too much pressure on the child to achieve high grades, which the teacher believed to be beyond the student. The young and inexperienced teacher wrote a letter to the parents suggesting they lower their expectations and lessen the pressure on their daughter. Receipt of this letter did not please the parents, who demanded to see the school Principal. Following this event, the Principal required all correspondence from teachers to parents to come via her office. Faculty heads within the school, not wanting to have teachers in their faculties make the same mistake, required that correspondence come through them before going to the Principal.

The end result was a more cumbersome communication process for everyone, which required more work from more people and introduced additional delays. The principal overreacted to a special cause. A more appropriate response would have been for the principal to work one-on-one with the young teacher to help them learn from the situation.

Making changes to a system in response to an isolated event is nearly always tampering.

A more mundane example of this type of tampering is when a single person reports that they are cold and the thermostat in the room is changed to increase the temperature. This action usually results in others becoming hot and another adjustment being made. If any individual in the room can make changes to the thermostat setting, the temperature will fluctuate wildly, variation will be increased and more people will become uncomfortable, either too hot or too cold.
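A quick simulation shows why this kind of adjustment backfires. The sketch below (Python, with illustrative numbers) compares leaving a thermostat alone with compensating after every reading; reacting to common cause noise roughly doubles the variance, so the spread of temperatures grows by about 40 per cent.

```python
import numpy as np

rng = np.random.default_rng(0)
target, n, noise_sd = 21.0, 10_000, 1.0  # illustrative values

# Hands off: set the thermostat once and leave it alone.
fixed = target + rng.normal(0, noise_sd, n)

# Tampering: after each reading, shift the setpoint by the opposite
# of the last deviation from the target (overreacting to noise).
setpoint, tampered = target, []
for _ in range(n):
    reading = setpoint + rng.normal(0, noise_sd)
    tampered.append(reading)
    setpoint -= reading - target

print(f"Spread, hands off:  {np.std(fixed):.2f}")    # about 1.0
print(f"Spread, tampering: {np.std(tampered):.2f}")  # about 1.4
```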

Most people can think of other examples where systems or processes have been changed inappropriately in response to isolated cases.

The appropriate response to evidence of special cause variation is to seek to understand the specific causes at play and have situations dealt with on a case-by-case basis, without necessarily changing the system.

Occasionally, investigation of a special cause may reveal a breakthrough. The breakthrough may be so significant that changes to the system are called for in order to capitalise on the possibilities. This is, however, rare and easily identified when it is the case.

Tampering by overreacting to individual data points

Another common form of tampering comes from overreacting to individual data points. Such tampering is very common and very costly.

Figure 1 presents a dot plot of mean Year 3 school results, measured across five key learning areas by NAPLAN in 2009. These results are from an Australian jurisdiction and include government and non-government schools. For the purpose of the argument that follows, these data are representative of results from any set of schools, at any level, anywhere.

Figure 1. Dot plot of school mean scores

The first thing to notice is that there is variation in the school mean scores. (Normal probability plots suggest the data appear to be normally distributed, as one would expect.) The system is stable and is not subject to special causes (outliers).
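The stability check behind this conclusion is simple to express in code. The sketch below (Python, with simulated scores standing in for the NAPLAN data, and a simplified mean plus or minus three standard deviations rule in place of a formal control chart) computes the natural limits of variation and flags any school outside them as a potential special cause.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated stand-in for the Year 3 school mean scores in Figure 1.
school_means = rng.normal(loc=420, scale=25, size=120)

centre = school_means.mean()
sd = school_means.std(ddof=1)
lower, upper = centre - 3 * sd, centre + 3 * sd

outliers = school_means[(school_means < lower) | (school_means > upper)]
print(f"Natural limits of variation: {lower:.0f} to {upper:.0f}")
print(f"Schools outside the limits (potential special causes): {len(outliers)}")
# No points outside the limits means the variation is common cause only;
# singling out individual schools within the limits is tampering.
```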

The policy response to variation such as this is frequently a form of tampering. Underperforming schools are identified at the lower ends of the distribution and are subjected to expectations of improvement, with punishments and rewards attached.

This response fails to take into account the fact that data points within the natural limits of variation are only subject to common cause variation.

To single out individual schools (classes, students, principals or teachers) fails to address the common causes and fails to improve the system in any way.

When this approach is extended to all low-performing elements, it becomes an even more systemic problem: attempting to chop the tail of the distribution.

Tampering by chopping the tails of the distribution

Working on the individuals performing most poorly in a system is sometimes known as trying to chop the tail of the distribution. This is also tampering.

There are three main reasons why this is bad policy, all of which have to do with not understanding the nature and impact of variation within the system.

Firstly, it is not uncommon to base interventions on mean scores. Yet it is well known within the education community that there is much greater variation within schools than there is between schools. Similarly, there is much greater variation within classes than between classes within the same school. Averages mask variation.

Consider two schools. School A (Figure 2) is performing at the lower end of the distribution for reading scores — with a mean reading score of approximately 390. School B (Figure 3) has a mean reading score approximately 30 points higher.

Figure 2. Histogram of Year 3 student reading scores (School A)
Figure 3. Histogram of Year 3 student reading scores (School B)

The proportion of students in each school that is performing below any defined acceptable level is fairly similar. School A, for example, has 12 students with results below 350. School B has seven. In some systems, resources are allocated based on mean scores. Those with mean scores beyond a defined threshold are entitled to resources not available to those with mean scores within certain limits. If School A and School B were in such a system and the resourcing threshold was set at 400, for example, School B could be denied resources made available to School A, simply because its mean score is above some defined cut-off point.

Where schools or classes are identified to be in need of intervention based on mean scores, the nature and impact of the variation behind these mean scores is masked and ignored. If the 12 students in School A receive support, why is it denied to those equally deserving seven students in School B?
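The resourcing anomaly is easy to reproduce. The sketch below (Python, with synthetic scores shaped to mirror the two schools described above; the 400 and 350 cut-offs are the illustrative figures used in the text) contrasts a mean-based threshold with a simple count of the students actually below the acceptable level.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic reading scores shaped to mirror Figures 2 and 3:
# School A has the lower mean, but both schools have students below 350.
school_a = rng.normal(loc=390, scale=50, size=60)
school_b = rng.normal(loc=420, scale=60, size=60)

MEAN_THRESHOLD = 400  # mean-based resourcing cut-off
ACCEPTABLE = 350      # defined acceptable level for individual students

for name, scores in [("School A", school_a), ("School B", school_b)]:
    gets_support = scores.mean() < MEAN_THRESHOLD
    students_in_need = int((scores < ACCEPTABLE).sum())
    print(f"{name}: mean {scores.mean():.0f}, "
          f"students below {ACCEPTABLE}: {students_in_need}, "
          f"receives support: {gets_support}")
# Typically School A qualifies for support and School B does not,
# even though both schools have students in genuine need.
```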

Secondly, the distribution of these results fails to show evidence of special cause variation. The variation that is observed among the performance of these schools is caused by a myriad of common causes that affect all schools in the system.

Singling out underperforming schools for special treatment each year does nothing to address the causes common to every school in the system, and fails to improve the system as a whole.

Even if the intervention is successful for the selected schools, the common causal system will ensure that, in time, the distribution is restored, with schools once again occupying similar places at the lower end of the curve. The system will not be improved by this approach.

Thirdly, this approach consumes scarce resources that could be used to examine the cause-and-effect relationships operating within the system as a whole and to take action to improve its performance.

In education, working on the individuals performing most poorly in a system is a disturbingly common approach to improvement. It never works. A near identical strategy is used within classes to identify students who require remediation. The “bottom” — underachieving — kids are given a special program; they are singled out. Sometimes the “top” — gifted and talented — kids are also singled out for an extension program.

This is not to say that we should not intervene when a school is struggling or when a student is falling behind. Nor are we suggesting that students and schools who are progressing well should not be challenged to achieve even more. It is appropriate to provide this support and extension to those who need it. The problem is that doing so does not improve the system. Such actions, when they become as entrenched as they currently are, are merely part of the current system.

It should be noted that focussing upon poor performers also shifts the blame away from those responsible for the system as a whole and onto the poor performers.

The mantra becomes one of “if only we could fix these schools/students/families”. The responsibility lies not with the poor performers, but with those responsible for managing the system: senior leaders and administrators. It is a convenient, but costly diversion to shift the blame in this way.

If targeting the tails of the distribution is the primary strategy for improvement, it is tampering and it will fail. Unless action is taken to improve the system as a whole, the data will be the same again next year, only the names will have changed. Over time, targeting the tails of the distribution also increases the variation in the system.

This sort of tampering is not restricted to schools and school systems. It is very common, and equally ineffective, in corporate and government organisations. It is quite common that the top performers are rewarded with large bonuses, while poor performers are identified and fired or transferred. Sales teams compete against each other for reward and to avoid humiliation. Such approaches do not improve the system; they tamper with it.

Tampering by failing to address root causes

Tampering commonly occurs when systems or processes are changed in response to a common cause of variation that is not a root cause or dominant driver of performance.

People tamper with a system when they make changes to it in response to symptoms rather than causes.

It is easy to miss root causes by failing to ask those who really know. Who can identify the biggest barriers to learning in a classroom? The students.

Teachers can be quick to identify student work ethic as a problem in high schools. It is a rare teacher who identifies boring and meaningless lessons as a possible cause. Work ethic is a symptom, not a cause. It is not productive to tackle the issue of work ethic directly. One must find the causes of good and poor work ethics and address these in order to bring about a change in behaviour.

There has been a concerted effort in recent years in Australia to decrease class sizes, particularly in primary schools. Teachers are pleased because it appears to reduce their work load and provides more time to attend to each student. Students like it because they may receive more teacher attention. Parents are pleased because they expect their child to receive more individualised attention. Unions are happy because it means more teachers and therefore more members. Politicians back it because parents, teachers and unions love it. Unfortunately, the evidence indicates that these changes in class size have very little impact on student learning. (See John Hattie, 2009, Visible Learning, Routledge, London and New York.) This policy is an example of tampering on a grand and expensive scale. Class size is not a root cause of performance in student learning.

Managers need to study the cause and effect relationships within their system and be confident that they are addressing the true root causes. Symptoms are not causes.

Every time changes are made to a system in an absence of understanding the cause and effect relationships affecting that system, it is tampering.

Tampering will not improve the system; it has the opposite effect.

Read about four types of measures, and why you need them.

Read about common cause and special cause variation.

Read about system stability.

Read about system capability.

Read more in our comprehensive resource: IMPROVING LEARNING – A how-to guide for school improvement.

Purchase our learning and improvement guide Using data to improve.


Understanding Variation 3 – System Capability

This is the third in a series of four blog posts to introduce the underpinning concepts related to variation in systems. In the first post we discussed common cause and special cause variation. The second explored the concept of system stability. In this post, we explore whether a system is capable of consistently meeting expectations. This is an edited extract from our book, Improving Learning.

System Capability

Just because a system is stable does not mean that it is producing satisfactory results. For example, if a school demonstrates average growth in spelling of about 70 points from Years 3 to 5, is this acceptable? Should parents be satisfied with school scores that range from 350 to 500? These are questions of system capability.

Capability relates to the degree to which a system consistently delivers results that are acceptable — within specification, and thus within acceptable limits of variation.

Capability: the degree to which a system consistently delivers results that are within acceptable limits of variation.

Note that stability relates to the natural variation exhibited by a system, while capability relates to the acceptable limits of variation for that system. Stability is defined by system performance. Capability is defined by stakeholder needs and expectations.

It is not uncommon to find systems that are both stable and incapable; systems that consistently and predictably produce results that are beyond acceptable limits of variation and are therefore unsatisfactory. No doubt, you can think of many examples.

Cries for school systems to “raise the bar” or “close the gap” are evidence that stakeholders believe school systems to be incapable (in this statistical sense) because the results they are producing are not within acceptable limits of variation. The results are totally predictable – the system is stable – but they are unsatisfactory; the system is incapable.

In Australia, NAPLAN defines standards for student performance. National minimum standards are defined to reflect a “basic level of knowledge and understanding needed to function in that year level”.

Proficiency standards, which are set higher than the national minimum standards, “refer to what is expected of a student at the year level”. Depending on the year level and learning area, between two per cent and 14 per cent of students fail to reach national minimum standards. By definition, then, the Australian education system is incapable. It fails to consistently produce performance that is within acceptable limits of variation, because a known proportion of students fails to meet minimum standards, let alone perform at or better than the expected proficiency.

Figure 1 shows the spelling results for 161 Year 9 students at an Australian high school, as measured by NAPLAN. These results fall between the lower and upper control limits, which have been found to be at 297 and 794 respectively. Careful analysis failed to reveal evidence of special cause variation. This system appears to be stable. The national minimum standard for Year 9 students is 478. In this set of data, there are 33 students performing below this standard. Thus we can conclude that the system which produced these spelling results is stable but incapable.

Figure 1. Histogram of Year 9 student NAPLAN scores in spelling, indicating a system that is stable but incapable.
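In code, the stable-but-incapable verdict comes from comparing two different pairs of limits: the natural limits the system actually produces, and the standard the stakeholders require. Here is a minimal sketch (Python, with simulated scores standing in for the 161 results above, and a simplified mean plus or minus three standard deviations rule for the natural limits):

```python
import numpy as np

rng = np.random.default_rng(3)
# Simulated stand-in for the 161 Year 9 spelling scores in Figure 1.
scores = rng.normal(loc=545, scale=83, size=161)

# Stability: natural limits of variation, derived from the data themselves.
centre, sd = scores.mean(), scores.std(ddof=1)
lower_natural, upper_natural = centre - 3 * sd, centre + 3 * sd
stable = bool(((scores >= lower_natural) & (scores <= upper_natural)).all())

# Capability: the limit is set by stakeholder expectations, not by the data.
NATIONAL_MINIMUM = 478
below_standard = int((scores < NATIONAL_MINIMUM).sum())
capable = below_standard == 0

print(f"Natural limits: {lower_natural:.0f} to {upper_natural:.0f} -> stable: {stable}")
print(f"Students below the minimum standard: {below_standard} -> capable: {capable}")
# A stable system can still be incapable: predictable, but not good enough.
```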


Taking effective action: responding appropriately to system variation

With an understanding of the concepts of common cause and special cause variation, responding to system data becomes more effective. The flowchart in Figure 2 summarises an appropriate response to system data.

Figure 2. Flowchart: responding appropriately to system variation
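One plausible reading of that flowchart, expressed as a small Python function (the decision labels are ours, not a verbatim transcription of Figure 2):

```python
def respond_to_variation(special_cause_evident: bool, capable: bool) -> str:
    """Suggest a response to system data (one reading of Figure 2)."""
    if special_cause_evident:
        # Deal with the specific event case by case; don't redesign the system.
        return "Investigate the special cause and address that situation directly."
    if not capable:
        # Common cause variation only, but unacceptable results:
        # the system itself must be improved (for example, via a PDSA cycle).
        return "Improve the system: address the common causes of variation."
    # Stable and capable: leave it alone and keep monitoring.
    return "Maintain and monitor the system; changing it now would be tampering."

print(respond_to_variation(special_cause_evident=False, capable=False))
```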

In the next post, we describe what can happen when we don’t respond appropriately to variation – tampering! Making things worse.

Read about four types of measures, and why you need them.

Read about common cause and special cause variation.

Read about system stability.

Read more in our comprehensive resource: IMPROVING LEARNING – A how-to guide for school improvement.

Purchase our learning and improvement guide Using data to improve.