Whole school LNF Results

Up until two years ago, I used to give a “using data in the classroom” Twilight to our NQTs and new staff. Stuff like baseline testing, CATs, FFT and record keeping. It went down quite well - but maybe I laboured the importance of real statistics too much and people ended up number blind.
Anyway, I don’t do that one any more - not because of the aforementioned over-enthusiasm - but rather because we stopped getting CATs data and FFT fell out of favour. Instead we moved to a 100% teacher-generated target model, based on in-house KS2 and KS3 testing. That was a great shame, as it removed the necessity for a degree of data sensitivity within the teaching body. In particular graphs like:
Graphs like these enabled class teachers, heads of year and SLT to explore the CATs scores of a cohort quickly and visually.
The beauty of such a graph is that, normalised to 100, it shows those students who are a long way from this “ideal” score. This allowed teachers to group learners together and to provide interventions for individuals and groups of learners. It also allowed insight into individuals - for example, the circled learners had been placed in a lower set, as they were struggling with the English/Literacy elements, as substantiated by their Verbal (V) CATs score. However, they were considerably “chatty” and disruptive - again acknowledged by their higher-than-average Non-Verbal (NV) CATs score. Knowing this, individual interventions and in-class differentiation can be targeted at their needs.
So, not doing this any more - and with more and more “data stuff” being devolved to the school Data Manager - teachers are cut off from the empowering act of data exploration.
Now, the LNF (in Wales) has led to whole-school data on Literacy and Numeracy “scores” for learners from Year 2 to Year 9 - rich pickings, and surely timely for some data analysis.
Analysing LNF Results
The Welsh Government provides a tool of sorts to analyse LNF results for a cohort: the Diagnostic Support Tool (which breaks the cohort down by question) - useful for planning wider school interventions. I’m interested in raw data for the entire cohort - the data showing LNF Literacy and Numeracy scores for a year group (or groups).
The LNF scores are normalised to 100 for both Literacy and Numeracy, with a standard deviation of 15. So we can expect 68% of the scores to lie between 85 and 115. Apparently scores below 70 and above 140 cannot be normalised. For all practical purposes, the range is 70 to 140, with a mean score of 100.
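Assuming the scores really do follow a normal distribution with mean 100 and standard deviation 15, the expected share of a cohort in any score band can be checked with the normal CDF. A quick sketch (the function name is mine, standard library only):

```python
from math import erf, sqrt

def fraction_within(lo, hi, mean=100.0, sd=15.0):
    """Fraction of a Normal(mean, sd) distribution falling between lo and hi."""
    def cdf(x):
        # CDF of the normal distribution via the error function
        return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))
    return cdf(hi) - cdf(lo)

print(round(fraction_within(85, 115), 3))   # one SD either side of 100: ~0.683
print(round(fraction_within(70, 130), 3))   # two SDs: ~0.954
```

So the "68% between 85 and 115" rule of thumb drops straight out of the normalisation.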
Interestingly, this would seem to imply some element of skew: the midpoint of the 70-140 range is 105, yet the mean is 100. So maybe we can expect a relatively small number of high values.
After a Twitter shout-out, I was lucky enough to receive a set of LNF data for a Year 7 cohort (depersonalised, and no, I’m not revealing which school):

The red lines show the 85-115 band, inside of which we expect 68% of the cohort.
  • Quadrant A shows a significant number of learners with high L and N scores - do they need additional provision?
  • Quadrant C shows learners who will need additional support in both L and N
  • Quadrants D and E show learners with high scores in L or N and low scores in the other - they are likely to be frustrated learners
  • Quadrants F and I show learners who are normal-to-low in both L and N scores - are these your disruptive learners?
  • Quadrants G and H show those learners who are high-to-normal in L and N - are these your quiet and invisible learners?
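The quadrant reading above can be reproduced without a plot: band each score against the 85 and 115 lines, then group learners by their (L band, N band) pair. A minimal sketch - the scores here are made up for illustration, not the real cohort, and the banding thresholds are the 85-115 lines from the graph:

```python
# Hypothetical (Literacy, Numeracy) LNF score pairs -- illustrative only.
learners = {
    "P01": (122, 118),  # high in both: quadrant A territory
    "P02": (78, 80),    # low in both: needs support in L and N
    "P03": (120, 79),   # high L, low N: potentially a frustrated learner
    "P04": (92, 88),    # normal-to-low in both
}

def band(score, lo=85, hi=115):
    """Place a score relative to the 85-115 band (one SD either side of 100)."""
    return "low" if score < lo else ("high" if score > hi else "mid")

for name, (l_score, n_score) in learners.items():
    print(name, "L:", band(l_score), "N:", band(n_score))
```

The nine (L band, N band) combinations correspond to the nine quadrants, so a Data Manager could generate these groupings for a whole cohort in a few lines.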
Visualising data like this does not give you the answers, in fact it leads to more questions:
  • On the basis of what data do you set?
  • Do you set or mixed ability teach?
  • Can you identify learners with “issues”? Where are they on a graph like this - does that explain why they are disruptive?
Call to Action:
  • Have you seen the LNF data for your class, cohort or school?
  • Have you plotted it like this?
  • Does your Data Manager have this information for you?
