Posts Tagged ‘A level Standards’

GCSEs, A levels, and degrees: Another Perspective

August 28, 2014

My friend Bernard Naylor commented on the last article and his comments seemed insightful and considered. So I thought they deserved re-broadcasting away from the ‘invisible’ comments section.

Bernard wrote:

The best thing to do with GCSEs is to abolish them as the school-leaving age is being raised to eighteen. The education of our children, and the children themselves, suffer from too much external examination and assessment. (Schools should of course be conducting assessments internally as an ongoing duty – as no doubt they mostly are.) Other countries, with more successful education systems, manage without this incessant micro-management from a government department.

I think this is a fair point, and amplifies this recent Guardian article. Given the current mess, it is hard to argue against this, but I am still unconvinced.

If the abolition of GCSEs were coupled with a proper recognition that both academic and vocational training were important then I can imagine benefits at both ends of the academic ability range.

But this is a difficult balance to achieve and has been screwed up before. And until a clear alternative (???) is proposed that addresses current failures, I would prefer to stick with the status quo.

Why? Because one of the major problems with educational policy has been that it has not remained the same for more than a couple of years at a time – and this endless change is in itself demoralising and counter-productive.

On no account should the government be setting syllabuses. It is a recipe for political interference with education. It may just about be OK in Physics, but in English Literature and History (for example), the recently departed Secretary of State has been having a pernicious effect, with (among other faults) too much harking back to what was OK when he was at school, donkey’s years ago.

There should be independent statutorily established bodies with responsibility to determine overall course content and educational objectives – and professional teachers should be trusted much more at the detailed level. The function of the inspectorate should be one of monitoring and mentoring. It’s just a short time since Ofsted decided that ‘Satisfactory’ actually meant – well – unsatisfactory! Every school has room for improvement. Giving any school a judgment that doesn’t imply that is simply wrong. We can all get better!

And I entirely agree with you about the lunacy of having ‘competition’ in the examination system.

I think that is what I meant to say. My key point is that there should be only one body setting exams.

Publishers should compete to publish good books and other learning resources.

Just a final point about universities’ calibration of examination marking, which of course I watched quite closely, from a detached standpoint, for about a quarter of a century. If anything above 70% is a first, and anything below 30% is a fail, that means that 60% of the calibration scale (i.e. above 70 and under 30) has no meaning. Calibration being one of your strong points, can you possibly justify that? I never heard a reasonable defence of it from any of my teacher colleagues!

Let me try to defend it. At the top of the scale there were occasionally students – fewer than one a year at University College London – whose average mark was above 90%. Such students were truly exceptional.

Although these ‘90%-students’ shared a degree classification with ‘70%-students’ – the range of the scale allowed their exceptional abilities to be noted – and I promise you, their abilities were noticed!

At the bottom of the scale, one simply needs to pick a level that represents ‘this student hasn’t got a clue’. Picking 30% – or 35% at UCL, if I remember correctly – is arbitrary. Arguably it should be higher.
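To make the arithmetic concrete, here is a minimal sketch of the classification scale under discussion. Only the first-class boundary (70%) and the fail boundary (30%) come from the discussion above; the intermediate bands are typical UK conventions, assumed here purely for illustration:

```python
def classify(mark):
    """Map a percentage mark to a UK degree classification.

    First >= 70 and fail < 30 follow the figures quoted above;
    the intermediate bands are conventional assumptions.
    """
    if mark >= 70:
        return "First"
    if mark >= 60:
        return "Upper second"
    if mark >= 50:
        return "Lower second"
    if mark >= 40:
        return "Third"
    if mark >= 30:
        return "Pass"
    return "Fail"

# The bands above 70 and below 30 together span 30 + 30 = 60
# of the 100 available marks: the '60% of the scale' at issue.
unused_span = (100 - 70) + 30
```

On this scheme a 90%-student and a 70%-student do indeed receive the same label, which is the heart of the objection; the defence above is that the marks themselves, not the labels, are what examiners actually notice.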

And talking of ‘academia’, one thing that impressed me during my academic career was the care taken in Exam Board meetings. I don’t think students had a clue that every single student’s marks and performance were considered – often at great length.

And the key question asked was whether the mark made ‘sense’. In other words, the Exam Board used the exam results as an aid to judgement – and not a substitute for it. It was very rare for the board to change marks – that would in general be unfair. But where fairness demanded it, we did it.

GCSE and A level results: Three steps to make things better

August 22, 2013
Graph from the BBC showing the increasing GCSE passes. No one thinks this rise is due to increasing educational standards, and no one thinks this year’s fall is due to a fall in standards.


As my own children approach the year in which they will sit GCSE and A level exams, the annual brouhaha over exam results feels a bit more personal. And my anger over the betrayal of students and the government’s abnegation of responsibility in this field grows more intense.

“It wasn’t like this when I were a lad…”. No, really: it wasn’t. Back in the 1960s and 70s, the results were always distributed the same way: for example, the top 7.5% (I think that was the number) received an A, the next n% received a B, and so on. This approach served to discriminate amongst the candidates, but it didn’t register whether students knew more or less than in previous years.
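That norm-referenced scheme is simple enough to sketch in code. Here is a minimal illustration: the 7.5% A band follows the figure above, but the 15% B band is an assumption for illustration, since the original fraction is not stated:

```python
def norm_referenced_grades(marks, bands=(("A", 0.075), ("B", 0.15))):
    """Assign grades purely by rank: the top fraction of candidates
    gets an A, the next fraction a B, and so on.

    The band fractions are illustrative assumptions; remaining
    candidates are left ungraded ('U') here for simplicity.
    """
    order = sorted(range(len(marks)), key=lambda i: marks[i], reverse=True)
    grades = ["U"] * len(marks)
    cursor = 0
    for grade, fraction in bands:
        count = round(fraction * len(marks))
        for i in order[cursor:cursor + count]:
            grades[i] = grade
        cursor += count
    return grades
```

With 40 candidates, exactly 3 (7.5%) receive an A however well or badly the cohort performs as a whole – which is exactly why this scheme could not register whether students knew more or less than in previous years.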

Then exams were changed in many ways simultaneously. Syllabuses were reduced, continuous assessment introduced, exam boards became wholly-owned by book publishers, and ‘absolute’ marking became the norm. The result was decades of grade inflation and political bickering.

Many educational changes ‘since I were a lad’ have been really positive, and I suspect the general standard of teaching is higher. But I don’t know anyone who thinks that the fact that exam results began to ‘improve’ after 1986 reflects any actual ‘improvement’ in education. In the same way, nobody believes that this year’s small fall in A* to C grades reflects any actual ‘decline’ in educational standards.

It seems that the exam results are telling us nothing about educational standards, and this is obviously unsatisfactory. And all this has happened during a period in which schools and exam boards have been subject to more inspections than at any time in history. I won’t go into the causes of this shameful and ‘almost corrupt’ episode, but the answers are simple:

  • Firstly, publishers should not be allowed to own or influence exam boards. ‘Competition’ to produce the easiest exams only drives down standards. Exam boards should set standards and exams, and publishers should produce books that teach the subject in general, not how to pass particular exams in the subject. Ideally there would be only one exam board for each subject.
  • Secondly, grade inflation should be eliminated by making A*, A and B grades correspond to fixed fractions of the candidates. Grade A* would mean the top 5% (say), A the next 10% (say) and B the next 20%. However, the C grade should be awarded on achievement against a fixed standard rather than against other candidates. This allows improvements in education to be reflected in improved results but keeps the significance of higher marks.
  • Thirdly, governments then need to stop changing the exam system every few years. A politically-balanced commission should consider changes every 20 years with no ability to change the rules in intervening years. It needs that length of time to measure the effect of any changes which have been made.
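The first two proposals above can be combined into a single hybrid scheme, sketched below. The band fractions follow the figures suggested above; the C threshold of 50% is purely an illustrative assumption:

```python
def hybrid_grades(marks, c_threshold=50.0):
    """Top grades by rank (A* top 5%, A next 10%, B next 20%);
    grade C awarded against an absolute standard.

    The c_threshold of 50% is an illustrative assumption, not a
    figure proposed in the text.
    """
    n = len(marks)
    order = sorted(range(n), key=lambda i: marks[i], reverse=True)
    grades = ["U"] * n
    cursor = 0
    for grade, fraction in (("A*", 0.05), ("A", 0.10), ("B", 0.20)):
        count = round(fraction * n)
        for i in order[cursor:cursor + count]:
            grades[i] = grade
        cursor += count
    # Remaining candidates earn a C against a fixed standard, so
    # genuine improvements in teaching can raise the number of Cs
    # over time, while the top grades stay immune to inflation.
    for i in order[cursor:]:
        if marks[i] >= c_threshold:
            grades[i] = "C"
    return grades
```

The design point is the asymmetry: the A*, A and B counts are fixed fractions of the cohort (so they cannot inflate), while the C count is free to grow if standards genuinely rise.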

I could go on, but I won’t. Education is a precious and important activity, and the more kerfuffle there is around this topic the more difficult it is to make the learning magic shimmer. So I will just wish all teachers and students well for the last weeks of the summer holiday and the start of the new term.

A level standards: A national disgrace

April 4, 2012
Scores on the Prior Knowledge Test (PKT) given to all students at the start of their B.Sc degree course at Bristol University Physics Department. The test has remained essentially unchanged, but scores have systematically declined despite the amazing increase in apparent performance at A level.

In 1985 around 9% of students obtained ‘A’ grades in their A-level exams. Since then, the percentage has increased every year and in 2010 the result was 27%. Is that because students are working harder and teachers are teaching smarter? Well, I can’t answer that question for every subject, but for physics the answer is an unequivocal ‘No’.

I can say this with certainty because the experience has been mournfully recounted to me by many teachers. And this month the anecdotal evidence was confirmed in an interesting paper by Professor Peter Barham published in Physics Education (download pdf here). Since 1975 the entire student intake to the physics course at Bristol University has taken essentially the same Prior Knowledge Test (PKT). Professor Barham published an analysis of these results and I have summarised one aspect of his analysis in the graph at the head of the page. It shows that scores testing prior knowledge of physics have declined by around 25% and are still falling.

Summarising: our national exam structure is reporting continuous improvement, but in reality students can achieve less and less. As I have commented previously, results such as this represent a national disgrace. This shameful situation is the product of:

  • exams made progressively easier year upon year;
  • the modularisation of courses, so that no student is ever tested on the whole syllabus;
  • competing exam boards owned by the publishers of the associated textbooks.

Together these elements have combined to drive down educational standards. If this were a sporting challenge, these steps would involve:

  • Lowering the height of a ‘high jump test’ year upon year.
  • Claiming that students could jump one metre when in fact they could only jump 25 centimetres – but could do it four times.
  • Having inspectors guarantee the average height of the bar, even though it is higher in some places and lower in others – while publishing guides explaining where the low points are.

In this case we can see that reports of continuous improvements in ‘high jump’ would be laughable. However, the educational system really matters. It is the way in which we pass on our combined cultural appreciation from generation to generation. And the confidence trick in which our educational establishment has conspired should make us angry.

However things appear to be changing. The modularisation of courses is slowly being undone. For his GCSE exams my second son will have to take all his science exams at the end of the course, a more difficult task than that which faced my first son. Michael Gove has now announced that in future A level ‘content’ will need to ‘involve’ academics at Universities. At this point it is impossible to foresee what this ‘involvement’ actually means, but it is probably a step in the right direction. However only when exam boards become fully independent of publishing companies do we stand a chance of reversing the continuous devaluation of educational qualifications.

What are ‘A’ levels for?

August 21, 2010
Percentage of students awarded A grade in A levels since 1965

The UK’s A level results came out this week and amongst all the opinions and emotions expressed I found the above graph profoundly significant. Essentially the whole ‘Exam Debate’ comes down to one simple question: does this graph indicate a success, or a failure? Those who think it indicates a success attribute the rise to better teaching, and those who think it indicates a failure attribute the rise to falling standards. I attribute the rise to what I call ‘wrong-headedness’ – a complete inability to understand what A levels – and exams more widely – are for.

Before 1985, getting an A grade at A level meant that a student had received a mark in the top 9% of results. In most subjects this mark was determined by performance in one or two exams taken at the end of a two-year course of study. The purpose of the grading was to discriminate amongst the students; it offered teachers collectively no chance to improve. After 1985, everything changed and has kept changing ever since. The major shift was to a system in which an A grade indicated a particular level of achievement. This offered the possibility that if teaching improved then more children would receive A grades. However, changes in the style of exam, the modular exam system (which means that no one is ever tested on the whole syllabus), and the frankly appalling arrangement in which the exam boards became wholly owned by publishers mean that interpreting the graph above as representing genuine improvement is not very convincing.

One role of our school system is to pass on the accumulated knowledge and understanding of our culture: this sounds rather pompous but it is true. It is hard to think of a more important task for any culture to undertake. The role of exams within this system is (very broadly) to check that this is being done. More specifically it needs to BOTH check that students know certain things by demonstrating a basic understanding AND to discriminate amongst the students and identify those with special talents or affinities. The above graph – and its seemingly unstoppable linear trend – indicates a collective failure to recognise the second purpose of exams.
