About me

For my professional website, with information about my research, publications and teaching, see www.sites.google.com/site/rmlevans.

Wednesday, 24 December 2014

The value of idealized models

(First published on physicsfocus)

Every physicist has to know the joke about the dairy farmer. Seriously, if you don't know it, you can't call yourself a physicist. It really should be added to the IoP's requirements for the accreditation of physics degrees. If you have such a degree, and none of your lecturers ever told you the joke, please write a letter of complaint to your alma mater immediately. In case you find yourself in that unhappy situation, here it is:

A dairy farmer, struggling to make a profit, asked the academic staff of his local university to help improve the milk yield of his herd. Realising it was an interdisciplinary problem, he approached a theoretical physicist, an engineer and a biologist. After due consideration, the engineer contacted him. "Good news!" she said. "My new design of milking machine will reduce wastage, increasing your yield by 5%." The farmer thanked her, but explained that nothing short of a 100% increase could save the farm from financial ruin. The biologist came up with a better plan: genetically-modified cows would produce 50% more milk. But it was still not enough.

At last the theoretical physicist called, sounding very excited. "I can help!" he said. "I've worked out how to increase your milk yield by six hundred percent."

"Fantastic!" said the farmer. "What do I have to do?"

"It's quite straightforward," explained the physicist. "You just have to consider a spherical cow in a vacuum..."

I shouldn't break the cardinal rule never to explain a joke, but... the gag works because you recognise the theoretical physicist's habit of simplifying and idealizing real-world problems. At least, I hope you recognise it, although I wonder if it's a dying art. With the availability of vast computer-processing power and fantastically detailed experimental data in many fields, there is an increasing trend to construct hugely complex and comprehensive theoretical models, and number-crunch them into submission. Peta-scale computers can accurately simulate the trajectories of vast numbers of atoms interacting in complex biological fluids, and can even model the non-equilibrium thermodynamics of the atmosphere realistically enough to fluke an accurate weather forecast occasionally.

Superficially, it might seem like a good thing if our theoretical models can match real-world data. But is it? If I succeed in making a computer spit out accurate numbers from a model that is too complex for my meagre mortal mind to disentangle, can I claim to have learnt anything about the world?

In terms of improving our understanding and ability to develop new ideas and innovations, making a computer produce the same data as an experiment has little value. Imagine I construct a computer model of an amoeba that includes the dynamics of every molecule and every electron in it. I can be confident that the output of this model will perfectly match the behaviour of the amoeba. So there is no point in wasting computer-time simulating that model; I already know what the results will be, and it will teach me precisely nothing about the amoeba.

If I want to learn how an amoeba (or anything) works, by theoretical modelling, I need to leave things out of the model. Only then will I discover whether those features were important and, if so, for what.

When I was a physics undergraduate, I remember once explaining Galileo's famous experiment to a classicist friend; the (possibly apocryphal) one where he dropped large and small stones from the Leaning Tower of Pisa, to demonstrate that gravity applies the same acceleration to all bodies. "But a stone falls faster than a feather," she protested. I said that was just because of the air resistance, so the demonstration would work perfectly if you could take the air away. "But you can't," she pointed out. "The theory's pointless if it doesn't apply to the real world. So Galileo was wrong." I have a strong suspicion that she was just trying to wind me up - and succeeding. The point, which she probably appreciated really, is that the idealized scenario teaches us about gravity, and we can't hope to understand the effects of gravity-plus-air before we understand gravity alone.
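
The point about idealization can even be played out numerically. Here is a toy sketch (every parameter is invented for illustration, and the drag is assumed to be linear in velocity, which is itself an idealization): in the idealized model with the drag term removed, a heavy and a light body take identical times to fall, while reinstating drag makes the light body lag behind.

```python
# Toy illustration of Galileo's idealization: integrate the fall of a heavy
# "stone" and a light "feather" with and without a linear drag term.
# All masses, coefficients and heights are made-up illustrative numbers.

def fall_time(mass, drag_coeff, height=50.0, g=9.81, dt=1e-4):
    """Time (s) to fall `height` metres under gravity with linear drag."""
    v, y, t = 0.0, height, 0.0
    while y > 0.0:
        a = g - (drag_coeff / mass) * v  # drag opposes the motion
        v += a * dt
        y -= v * dt
        t += dt
    return t

# Idealized case: no air (drag_coeff = 0), so mass drops out entirely.
t_stone_vacuum = fall_time(mass=1.0, drag_coeff=0.0)
t_feather_vacuum = fall_time(mass=0.005, drag_coeff=0.0)

# "Realistic" case: the same drag coefficient slows the light body far more.
t_stone_air = fall_time(mass=1.0, drag_coeff=0.02)
t_feather_air = fall_time(mass=0.005, drag_coeff=0.02)
```

In the vacuum case the two times agree exactly, because the acceleration `g - (c/m)v` loses its mass-dependence when `c = 0`; with drag switched on, the feather's fall time grows enormously. Stripping the model down is precisely what exposes which ingredient (drag, not gravity) makes a stone beat a feather.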

Similarly, if Newton had acknowledged that no object has ever found itself perfectly free of any unbalanced force, he would never have formulated his first law of motion. If Schrödinger had fretted that an electron and proton cannot be fully isolated from all external influences, he would have failed to solve the structure of the hydrogen atom and establish the fundamentals of quantum mechanics. The simplicity of the laws of nature can only be investigated by idealized models (like the one-dimensional "fluid" below), before adding the bells and whistles of more realistic scenarios.

With increasing research emphasis on throwing massive experimental and computational power at chemically complex biophysical and nanotechnological systems, and in the face of financial pressure to follow applications-led research, it would be easy to forget the importance of developing idealized models, elegant enough to deduce general principles that transcend any one specific application. So let's adiabatically raise a semi-infinite glass of (let's assume) milk, and drink to the health of the spherical cow.

Tuesday, 23 December 2014

What are exams for? On measuring ability and disability

(First published on physicsfocus)

Equal rights
I'm going to go out on a limb here and assert that equality of opportunity is a good thing. There, I've said it. Gone are the bad old days when jobs and privileges were determined at birth. No longer do you have to be an aristocrat or wealthy land-owner to study science; Michael Faraday broke that mould. Neither is being born with a Y-chromosome still a prerequisite for academic success. While that playing field may not be as level as it should be, at least officially-sanctioned sexism has been abolished since Rosalind Franklin's day. Encouragingly, I currently teach a cohort of undergraduate mathematicians, at Leeds University, with a near-equal female:male gender ratio of 52:48.

Belatedly, we have seen improvements in equality of opportunity for people with disabilities. An inspiring leap forward was made by the London 2012 Paralympics in dispelling some of our social prejudices. Meanwhile, with the introduction of the Equality Act 2010, educational establishments have set up new procedures to ensure that disability does not result in inequality.

For a simple and obvious example, consider a physics undergraduate student who uses a wheelchair. Their inability to walk has no bearing on their potential quality as a physicist. So their university has a responsibility to make sure that they are not disadvantaged during their learning and assessment. It would be unfair to arrange their exams to take place at the top of a steep flight of stairs. Their institution needs to be aware of their condition and make sure that they can access the exam.

Similarly, universities must make exams accessible to blind or partially-sighted students by printing their exam papers in Braille or large print. It is obvious that poor eyesight should not prevent a person being a good physicist. So we lecturers and examiners must make sure that our formal assessments of a physicist's abilities reflect only those abilities relevant to being a physicist, while taking appropriate account of a candidate's medical conditions.

A student with a disability can visit a university's Equality and Diversity Unit to have their needs assessed by a qualified professional, who will write a formal Assessment of Needs: a document that is circulated to their teachers, explaining what special provisions are required to prevent the student being disadvantaged by their condition. So a student with hearing difficulty might have an Assessment of Needs containing a statement such as, "Lectures should be arranged in a room with a hearing loop." It makes sense, and can be very helpful.

Learning equality
Things become a lot more complicated where a specific learning difficulty (SpLD) is involved because, whereas hearing or walking are not crucial abilities for STEM subjects (Science, Technology, Engineering, Mathematics), learning is a university's core business. The Equality Service at the University of Leeds has useful information about SpLDs. It says,

"Each SpLD is characterised by an unusual skills profile. This often leads to difficulties with academic tasks, despite having average or above average intelligence or general ability."

This makes a thought-provoking distinction between ability with academic tasks and intelligence. It presupposes particular definitions of "intelligence" and "academic". I don't know how to define either of those things, but it seems safe to say that the particular type of intelligence that is relevant to university work could be called "academic intelligence".

When learning is itself the subject of an Assessment of Needs, as it is for people with Asperger syndrome or dyscalculia, to give two examples, then the assessor's own academic background becomes relevant. Assessors and staff of Equality and Diversity Units often have medical or humanities training. (I confess this is an anecdotal observation, not based on good data.) So their views of STEM-subject exams are not based on experience. Yet they and other medical professionals are required to write Assessments of Needs that carry the weight of law, and dictate some parameters of the teaching and assessment delivered by the subject-specialists.

For instance, while good writing style is deemed relevant to an English degree, physics examiners are often instructed not to mark a particular student's work on the basis of their grammar. The assumption is that ability to write good English is not part of the discipline of physics, and can be separated from it as easily as the ability to walk or to hear. The instruction assumes that the exam should only test the student's ability to calculate or recall facts, rather than a holistic ability to understand an English description, translate it into a calculation, solve the problem, interpret the solution and communicate it well. Of course, no examiner would mark a physicist's work exclusively on their writing style, so we are only talking about a handful of marks at stake.

Of necessity, when the 2010 Equality Act became law, new systems were hastily put in place, without much time for consultation. As a consequence, examiners were never asked, for instance, whether we should expect a physicist to demonstrate good communication ability. As we iron out the system's early teething troubles, we need to address these kinds of question. What exactly is an exam supposed to test? To what extent can we separate our assessment of a physics student's linguistic ability from their other skills? This is not a rhetorical question. I don't know the answer, but I do know that it is complicated and not obvious, and should be debated before the rules are set in stone.

Here is a cartoon that brings the issue into sharp focus.
Cartoon courtesy of QuickMeme www.quickmeme.com/p/3vpax2
It all hinges on what the selection is for. If this is the exam for a swimming qualification, it is entirely unsuitable. If it is a job interview for a steeplejack, then it's a fair test that discriminates appropriately between the best and the worst. It is easy to define the appropriate skills for a steeplejack. How should we define a scientist?

A matter of time
To avoid any misunderstanding, I want to make a clear distinction between teaching and examination. Any good teacher needs to have empathy for their students, and pitch their teaching at a suitable level and tempo for each individual. Being armed with the maximum possible information about the student's particular needs and abilities is always helpful, and the good teacher will make appropriate provisions whenever possible. The extent to which special provisions should be made during exams is an entirely separate question.

The most common provision in an Assessment of Needs is the stipulation that a particular student should be given extra time (typically 25% extra) to complete their exam. This raises a fundamental question. If a person can solve a particular puzzle more quickly than another person, although both might get there in the end, should their university award them a higher grade? A person's intellectual ability cannot be quantified on a single, one-dimensional scale. It is many-faceted, and speed of problem-solving is one aspect of it.

One might suggest that exams do not exist to test a person's intellectual ability, but only to test how much they have learnt during a particular course of study. That sounds like a reasonable idea but, in fact, we do not measure a student's scientific ability at the beginning and end of a science degree course, and award the qualification for self-improvement, irrespective of whether they are any good at the subject. On the contrary, the letters "BSc" are purported to be a standardized benchmark, indicating a particular absolute level of ability. That ability might have been innate when the student arrived at university, or might be the result of more-than-average hard work.

The most common method for determining ability in any academic subject is by timed exams. An exam tests what a student can achieve within a finite time interval. At the end of that interval, anyone who has not finished misses out on the marks that they were too slow to accrue. An exception is made if a medical professional has predicted, rightly or wrongly, that the student would need extra time for their exams, and has written their prediction in an Assessment of Needs. This inconsistency presents a problem. A more accurate and individually-tailored assessment of each student's needs could be made in the exam hall. We could unambiguously identify the students who need extra time, as a result of their own unique abilities and disabilities. They are the ones who run out of time!

So there would be advantages (as well as massive logistical difficulties) in having un-timed exams. They would allow each student to demonstrate their abilities, whilst removing the element of speed from the assessment of their expertise. With the best intentions, we have stumbled into the new age of equality with a flawed mixture of two systems. Candidates whose needs have not been assessed have their abilities measured by a fixed-duration exam, while others have the duration of their exam determined by their abilities.

The question that must urgently be addressed is this. Do we want exams to test what a candidate is able to achieve within a fixed time, or do we only want to know what they can achieve when given as much time as they require? Creating fair and meaningful methods of assessment requires an open debate on what we want from an exam, what we want from a degree classification, and what we want from a physicist.