*If mathematics is about being certain and precise, then how can probability be part of mathematics, because probability is about not being sure?*

Probabilities are all about measuring and quantifying uncertainty. But I think that students are often a bit confused about what this means. One thoughtful student began writing all her answers to probability questions using the $\approx$ symbol. When asked why she was doing this, she said, “Well, probabilities are just probabilities – they’re not exact”.

It struck me that there are a few different things that she might have meant by this. She might have meant that, when flipping a £1 coin, say, p(Heads) $\approx\frac{1}{2}$, because no coin toss in the real world can ever be perfectly balanced, with precisely equal probability of landing on either side. Any *real* coin, undergoing any *real* throw, will be at least a little bit biased one way or the other (Note 1). So, maybe the $\approx$ symbol is communicating this approximate feature. However, that would seem to apply to *all* real-world measurements, of any kind, since no measurement can be made with absolute precision. If we say that the diameter of the coin is 22.5 mm, this will have to be $\pm$ some margin of error. So, on this basis, all lengths (and, indeed all measurements) would have to use the $\approx$ symbol too, and she wasn't doing that.

Alternatively, the approximate aspect that the student was thinking about might have been the uncertainty of the outcome on *any single coin flip*. On a frequentist view, probabilities are about long-run averages of relative frequencies, not individual instances. Even if we knew for some hypothetical coin that p(Heads) were precisely equal to $\frac{1}{2}$, that wouldn't help us to predict on *any given flip* whether the coin would come down Heads or Tails. There is still uncertainty, so perhaps it was this uncertainty that the student was wishing to capture in her use of the $\approx$ symbol.

Although $\frac{1}{2}$ is exactly in the middle of the probability scale that runs from $0$ to $1$, in a sense it represents *maximum* uncertainty, since if the probability were to take *any other value* we would stand a better chance of being able to predict the outcome on a single throw of the coin. If p(Heads) were 0.6, we could bet on Heads, and we'd expect to be right more than half of the time; if p(Heads) were 0.4, we could bet on Tails, and we'd expect to be right more than half of the time. But with p(Heads) at precisely 0.5, no strategy will enable us to predict outcomes with better than 50% accuracy in the long run.
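The long-run claim can be checked with a quick simulation. This is a minimal sketch: the bias of 0.6, the 100,000 flips, and the fixed seed are all illustrative choices, not anything from the discussion above.

```python
import random

def betting_accuracy(p_heads, n_flips=100_000, seed=1):
    """Always bet on the more likely face; return the fraction of correct calls."""
    rng = random.Random(seed)
    guess_heads = p_heads >= 0.5  # bet on the favoured side throughout
    correct = sum(
        (rng.random() < p_heads) == guess_heads  # did the flip match our bet?
        for _ in range(n_flips)
    )
    return correct / n_flips

print(betting_accuracy(0.6))  # roughly 0.6: betting Heads wins more than half the time
print(betting_accuracy(0.5))  # roughly 0.5: for a fair coin, no bet beats 50%
```

For any $p \ne 0.5$, betting on the favoured side beats 50% in the long run; only at exactly 0.5 is there nothing to exploit.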

It can be hard to help students see that an uncertain outcome does not necessarily imply an approximate probability. We may be able to state a perfectly precise probability for an event, but, unless that probability is $0$ or $1$, we will still have uncertainty over what outcome we will obtain in any particular instance. I think I have often skated over such issues when teaching probability, and inadvertently left students thinking that the topic of probability is all about guesswork and approximation (e.g., subjective probabilities, such as that a particular football team will win a particular match).

#### Inequalities

I have seen similar reactions from students to work on solving inequalities - it feels like it isn't proper mathematics, because we are not getting 'a definite answer'.

When we solve an equation like $2x+5=11$, we obtain an *exact* solution, $x=3$. We find that $x$ takes this one specific value, and no other, and that is that. But, when we solve an *inequality* like $2x+5>11$, we obtain a solution expressed as *another* inequality, $x>3$, and this may seem to students to be expressing some *uncertainty*, perhaps a bit like a probability. We've just replaced one vague inequality with another vague inequality; we still don't exactly know what value $x$ takes! It might be $4$, it might be $3.1$, it might be $4\,000\,000$. There are infinitely many possibilities, just as there were before we began solving it, so it seems as though we have made little progress. "So we *still* don't know what $x$ is!" a student might complain.
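One way to see that $x>3$ really is a *solution*, and not just another vague clue, is to check that it agrees with the original inequality on every test value. A small sketch (the particular candidate values are arbitrary choices for illustration):

```python
# Check that the claimed solution x > 3 of 2x + 5 > 11 agrees with
# the original inequality on a spread of sample values, including
# values just either side of the boundary at 3.
candidates = [-10, 0, 2.9, 3, 3.0001, 3.1, 4, 4_000_000]

for x in candidates:
    satisfies_original = 2 * x + 5 > 11
    satisfies_solution = x > 3
    assert satisfies_original == satisfies_solution  # the two conditions pick out the same values
    print(x, satisfies_original)
```

The two conditions are satisfied by exactly the same values of $x$; that is what it means for $x>3$ to solve the inequality.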

'Solving an inequality' feels like a contradiction in terms. For the students, 'Solving' means 'Finding the answer'. They might concede to saying 'Or answers', perhaps for a quadratic equation, where they know that they haven't solved the equation until they've stated *all* the possible answers. Or, with simultaneous equations, where the values of *both* unknowns need to be found before someone can claim to have solved it. But here there are *infinitely many* possible answers, so we seem to know very little indeed about what the value of $x$ is!

However, infinitely many possible values have also been *ruled out*, so this *is* progress! We have eliminated all values of $x \le 3$. Before we began, $x$ could have been anywhere on the real number line; now we know that it can only be in the open interval to the right of $3$.

The fact that $x>3$ means that $x$ is "*definitely* more than *precisely* $3$" is, I think, sometimes not clear to students. They see inequalities as approximate because one way to think about them is that they capture uncertainty and tell us 'what $x$ might be'. This language of probability seems unfortunate here. If the solving of equations has been introduced to students through "I'm thinking of a number", and the student has to use the equation (like a 'clue') to figure out what the number is, then this may be problematic when we move to inequalities. The student has zero probability of being able to determine the teacher's secret number if the clue is 'just an inequality'.

Perhaps a better way to talk about this is in terms of *solution set*: all the values of $x$ that satisfy the equation or inequality. This way, we don't envisage that there is a single 'right answer', and we just unfortunately don't have enough information to determine it, since our single piece of information happens to be an inequality, which is 'imprecise' or 'vague'. Instead, we see our task as wanting to describe *all the possible values of* $x$ *that are consistent with the given information*. When we solve equations, that often turns out to be just one or two. With an inequality, we want to capture precisely those values that satisfy it. So $x>3$ is not saying that "$x$ is some number greater than 3, but we unfortunately don't know which number". Instead, we're saying that "the solution set is *all* of the numbers greater than 3 *and no others*".

I think this is the way I would deal with a problem I've sometimes seen, where a student writes something like $x>2$ and claims that this is correct. "No," you say. "The answer is that $x$ is greater than 3." And the student says, "Well, if the mystery number we're looking for is greater than 3, then it's certainly going to be greater than 2, so I'm right!" They think you can't mark them wrong for making a true statement about this 'mystery number'. Your answer may have pinned the number down slightly more tightly, by ruling out the numbers between 2 and 3, but $x>2$ is right too (in a way in which something like $x<2$ wouldn't be) (Note 2)!

The point is that we're not seeking *a single mystery number,* and trying to guess what it might be, but a solution set of *all the possible numbers*. The student's solution set $x>2$ contains a whole load of numbers that are less than or equal to 3, and these are not just unnecessary but *impossible*, so the student's solution set is the wrong one.
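The counterexample is easy to make concrete. Here is a minimal sketch (the choice of $x = 2.5$ is just one convenient member of the student's set that the correct solution excludes):

```python
# The student's claimed solution set {x : x > 2} contains values, such as
# x = 2.5, that fail the original inequality 2x + 5 > 11, so it cannot be
# the solution set.
x = 2.5
print(x > 2)           # True: x = 2.5 is in the student's set
print(2 * x + 5 > 11)  # False: 2(2.5) + 5 = 10, which is not greater than 11
```

As a statement about a single mystery number, $x>2$ may be true; as a description of the solution set, it includes impossible values and so is wrong.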

If we want to avoid these difficulties, then there is certainly more to solving linear inequalities than just "Solve it like an equation, but put the inequality sign instead of the equals sign!" But I think the idea of treating an unknown as a 'mystery number' perhaps has its problems when it comes to solving inequalities. We don't just want any old interval that definitely contains a certain mystery number; we want an interval that *doesn't contain* any numbers which the given inequality *rules out*. The language of *solution set* seems to make this much easier to talk about.

### Questions to reflect on

1. Have you encountered students having these kinds of questions/confusions?

2. How do you explain to students what is going on when they are solving inequalities?

### Notes

1. Interestingly, in practice, no matter what you do, it doesn't seem possible to create a significantly biased coin (Gelman & Nolan, 2002). (Of course, a double-headed coin would do the trick, though!)

2. This reminds me of a staffroom discussion about whether a student should receive most of the marks for obtaining a solution like $x<3$ to an inequality question to which the correct answer was $x>3$: "At least they got the right number; they just had the inequality sign the wrong way round" versus "They could hardly have been more wrong - the only possible answer that could have been *less* correct than this would have been $x \le 3$"!

**Reference**

Gelman, A., & Nolan, D. (2002). You can load a die, but you can't bias a coin. *The American Statistician, 56*(4), 308-311. https://doi.org/10.1198/000313002605 ($)
