Several years ago I was returning graded midterms to a calculus class. One particularly bright student approached me afterward to ask why one of his solutions had been marked wrong. The problem in question went something like this: “f is a function such that [insert the appropriate list of properties here]. Show that [the quantity in question] is equal to 5.”
This student used a formula that he had learned in class to produce the desired result: 5. Unfortunately, the formula that he used didn’t apply to this particular problem. He produced the “answer”, but did so with faulty reasoning.
My student was skeptical of that explanation. After all, his reasoning had led to the correct answer: didn’t that mean that his solution was valid? He eventually gave up pleading his case, but was never convinced that he hadn’t solved the problem.
A similar incident occurred when I was an undergraduate taking calculus. I was working on what appeared to be a straightforward homework problem and checked my answer against the one in the back of the textbook. They didn’t match. Other classmates of mine had the same issue. Together, we spent a bit too much time trying to get our answers to match the textbook’s. I eventually finagled my solution into submission and convinced myself that I had solved the problem. When we brought this up to our professor, he offered the explanation that should have been obvious all along: the answer in the back of the book was wrong; we had all been right before we looked it up.
Both my former student and I had the requisite knowledge to understand the solutions to our problems. But neither of us could overcome our presuppositions, at least not right away. We were more focused on getting the “answers” we already believed in than on thinking logically.
I am not the first person to observe that humans tend to generate post hoc justifications for previously held beliefs (even beliefs about math problems). Cognitive psychologists have been studying this for decades, and that is where we will now turn our attention.
Wason’s four-card task
In the 1960s, cognitive psychologist P. C. Wason (coiner of the term “confirmation bias”) developed an ingenious experiment to study how people reason. The basic task is as follows.
Four cards are placed in front of you. You know that each card has a letter on one side and a number on the other, but for now you can only see one side of each card. Your four cards say D, B, 3, and 7. You must determine which cards you need to turn over to test whether the following proposition is true: if there is a D on one side of the card, then there is a 3 on the other side of the card.

You probably knew right away that you have to turn over the D card to be sure that there is a 3 on the other side. You also likely realized that you don’t need to turn over the B, because that card could not invalidate the proposition no matter what number is on the other side.
It is trickier to deal with the 3 and 7 cards. Many people choose to turn over the 3, but it doesn’t actually matter what is on the other side of the 3. A 3/D card would not invalidate the proposition, nor would a 3 with any other letter. Instead, you have to turn over the 7. This is because a 7/D card would invalidate the proposition.
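If you like, you can verify this reasoning mechanically. Here is a minimal Python sketch (my own illustration, not anything from Wason’s work) that brute-forces the task: a card must be turned over exactly when some possible hidden face could make it a counterexample to the rule.

```python
# Brute-force the four-card task: a card must be turned over exactly when
# some possible hidden face would falsify the rule
# "if there is a D on one side, then there is a 3 on the other".

import string

LETTERS = string.ascii_uppercase  # possible letter faces
NUMBERS = range(10)               # possible number faces

def falsifies(letter, number):
    # A letter/number card is a counterexample iff it pairs D with a non-3.
    return letter == "D" and number != 3

def must_turn(visible):
    # Letters hide numbers and numbers hide letters; turning the card over
    # is informative only if some hidden face would yield a counterexample.
    if isinstance(visible, str):
        return any(falsifies(visible, n) for n in NUMBERS)
    return any(falsifies(letter, visible) for letter in LETTERS)

cards = ["D", "B", 3, 7]
print([card for card in cards if must_turn(card)])  # prints ['D', 7]
```

The 3 card drops out because no letter on its reverse can pair a D with a non-3, which is exactly the point most people miss.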
Wason and his collaborators used this and similar tests to conduct several fascinating studies, but one in particular is relevant for this discussion.
Rationalization
In a 1976 paper, Wason and J. St. B. T. Evans describe an experiment in which they presented four groups of subjects with four different solutions to the four-card task. Subjects in all groups were told that the solution given to them was correct, but only one group was given the real correct solution (turn over D and 7). The other three groups were given incorrect solutions (either turn over D, 3, and 7; or turn over D and 3; or turn over only D).
Subjects were then asked to explain why the solution given to them was correct and to rate their confidence in their reasoning. Remarkably, subjects who were shown incorrect solutions were, on average, more confident in their (incorrect) reasoning than the subjects who explained the correct solution.
One interpretation of this result is that some subjects were not reasoning. They were rationalizing something that they believed to be true.
The subjects who so confidently believed their own reasons for turning over the 3 card were just as mistaken as I was when I reproduced the answer in the back of my textbook. Both the test subjects and I were given incorrect answers, and we used our powers of “reasoning” to generate explanations that fit them. We had wrong answers and wrong reasons.
When my calculus professor told me that my textbook’s answer was incorrect, I knew that my reasoning was incorrect. After that, it was easy to correct the error. But note the sequence of events: my knowledge of the answer changed before I could spot my mistake.
My student’s error did not come with such an obvious consequence. His wrong reasoning had produced the right answer, so it was not obvious to him that he had made a mistake. This makes me wonder whether the subjects in the Wason and Evans experiment who were given the correct solution believed it on a logical basis or not. Unfortunately, the paper doesn’t discuss this.
Obligatory Closing Joke
How did the student convince himself that his answer was correct?
He rationalized.