CHAPTER EIGHT
When I see a spade, I call it a spade.
OSCAR WILDE
The Importance of Being Earnest
When Cecily Cardew confronted her rival for the hand of Ernest Worthing in Wilde’s famous duel over the teacups, she declared that when she sees a spade, she calls it a spade. Cecily might have been less uncompromising and confident had she been a subscriber to the Journal of Personality and read the paper by J.S. Bruner and Leo Postman, published in 1949. (1)
The two psychologists wanted to discover exactly how people perceive and understand visual symbols. To help them, they designed and made some special packs of playing cards. Some cards were quite normal, but others were altered in subtle ways; for instance, the six of spades was turned into a red card and the four of hearts was made black. The anomalous cards were mixed with normal cards, the pack was shuffled and test subjects were shown the cards one at a time.
At first, people were shown each test card for only a brief glimpse. Gradually they were shown each test card for a longer and longer time until they were able to recognise and identify it to the researchers. Even with only a brief glimpse, most people were able to identify all the cards shown. But the extraordinary finding was that the anomalous cards were always identified as being normal, without any hesitation or puzzlement.
The people looking at them actually saw the black four of hearts as either the four of spades or a normal four of hearts. Their perceptions were fitted naturally into the conceptual categories already prepared by their previous experience of playing cards.
When the experimenters increased the amount of exposure to each anomalous card, people began to become aware that something was wrong. One subject shown the red six of spades said: ‘That’s the six of spades, but there’s something wrong with it; the black has a red border.’
Further increases in exposure time made subjects even more confused and hesitant, until most people finally ‘saw’ what was really before their eyes. Most interesting of all, however, was that more than 10 per cent of the anomalous playing cards were never correctly identified, even when exposed for forty times the average exposure needed to recognise normal cards. And many of the people taking part in the test experienced acute personal distress. One person remarked to the experimenters, ‘I can’t make the suit out, whatever it is. It didn’t even look like a card that time. I don’t know what colour it is now or whether it’s a spade or a heart. I’m not even sure what a spade looks like. My God!’
Even the experimenters themselves, who knew every card in the phoney deck, were disturbed by viewing them. Postman told a colleague that ‘though knowing all about the apparatus and display in advance, he nevertheless found looking at the incongruous cards acutely uncomfortable’. (2)
This mental discomfort, and our attempt to avoid it, extends beyond perception of the mere physical symbols themselves and embraces the meaning or significance of those symbols. At around the time that Bruner and Postman were asking questions about how we perceive things, Leon Festinger and his colleagues at Stanford University were formulating a theory about how we believe things and how those beliefs affect our behaviour. Festinger proposed the theory of ‘cognitive dissonance’: that we all strive to keep a sense of consistency between the things that we think we know, and that we either resist any new information that causes dissonance between our beliefs or strive to reduce that dissonance. (3)
The kind of studies on which Festinger based his theory of cognitive dissonance seem rather obvious when looked at in a commonsense way. But they actually expose an important component of our thought processes (or, if you prefer, of our behavioural processes) that is normally invisible to us.
Take, for example, the opinion poll carried out in Minnesota in which 585 people were asked, ‘Do you think the relationship between cigarette smoking and lung cancer is proven or not proven?’ The poll showed that the attitudes of smokers and nonsmokers to this question differed sharply. Among nonsmokers, 29 per cent thought the link was proved and 55 per cent thought it not proved. Those who smoked heavily held very different views. Only 7 per cent of heavy smokers thought the link proved and a whopping 86 per cent thought it not proved.
The important question here is not the factual scientific question of who is right and who is wrong. It is this: why should smokers hold such a strongly different belief from nonsmokers? The answer that Festinger gives is that the smokers are acting to reduce their level of cognitive dissonance by denying the link, despite considerable medical evidence. Knowing that they smoke and accepting the medical evidence would create a distressing inconsistency in their beliefs. The simplest way to reduce that distress is to deny the new information.
Festinger generalised his theory to explain how people will tend to reduce cognitive dissonance that stems from social disagreement. The greater the magnitude of the dissonance, the more strenuous the efforts to reduce it. Festinger identified three mechanisms that we may use to try to reduce dissonance that stems from such disagreement.
The first and most obvious is to change our own opinion so that it corresponds more closely with our knowledge of what others believe. This explains the widespread phenomenon of the ‘group viewpoint’: the tendency of any group of people to wish to achieve a consensus.
The second way is to try to apply pressure to those people who disagree to alter their opinion. This is an equally common phenomenon and one that explains just why some individuals are willing to go to such strenuous lengths to try to make others think as they think.
The third method, according to Festinger, is equally easily recognised:
Another way of reducing dissonance between one’s own opinion and the knowledge that someone else holds a different opinion is to make the other person, in some manner, not comparable to oneself. Such an allegation can take a number of forms. One can attribute different characteristics, experiences or motives to the other person or one can even reject him and derogate him. Thus if some other person claims the grass is brown when I see it as green, the dissonance thus created can be effectively reduced if the characteristic of being colourblind can be attributed to the other person. There would be no dissonance between knowing the grass is green and knowing that a colourblind person asserted it was brown.
There is substantial experimental evidence to support this view. Schachter set up a complex series of experiments involving people brought together in ‘clubs’ to discuss how best to deal with young criminals. Unknown to the test subjects, paid participants had been planted who always adopted certain attitudes in the club debates that followed. One paid participant always agreed with the meeting; another always disagreed, saying, for example, that juvenile offenders should be harshly punished. The study found that people who persistently disagreed with the group’s view were consistently derogated by the group, and there was a move to exclude these people from future meetings of the ‘club’. Even more interesting, half of the ‘clubs’ were made to seem very attractive to the participants, while the other half were made to seem considerably less attractive. The extent to which members derogated and wished to ostracise the person who disagreed with them was far higher in the attractive clubs than in the less attractive ones. (4)
But, of course, not everyone reacts in the same way to learning new information that contradicts their existing beliefs. Festinger concluded that:
For some people, dissonance is an extremely painful and intolerable thing, while there are others who seem to be able to tolerate a large amount of dissonance. This variation in ‘tolerance for dissonance’ would seem to be measurable at least in a rough way. Persons with low tolerance for dissonance should show more discomfort in the presence of dissonance and should manifest greater efforts to reduce dissonance than persons who have high tolerance.
At this point many readers will feel like suggesting that perhaps such a test already exists, having recognised a certain similarity between our discussion immediately above and some descriptions of ‘authoritarian personalities’ and some descriptions of people with high ‘intolerance for ambiguity’. My own suspicion would be that existing tests such as the F scale do measure, to some extent, the degree to which people hold extreme opinions, that is, opinions where dissonance has been effectively eliminated. (5)
The authoritarian personality, with its low tolerance for dissonance and its readiness to adopt the device of derogating others, is one that we have already met in previous chapters and will be meeting in a number of guises in later ones. The F scale referred to by Festinger is a measure of authoritarian tendencies, devised by American researchers to try to measure an individual’s predisposition towards fascism. This is also examined in more detail later.
There have, of course, been many such basic findings about perception in experimental psychology over the past fifty years or more. The question is, what, if anything, do they show in the case of scientific discovery? One scientist who concluded that they show a great deal was Thomas Kuhn of the University of California, Berkeley, who originated the first comprehensive theory of how scientific revolutions come about. (6)
In his book The Structure of Scientific Revolutions, Kuhn popularised the now widely accepted idea of the scientific ‘paradigm’: universally recognised scientific achievements that for a time provide model problems and solutions to a community of scientists engaged in those and related problems.
‘In science,’ says Kuhn, ‘as in the playing card experiment, novelty emerges only with difficulty, manifested by resistance, against a background provided by expectation. Initially only the anticipated and usual are experienced even under circumstances where anomaly is later to be observed.’
This idea is one that many people, including scientists, will find simply impossible to accept. Are we really being asked to believe that when scientist ‘A’ looks at an experimental result he sees one thing, but when scientist ‘B’ looks at the same experiment he sees something quite different, because of differences in their personality? Extraordinary though it may sound, that is exactly the conclusion that Kuhn and others have reached. And the evidence from the history of science is not merely persuasive, it is overwhelming.
One of the most interesting examples that Kuhn cites is Sir William Herschel’s discovery of the planet Uranus, the first planet to be discovered since prehistoric times. This is interesting not merely because it shows the ‘playing card’ syndrome in action, but because it also triggered what Kuhn has called a paradigm shift in the branch of science concerned.
On at least seventeen occasions between the years 1690 and 1781, a number of astronomers, including some of Europe’s most influential observers, had seen a ‘star’ in positions that we now know to have been those of Uranus. One astronomer had even observed the object on four successive nights in 1769, but without noticing the motion that would have disclosed it as a planet, not a star.
When Herschel first observed the same object twelve years later, he was able to examine it with a much better telescope that he had himself designed and built. Herschel saw that the object appeared to have a disc shape, something not characteristic of stars, which are too far away to be resolved. Herschel thus put a question mark against the nature of the object, the first person to do so. When he observed it further, Herschel saw that the object had a real motion with respect to the Earth. He therefore concluded that he was looking at a comet!
Several months were spent trying to fit the new ‘comet’ to a suitable cometary orbit, until Lexell suggested that the orbit was probably planetary. Once the suggestion had been made, it was at once seen to be obvious. As Kuhn put it, ‘A celestial body that had been observed off and on for almost a century was seen differently after 1781 because, like an anomalous playing card, it could no longer be fitted to the perceptual categories (star or comet) provided by the paradigm that had previously prevailed.’
Kuhn points out that the discovery of Uranus did more for astronomy than merely add another planet to the solar system. It prepared astronomers to perceive other such objects, and after 1801 they did indeed begin to see numerous minor planets, or asteroids. No fewer than twenty such planetary bodies were discovered by astronomers using standard instrumentation in the first fifty years of the nineteenth century.
This failure simply to see what is before our eyes is far from rare. In the 1890s, scientists all over Europe were experimenting with cathode rays: electrons accelerated in a partially evacuated tube by an electric potential. Researchers trying to tease out the secrets of cathode rays included great names such as Lord Kelvin, who had contributed to the mathematical foundations of electricity and magnetism, including the electromagnetic theory of light.
One of these hopeful experimenters was the young Wilhelm Roentgen, working at the University of Würzburg in 1895. One day, Roentgen noticed that a screen near his shielded cathode-ray apparatus glowed when the tube discharged. Roentgen locked himself in his laboratory virtually night and day for many days before emerging to announce the discovery of X-rays. By the time he unlocked his laboratory door he had discovered that the new rays travelled in straight lines, that they cast shadows, and that they could not be deflected by a magnet.
When Roentgen announced his discovery it was greeted with surprise and shock. Lord Kelvin pronounced X-rays an elaborate hoax. (7)
Other scientists, though they felt bound to accept the physical results, were staggered by the discovery. Yet, as Kuhn points out, the discovery of X-rays was ‘not, at least for a decade after the event, implicated in any obvious upheaval in scientific theory’:
To be sure, the paradigm subscribed to by Roentgen and his contemporaries could not have been used to predict X-rays. (Maxwell’s electromagnetic theory had not yet been accepted everywhere, and the particulate theory of cathode rays was only one of several current speculations.) But neither did those paradigms, at least in any obvious sense, prohibit the existence of X-rays…. On the contrary, in 1895, accepted scientific theory and practice admitted a number of forms of radiation: visible, infrared and ultraviolet. Why could not X-rays have been accepted as just one more form of a well-known class of natural phenomena? Why were they not, for example, received in the same way as the discovery of an additional chemical element? New elements to fill empty places in the periodic table were still being sought and found in Roentgen’s day. Their pursuit was a standard project for normal science, and success was an occasion only for congratulations, not for surprise. (8)
There can be little doubt that many European scientific laboratories must have been producing X-rays on a substantial scale, yet no one had perceived them. Anyone who thinks that this is merely a case of people ‘not noticing’ the new rays should remind themselves that Britain’s most eminent physical scientist, Lord Kelvin, declared them to be a hoax. There is more to this than not noticing.
Interestingly, at least one other eminent scientist was on the track of X-rays: Sir William Crookes, who had been alerted by some photographic plates that had become unaccountably fogged while covered up. Crookes’s exceptional openness to the possibility of a new form of radiation may perhaps be connected with his high tolerance of dissonant ideas, a trait which he demonstrated repeatedly in his later researches.
In many of the examples given so far, we are looking back in time and examining cases of fundamental scientific importance. But how does this strange phenomenon affect ordinary working scientists today? The answer is that it affects them in exactly the same way that it affected Roentgen and Lord Kelvin. Scientists at Oak Ridge, Los Alamos, Stanford University, the US Naval Laboratory and Texas A&M University have built Fleischmann-Pons cold fusion cells and they have perceived gamma rays, tritium and excess heat energy. Scientists at Harwell and MIT have built Fleischmann-Pons cells and have gone on record as saying that they do not see such results, one eminent MIT scientist even claiming, in the best traditions of Kelvin, that cold fusion is a hoax.
Researchers at Stanford Research Institute, Birkbeck College and King’s College say they have perceived (and, indeed, filmed and recorded) people producing readings on electrical instruments remotely without touching them, by means that are inexplicable. Researchers at other institutions say they have been unable to perceive or record such things and that the results must be conjuring tricks.
Most extraordinary of all, we have cases where the same scientist says that he perceives paranormal phenomena on one occasion, but is unable to see the same phenomena produced by the same individual on a later occasion, as in the case of Dr John Taylor and Uri Geller. No one could accuse Dr Taylor of not being open-minded on the subject. Quite the contrary: he has risked his reputation with a courage and pioneering spirit that has left most of his colleagues gasping. What strange force is it, then, that can cause even the most fearless and objective of researchers to undergo such dramatic changes in perception?
The failure to ‘see’ experimental results sometimes comes about because our expectations direct our attention to the wrong place. Otto Hahn and his colleague Fritz Strassmann are famous for their experimental work that led to the discovery that uranium atoms could split apart, turning into other elements: the basis of all nuclear fission discoveries. But after five years’ hard work in the 1930s looking for experimental evidence of this process, they almost missed it entirely because they were looking for the wrong fission products. As uranium is a very heavy element, they expected the uranium atom to break into other heavy elements, such as radium, thorium and actinium. Actually, what they should have been looking for chemically were light elements from the other end of the periodic table: barium and krypton.
The gas krypton was not identified by chemical means until the fission reaction was already well understood, and the second main fission product, barium, was discovered merely by chance, because the researchers were adding barium to their radioactive solutions to try to precipitate the heavy elements they were looking for. When they found more barium than they were putting in themselves, they realised something strange was happening.
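To make the chemistry concrete, here is one typical fission channel in modern notation, offered as a standard textbook illustration rather than as the precise reaction Hahn and Strassmann first detected. A slow neutron is absorbed by a uranium-235 nucleus, which then splits into exactly the light elements they stumbled upon:

\[
{}^{235}_{92}\mathrm{U} \;+\; {}^{1}_{0}\mathrm{n} \;\longrightarrow\; {}^{141}_{56}\mathrm{Ba} \;+\; {}^{92}_{36}\mathrm{Kr} \;+\; 3\,{}^{1}_{0}\mathrm{n} \;+\; \text{energy}
\]

The bookkeeping balances: the mass numbers give 235 + 1 = 141 + 92 + 3 × 1, and the atomic numbers give 92 = 56 + 36, which is why barium and krypton, and not heavy elements such as radium, were the products to look for.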
Hahn himself appears to have suspected that some unknown influence was at work when he wrote:
As chemists we should be led by this research . . . to change all the names in the preceding [chemical reactions] and thus write barium, lanthanum and cerium, instead of radium, actinium, thorium. But as ‘nuclear chemists’ with close affiliations to physics, we cannot bring ourselves to this leap which would contradict all previous experience of nuclear physics. It may be that a strange series of accidents renders our results deceptive. (9)
It is interesting to compare Otto Hahn’s comments above (‘this leap which would contradict all previous experience of nuclear physics’) with those of Paul Henri Rebut, director of fusion research at Culham, commenting on Fleischmann and Pons’s discovery of cold fusion: ‘To accept their claims one would have to unlearn all the physics we have learnt in the last century.’ Hahn decided to risk the ‘contradiction’ and, as a result, discovered nuclear fission.
When we go back again to the psychology laboratory seeking further enlightenment on the nature of the ‘strange series of accidents’ of which Hahn wrote, we find further experiments suggesting that it is not merely our perception of the contents of a test-tube that can change, but our whole world view. It was as long ago as 1897 that George Stratton first performed an experiment that has become familiar today. An individual who is fitted with a pair of goggles containing inverting lenses is at first completely disoriented by the unaccustomed view of an upside-down world. But after the subject has learned to deal with his new view of the world, his entire visual field adjusts itself to the inverted input. After a period of confusion, the subject sees the world ‘right way up’ again. (10)
‘Literally as well as metaphorically,’ observes Thomas Kuhn, ‘the man accustomed to inverting lenses has undergone a revolutionary transformation of vision.’ In the jargon of experimental psychology, he has experienced a ‘Gestalt switch’.
Often it is easier for a scientist from a different field (a different world, as it were) to see and understand the implications of experimental results. This was the case with John Dalton, originator of the chemical atomic theory. Surprisingly, Dalton was not a chemist and had no special interest in chemistry at first. He was a meteorologist who wanted to understand weather patterns and who concluded that to do so he must familiarise himself with the way in which gases mix and are absorbed by water. He therefore approached the problem with a paradigm very different from that of his contemporaries, who were physical chemists. The ruling paradigm for men like Berthollet and Gay-Lussac, Richter and Proust, was that chemicals had a certain affinity for one another.
To Dalton, the pragmatic weather investigator, the mixing of gases and liquids was simply a physical process in which affinity played no part. Thus Dalton took the already known fact that some chemical compounds contained fixed proportions of substances and merely generalised it to include all compounds.
Dalton’s conclusions were widely attacked by chemists, especially Berthollet, who never accepted the atomic nature of chemical elements. But the new generation of chemists, not committed to the old paradigm, was more receptive.
According to Thomas Kuhn:
What chemists took from Dalton was not new experimental laws but a new way of practising chemistry (he himself called it the ‘new system of chemical philosophy’), and this proved so rapidly fruitful that only a few of the older chemists in France and Britain were able to resist it. As a result chemists came to live in a world where reactions behaved quite differently from the way they had before.
The human phenomenon we are dealing with, though rather worrying and disturbing to our normal world view, has so far still been dealt with in a rational, reductionist kind of way. Test subjects misperceived anomalous playing cards in much the same sort of way that rats learn to navigate mazes and dogs salivate when they hear their food bell. But is there anything really important in all this? Is changing the colour of playing cards merely an amusing trick, or does it tell us something more important, more fundamental, about the way in which we see and understand the world? Kuhn concluded that it does.
Consider the paradigm shift that occurred when Galileo’s view of the pendulum replaced Aristotle’s. To Aristotle and his contemporaries, heavy bodies possessed a tendency to move by their own nature from higher positions to a state of natural rest at a lower position. Thus they considered that a weight on the end of a chain was merely a weight that was being prevented from falling properly, and achieved its state of rest only after tortuous motions attempting to gain the lowest position. Galileo, says Kuhn, looked at the swinging body and saw a pendulum, a body that almost succeeded in repeating the same motion over and over again ad infinitum.
‘Having seen that much,’ he says, ‘Galileo observed other properties of the pendulum as well and constructed many of the most significant and original parts of his new dynamics around them.’
It was from the pendulum, for instance, that Galileo got his only really sound argument to support his view that the rate at which bodies fell was independent of their weight (the story of him dropping cannonballs from the top of the Leaning Tower of Pisa being, sadly, apocryphal). All these things Galileo saw for the first time, even though such observations had been made for thousands of years.
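It is worth a brief aside on why the pendulum supplies this argument. In the modern small-angle formula (later physics, not Galileo’s own notation, but it captures what he observed), the period of a simple pendulum is

\[
T \;=\; 2\pi \sqrt{\frac{L}{g}}
\]

where T is the time of one complete swing, L the length of the pendulum and g the acceleration due to gravity. The mass of the swinging body appears nowhere in the formula: a heavy bob and a light bob on strings of equal length keep time together, which is the pendulum’s version of the claim that bodies fall at a rate independent of their weight.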
Importantly, points out Kuhn, the change did not occur because Galileo was able to make more accurate measurements, or because he was more ‘objective’. On the contrary, the Aristotelian description of the pendulum is just as accurate.
Galileo’s individual genius is, of course, a key factor in the discovery. But it was not a genius for measurement; it was a genius for perception. And, interestingly, Galileo had not been educated entirely in the traditions of Aristotle, but had also been exposed to a medieval paradigm of which little trace remains today except the word ‘impetus’. The fourteenth-century scholars Jean Buridan and Nicole Oresme formulated the theory that the continuing motion of a heavy body is due to an internal power implanted in it (impetus) by the projector that initiated its motion. Oresme wrote an analysis of a swinging stone in what now appears as the first discussion of the pendulum.
Oresme’s view, says Kuhn, ‘is clearly very close to the one with which Galileo first approached the pendulum. At least in Oresme’s case, and almost certainly in Galileo’s as well, it was a view made possible by the transition from the original Aristotelian to the scholastic impetus paradigm for motion. Until that scholastic paradigm was invented, there were no pendulums, but only swinging stones, for the scientist to see. Pendulums were brought into existence by something very like a paradigm-induced Gestalt switch.’
So here we have evidence of a real change in world view taking place and caused by a change of paradigm. But Kuhn has been saving up a much more worrying question for us.
Do we, however, really need to describe what separates Galileo from Aristotle, or Lavoisier from Priestley, as a transformation of vision? Did these men really see different things when looking at the same sorts of objects? Those questions can no longer be postponed, for there is obviously another and far more usual way to describe all of the historical examples outlined above. Many readers will surely want to say that what changes with a paradigm is only the scientist’s interpretation of observations that are themselves fixed once and for all by the nature of the environment and of the perceptual apparatus. On this view, Priestley and Lavoisier both saw oxygen, but they interpreted their observations differently; Aristotle and Galileo both saw pendulums, but they differed in their interpretations of what they both had seen.
Let me say at once that this very usual view of what occurs when scientists change their minds about fundamental matters can be neither all wrong nor a mere mistake. Rather it is an essential part of a philosophical paradigm initiated by Descartes and developed at the same time as Newtonian dynamics. That paradigm has served both science and philosophy well. Its exploitation, like dynamics itself, has been fruitful of a fundamental understanding that could perhaps not have been achieved in another way. But as the example of Newtonian dynamics also indicates, even the most striking past success provides no guarantee that crisis can be indefinitely postponed. Today research in all parts of philosophy, psychology, linguistics, and even art history, all converge to suggest that the traditional paradigm is somehow askew. That failure to fit is also made increasingly apparent by the historical study of science to which most of our attention is necessarily directed here.
The important point here is that what happens during a scientific revolution cannot be reduced simply to a reinterpretation of individual data that remain stable before and after that revolution. As Kuhn points out, a pendulum really is not a falling stone. Oxygen really is not ‘dephlogisticated air’ (as some before Lavoisier thought). So the data that scientists collect from their observations actually are different.
‘More important,’ he concludes, ‘the process by which either the individual or the community makes the transition from constrained fall to the pendulum, or from dephlogisticated air to oxygen is not one that resembles interpretation. How could it do so in the absence of fixed data for the scientist to interpret? Rather than being an interpreter, the scientist who embraces a new paradigm is like a man wearing inverting lenses. Confronting the same constellation of objects as before and knowing that he does so, he nevertheless finds them transformed through and through in many of their details.’
The research work reviewed briefly here seems to me to point to a single unequivocal conclusion: that the human mind plays an active role in the process of perception. The mind is no mere passive mirror reflecting external events. It does not merely represent data in the way that a computer monitor does, on a ‘dot for dot’ basis. Instead it contributes something to the sensory information presented to it. The something that it contributes comes from our existing experience, and the nature and meaning of our existing experience includes the consensus view that we strive to reach to reduce cognitive dissonance to a minimum.

Put at its simplest, what we perceive when we make our observations depends at least in part on what we already believe is there. This in itself carries the disturbing implication that any form of scientific research may be susceptible to an exceptionally subtle form of systematic bias. But there are strong indications from other research that there may be even more powerful forces at work distorting our observations.