The Limits of Science

Ronald W. Dworkin

Winter 2019

In the modern world, science has become the ultimate guide for describing reality. It's easy to see the appeal. Science has a beautiful clarity and economy; its laws are straightforward and unchanging. It reveals the workings of the world around us with such calmness and exactness, and with such an appearance of impartiality, that we feel satisfied with its answers and seek nothing more.

Newtonian mechanics represents the nearest approach to this ideal of science ever achieved. Given the masses, positions, and motions of objects, their future positions and motions can be calculated with extraordinary precision. Sir Isaac Newton's method was a revolution. Before Newton, science was conducted in an altogether different way; investigators speculated rather than experimented. It was Newton who stripped objects of all but their most basic attributes — mass and density — and timed their fall, drawing conclusions from what he observed rather than from what he imagined. By reducing objects to a few measurable characteristics, he was able to discover the universal laws that governed the behavior of all objects.

An analogous revolution occurred in political thought around the same period. While ancient philosophers tried to define virtue, Thomas Hobbes, whose lifetime spanned Newton's early years, took the opposite approach. Stripping people of all but their most basic (and base) attributes — selfishness and vanity — he claimed to explain mankind's mechanics, as it were, and the structure of civilization. His rules of the social contract explained how the basic machine of society works, just as Newton's laws of motion explained how the machine of the universe works.

The scientific revolution has now entered a second phase. It has moved beyond the hard sciences and Hobbesian philosophy and become the unifying principle of many activities in daily life. Through the relatively new disciplines of psychology, neuroscience, human science, and social science, it has inserted itself into how people think and behave at the individual level, affecting everything from interpersonal relationships to psychological health to education. The scientific revolution permeates our lives, shaping our sense of reality and truth. But sometimes it does so in ways that result in sheer absurdity. This is because of flaws within the scientific method itself — in other words, at the scientific revolution's core. These flaws rarely show up in hard science, but they grow more obvious, and more problematic, as humanity takes the place of inanimate objects as the method's primary target. 

To better understand what has happened, it will help to take a brief look back at the scientific revolution's first phase.

NEWTON'S METHOD

In 1666, Isaac Newton was 23 years old and living in the English countryside when, according to legend, an apple fell while he was sitting under a tree. Lost in meditation, he wondered why an apple always falls to the ground and never sideways or upward. His reflections eventually led to his discovery of the law of gravity.

But the falling apple was only a fortuitous trigger. Newton's mind was on the sun, the moon, the stars, and the five planets visible to the naked eye in his time. Nicolaus Copernicus had already shown that the planets orbited the sun, but he assumed they did so in circles. Later, Johannes Kepler had demonstrated that the orbits of the planets follow an elliptical pattern; he was even able to give an exact timetable of planetary motions. But there were no mechanical laws to account for these events. This was the problem Newton was wrestling with when the apple fell. His laws of motion sought to explain the biggest things in the universe — like the planets — and not the little things in people's lives, like apples.

Newton's scientific method is the basis of almost all scientific inquiry today (and, as we shall see, most non-scientific inquiry too). Most children learn in school that the scientific method involves forming a hypothesis, testing it through experimentation, and then analyzing the results to verify or disprove the hypothesis. All this seems clear and benign, but problems lurk just below the surface. The method requires some assumptions that inherently limit how true the results can be.

First, the scientific method is one of intentional ignorance. To understand complex phenomena, the method demands that investigators focus on certain chosen details, isolate them, and leave out all the rest. Thus, willfully or unconsciously, investigators artificially limit themselves and reach conclusions by looking at only a small portion of the facts.

Second, in isolating such details, and in supposing that such isolation accurately captures reality, investigators suppose what is false. Because investigators do not work with all the facts, their conclusions about complex phenomena are also false. At best, a conclusion may apply under the narrowest conditions.

And third, the scientific method encourages investigators to transcend individual details that can be seen or felt, and to substitute generalizations that are convenient for thought but nothing more than phantoms. Investigators credit these phantoms with real existence.

The limits of this method are quickly understood in practice. I encounter the pitfalls every day as a physician. Take, for example, the simple act of measuring a patient's temperature. The scientific method tries to produce something exact and independent of human sensation. This is why doctors use a thermometer instead of just asking a patient if he feels warm or chilled. Science thinks it possible not only to feel, but to measure, how hot a body is; it assumes an absolute standard of hotness and coldness exists outside of ourselves. But what is temperature? Scientists have tried to define the term, calling it an "emergent property" of molecular motion, yet that phrase is too abstract to convey much information. Although temperature itself can be described in exact form (a number), the concept of temperature lacks clear meaning.
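For readers who want to see what the scientific definition actually looks like, kinetic theory (a standard textbook result, not something the essay itself cites) ties temperature to the average kinetic energy of molecules. For an ideal monatomic gas,

$$\langle E_k \rangle = \tfrac{3}{2} k_B T,$$

where $k_B$ is Boltzmann's constant. The statement is exact, yet it says nothing about what feeling warm or chilled means to a patient, which is precisely the gap the thermometer cannot close.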

Thermometers do provide doctors with valuable information; there is a practical correlation between the number and the patient's state of health. But that number is not the same as the truth of the generality that is supposed to underlie the number. In fact, as a physician, I don't even need to use the word "temperature" when using a thermometer. I can just correlate the number on the thermometer with the patient's status and start treatment. 

The scientific method encourages doctors to find some exact and invariable understanding of temperature (by using a thermometer) while disregarding the human and the personal (asking whether the patient feels hot or cold). But doctors also recognize that discounting a patient's symptoms makes no sense. When a patient says he feels warm or cold, his statement — vague as it is — has some meaning. Although imperfect, it is actually more real and certainly more relevant than the vague concept of temperature as an "emergent property." This is why doctors use the scientific method only sometimes.

In medicine, the scientific method generates useful abstract concepts by studying thousands of bodies shorn of their attributes except for the handful being studied; those concepts are then applied to individual bodies in the form of diagnostic categories and treatments. The process works — sometimes — because the human body obviously has certain characteristics universal to all of us.

But the human mind is far more singular. Today's scientific revolutionaries — social scientists, human scientists, and psychologists — have embraced the scientific method wholeheartedly, but they have forgotten that generalizations and universal concepts have far less value when working with non-material subjects such as the mind.

ZOOMING IN

To understand the error in the "second phase" of the scientific revolution, imagine a man, in trying to understand an object, moving away from that object rather than toward it. Instead of handling the object and examining it on all sides, he pushes it into the distance so that all details of color and unevenness of surface disappear, and only the object's outline remains on the horizon. Because the object is now so smooth and uniform, the man thinks he has a clear understanding of it. This would be a delusion, of course, yet this is what professionals pushing the second phase of the scientific revolution argue: place people at a distance; siphon away all but a few of their individual attributes; create general, smooth, and uniform concepts from people's minds; and we will better understand them.

The reason this makes no sense is also the reason why the scientific revolution launched by Newton began in astronomy, rather than in medicine, psychology, or human science: The scientific method works best when applied to an area we know little about. We sit on a tiny speck in the universe and make extremely limited observations about stars, planets, and galaxies. We can talk about them only in the simplest terms. Where there may be curves or parabolas, we see only a tiny fraction of the path from one angle and so call it a line. Because astronomical facts unfold on such an enormous scale that we see only a small portion of them, our ignorance lets us believe we have found the ideal science — only rarely does anything arise to challenge it. The star at a distance really appears to be smooth and uniform, even though it's not. Conveniently for astronomers, their ignorance is not a matter of choice.

For similar reasons, physics and chemistry are the next most perfect sciences. Their scales are so tiny that we can't see most of the details, only general effects here and there. For example, when chemists mix substances, the result is sometimes a new color or a precipitate. The precise movements of all the molecules in the mixture are unknown to chemists, just as the precise movements of all the stars in the universe are unknown to astronomers, yet certain observable changes do occur. Chemists single them out as the main phenomena, when in fact a lot more is involved. By focusing on just a few facts and dismissing countless others, chemists are able to arrange them in some order, generalize about them, and convey the sense of an ideal science the way astronomers do. It is no coincidence that the most perfect equations in chemistry involve gases, which are often invisible and the least amenable to detailed description.

The closer we get to our subject and the more we know, however, the more the scientific method breaks down. An astronomer can feel comfortable calling a faraway star's path a line, even though it may curve out there at the edge of the universe; he can assume the scientific method has revealed the truth, and it will likely never be disproven. But as a doctor, I can't focus on a few facts to the exclusion of others, for life is the level on which I work. In the operating room, I see people react differently to anesthesia all the time; I see lines become curves. I see a patient's facial expression convey more than a supposedly objective measurement. I see the chaos of a dappled skin pattern convey more accurate information than what the scientific method has built out of carefully isolated details.

And though there is a great deal of variety in how human bodies react, it is nothing compared to the variety and unpredictability of human behavior. This is the level on which social scientists, human scientists, and psychologists work, and, unlike faraway stars, human life is something that we know a lot about. For every one observation made about stars, poets and philosophers have made millions about people's habits, behaviors, and feelings. All people, expertly trained and uneducated alike, are intimately familiar with life. This is why the scientific method works so poorly on the level of life. Compared to astronomy, we see so much more. We know so many more details, and therefore we can watch the scientific method go wrong.

Even the most perfect concepts in the hard sciences are unreal. For example, Newton's concept of absolute motion yielded a mathematical formula for planetary movement under certain conditions. Yet that mathematical formula does not deal with actual facts; it deals with a mental supposition, that there is one body moving alone in absolute space — a case that has never occurred and can never occur. The path it predicts is unreal. True, the formula is very accurate, and it lets scientists predict celestial events, but previous formulas also foretold astronomical events with some accuracy. Newton's formula based on absolute motion is a more convenient fiction than prior formulas, but no less a fiction.
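To make the idealization concrete (my illustration, not the essay's), Newton's law of universal gravitation for two point masses is

$$F = G \frac{m_1 m_2}{r^2},$$

an equation stated for bodies shrunk to dimensionless points interacting in otherwise empty space. No such bodies exist; the formula's extraordinary predictive success rests on a situation that is itself a fiction.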

In chemistry, the perfect gas is a gas that achieves a fixed and stable condition in which its molecules cease to interact with one another. But such conditions never actually arise. At most, equations for the perfect gas apply exactly to a real gas at one theoretical point, when the state of an ever-changing gas corresponds exactly to that of a perfect gas. And even then, they apply only if the gas reaches that theoretical point, and it can't — which means the gas equations are more metaphysics than physics.
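The simplest such equation is the familiar ideal gas law,

$$PV = nRT,$$

which holds exactly only for molecules that neither attract nor repel one another and occupy no volume of their own, conditions that no real gas satisfies.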

If a perfect state is impossible to achieve with inanimate objects, it is infinitely more impossible to achieve with human beings. Our minds are in constant flux. The psychologist's concepts, the sociologist's categories, the economist's equations, and the cognitive neuroscientist's principles are all flawed. They depend on a stable state, and yet no one position in life can be maintained in the midst of life's constant motion and innumerable changes. Even if perfect stability could occur, it would do so for only an instant.

The difference between Newton and today's scientific revolutionaries is that the former could conceal the scientific method's defects while the latter cannot. Newton isolated certain celestial phenomena and created an unreal situation through his concept of absolute motion. From there, he derived an equation to estimate planetary motion. That equation works quite well because we test it under conditions that replicate the state of ignorance that Newton created when he limited the conditions of his experiment. He isolated facts and got away with it. Today's revolutionaries, on the other hand, sometimes exhibit a misplaced confidence in the scientific method, believing that they can isolate human variables and apply their concepts in unreal situations. In their case, reality always hits back.

CONSEQUENCES OF THE UNREAL

Newton had warned others not to take the method too far. "To explain all nature is too difficult a task for any one man or any one age. 'Tis much better to do a little with certainty," he wrote.

But his advice was forgotten, as the allure of science and its authority proved irresistible in many disciplines not well-suited for it. When Freudian psychoanalysis dominated the field of psychology in the first half of the 20th century, science was not foremost in psychologists' minds. But in the 1940s and '50s, clinical psychologists began to embrace the scientific model of mental illness. Suddenly, future candidates for their profession were required to earn Ph.D.s. Psychologists began to wear white coats like physicians. Their articles began to read like studies in the Journal of Physics.

This same shift — abandoning the speculative or philosophical study of various aspects of human life in favor of a more scientific and quantitative approach — occurred in other disciplines around the same time. In the 19th century, before it was influenced by the scientific method, economics was called "political economy" and was under the control of philosophers such as John Stuart Mill and Karl Marx. To be an economist, one had to have a worldview; no special mathematical skill was generally required. But by the middle of the 20th century, economics had become highly mathematical. Political science has become a discipline of equations too. And though the notion of "public policy" as a discipline has existed for centuries, degree programs focusing on quantitative analysis did not really begin until the 1930s. Using the scientific method to mine "big data" has since become a defining activity for public-policy professionals. As for the field of "human science," it didn't even exist until the late 1960s. Neuroscience is more logically connected to the scientific method, but "cognitive neuroscience," which ties the brain to human behavior, only came into being in 1976.

Practitioners of various disciplines in the 20th century knew the facts of life were vast and unmanageable and variable from person to person, but rather than satisfying themselves with groping for life's answers through the veil of that reality, as previous generations had done, they used the scientific method to wander outward, seeking something definite and universal in abstraction. Rather than accept life's complexities, they created concepts devoid of the imperfect human element. Rather than use generalizations merely to organize their thoughts, they credited their abstract concepts with a positive and authoritative existence, as an actual representation of facts. Although each person knows himself to be a unity, they carved up people into categories, subcategories, and disciplines, thinking that through fragmentation they would find some deeper meaning about life. In the end, all they did was to travel to the edge of sense and nonsense. This is the crisis in science that we find ourselves facing today.

We see this often in public policy, whose practitioners in government use the scientific method to devise rules for us to live by — including rules that sometimes violate common sense. Often some "study" lies behind the rule, a study using the scientific method and written in the scientific style, and therefore taken to represent unquestionable truth. An example of this is the famous 2007 campus sex study that led to the illiberal tribunals that now try many young college men without due process. The authors abstracted from hundreds of singular lives and relationships to create more generalized categories of sexual harassment, which were then merged into the more universal category of "sexual assault." Using this vast new universal category, suddenly one in five college women had been sexually assaulted, though under any commonsense definition, the number was much lower. This so-called "rape crisis" led to the Department of Education's issuing methods for prosecution that some have likened to the medieval Star Chamber. 

We see similar patterns in psychology. There are more than a million caregivers working in the U.S. mental-health system (a 100-fold increase since 1940), all using some variation of the scientific method. Although much of what these caregivers provide is common sense, they insist on using the scientific forms to vindicate it. For example, it is common sense that a sad person might benefit from friendly conversation, but when the scientific method proves it, the point takes on the aura of scientific knowledge and therefore authority.

To see the problem in action, consider the true story of a middle-schooler whose mother wanted to arrange for her son to get more time to take his math tests. The school administrator told the mother to take him to a psychologist to secure a "formal accommodation." The psychologist found no deficit in the boy's processing speed, working memory, or fluency, so to give him the formal accommodation, the psychologist had to document a disability. So he diagnosed the boy with "depression/anxiety disorder" using sketchy criteria and prescribed psychotherapy, which was required to receive the accommodation. If the mother refused the psychotherapy, her son would not get the accommodation. The mother refused, and the son continued to receive average grades in school.

This mother and her son were victims of the scientific method. Psychologists bundle together certain human attributes and give them names — for example, "anxiety" and "depression." Real people have thousands and thousands of attributes, but psychologists limit the number of attributes so they can form general concepts. Psychologists then create a concept out of these concepts — for example, a "disorder" — thereby broadening the generalization while sacrificing even more depth. The process is not unreasonable; well-defined categories make information easier to comprehend and easier to apply to new cases. And it is a basic principle of the scientific method that the fewer the variables, the cleaner the result. But in their quest for universal concepts, psychologists risk abandoning the real article and taking up with a shadow — that is, taking a concept drawn from a handful of human attributes, declaring it a "disorder," and applying it to a living person with thousands of attributes.

In another true example, a father insisted that his daughter practice the piano at an early age because neuroscience studies had shown that giving young children intensive music lessons enlarged the brain's corpus callosum. Expert musicians reportedly have larger corpora callosa than non-musicians do, and the father wanted his daughter to have every possible advantage in this regard.

The daughter, who hated practicing, was a victim of the scientific method. Neuroscientists strip off all attributes they think they see in one musician and not in another until they find an attribute, namely the brain, which is common to all musicians. One relevant structure inside the brain is the corpus callosum, and since it is known only by size, the corpus callosum and size become correlative attributes that neuroscientists find useful to class musicians by, not because they represent the various musicians particularly well, but because they are found in all musicians. It is like classifying people by their shoes — not because shoes are a valuable method of classification, but simply because everyone wears shoes of one kind or another. Having thought away all the other qualities of musicians except the two correlatives of corpus callosum and size, neuroscientists try to explain good musicianship by these two "thinks" that are left. They credit these "thinks" (corpus callosum and size) with an independent existence and proceed to derive an understanding of good musicianship from them.

Or consider a third example of our obsession with science doing more harm than good. On February 2, 1989, a new regulation published in the Federal Register required that mental-health caregivers working with elderly people have a college degree in the human behavioral sciences. My mother, lacking such a degree, lost her job as a social worker at a convalescent hospital after 20 years. Her "common touch" approach to the melancholy and disappointments of old age was deemed inferior to the categories of thought drawn from scientific abstractions and used by credentialed caregivers. The patients at the hospital rebelled; they despised the new, credentialed, young social worker; they sensed they were being considered solely from a clinical point of view. The scientific method had so denuded the young professional's language of the personal, the vital, and the singular that they felt insulted. They wanted my mother back, but the law prevented it.

Our society's obsession with a certain idea of science has resulted in the popular prejudice that the scientific method is the mark of a thinking person, and that those who question its conclusions are "anti-science" or "deniers" of science. At the very least, people feel inclined to give the findings of neuroscience, psychology, social science, and human science the benefit of the doubt as these disciplines gain more influence over our lives.

But in the process, we neglect their limits. As humanity increasingly looks to the scientific method to understand itself, it will inevitably be disappointed by the results. The methods that work on celestial objects — bodies too distant to be knowable — can never produce truly satisfying results at the intimate level of the human. There is simply too much rich complexity to isolate our variables, or to make statements or formulas or theories that can apply to all of us.

BACK TO HUMANITY

As a child, I loved the art of Dr. Frank Netter, the famous medical illustrator. The bodies of the people he drew matched perfectly with bodies I had already seen, but the colors did not. The violet blue of venous blood and the flaming red of a swollen abscess were unlike anything in my reality. Aside from the colors, organs themselves became transformed by Dr. Netter's brush. The stark gray matter of the brain came alive, breathing intelligence. When red muscles were drawn taut, it was as though the body's structure, firmly planted on the page, was resisting a wrenching and oppressive force.

Once I attended medical school, I realized there was none of this in real life. Neither muscle nor blood was beset by a stirring tremor. No colors came ablaze. At the very distance from which Dr. Netter had painted sick patients, I now stood, seeing nothing of what he had seen and wondering how he had.

I assumed I had been deceived by that special intensity that forms all youthful aspirations. Now I was a man of science. To get with the program, I purposely used the most unimaginative, the most scientific words possible when discussing my patients. Rather than say "red," I said "discolored"; "big" became "enlarged." I tried to excise from my vocabulary any words that seemed not just artistic but even remotely human — words such as heavy, light, hot, cold, vitality, right, and wrong.  

This is one way to understand the purpose of the scientific method: Avoid judgments and feelings whenever possible and rely more on measurements, numbers, and physical dynamics. All sciences make the same effort. Biology is denuded of notions of vitality and forces of personality to become a question of cellular affinities, chemical reactions, and laws of osmosis. Physics is a question of atoms and sub-atomic particles. In psychology, social science, and human science, the scientific method prods professionals to go outside of humanity in search of something exact in itself, something they can call substantial, and then to return with abstract concepts created out of generalities to apply to clients.

During my fifth week in a hospital ward, I had an experience that challenged this view. One morning on my rounds, I talked with an elderly woman whose hair was out of place. She was mostly quiet, though she did say, "I just don't feel well." I dismissed the episode, but when my attending doctor heard about it, she roared into action. The patient was quickly evaluated and transferred to radiology for a ventilation-perfusion scan, which showed that she had suffered a blood clot in her lung. Later, I asked the attending doctor how she knew the patient had been in such trouble. She smiled and said that most elderly women, even the sickest, remain attentive to their hair. Only when they are in extremis do they ignore it. The second hint was the patient's complaint. When old people complain that "this hurts" or "that hurts," it's usually not an emergency, she said. Something in them probably does hurt; getting old means every body part hurts at one point or another. It is when elderly patients are imprecise and say "I just don't feel well" and are unable to blame any specific body part that a serious problem looms.

Life itself — which can only be known and experienced through real interactions with human beings — had taught me more than any controlled experiment could. I learned my lesson that day about the value of using my senses. A doctor must look at and listen to patients, even to the small stuff, and not limit his thinking to general categories.

The scientific method has enormous value in medicine, but it had made me distrust my senses. At first glance, this point seems counterintuitive. The whole purpose of Newton's scientific revolution was to observe rather than to speculate, to use the senses to discover facts and reach objective conclusions rather than to ponder ideas in some medieval study. But the speculators who preceded Newton and who had tried to answer fundamental questions of the universe by merely thinking deeply about them committed another crime in the eyes of purveyors of the scientific method: They inevitably mixed their feelings in with their conjectures.

By relying on his senses and looking outward, Newton avoided the trap of pure guesswork. In the process he created a purely intellectual representation of the universe. His discovery was (and remains) a tremendous practical success. But inherent in the scientific method is the desire to clean all feeling out of fact. In the process, the real is emptied of meaning until it becomes pure generalization. The senses themselves cease to be valued.

In the hard sciences, this defect is a price worth paying. But as the scientific method creeps into the human realm, the desire to be objective, to empty facts of feeling, demands the abandonment of the senses as well as the emotional and spiritual realms of the human person. Trying to create a purely intellectual representation of the cosmos is one thing; trying to create a purely intellectual representation of human beings is quite another.

Reflecting once again on Dr. Netter's medical illustrations can reveal the limits of the scientific method and a path forward for its practitioners in the 21st century. His pictures were not completely accurate. They could not be. Yet it is also wrong to say that he simply thought up or invented his images of the human body. He saw the human being from his point of view. That was the whole point. His eyes absorbed and dispersed the rays of light at an angle special to him. Different impressions reached his nerve centers in their quest for synthesis, for fusion. And he managed to rouse the torpid mass of human flesh, wrest a feverish excitement from axons transmitting signals or blood pulsating through arteries, and communicate to viewers like me the emotion engulfing him.

With his senses, he studied bodies and their parts, yet behind his senses was a unity — a single individual with physical, intellectual, emotional, and spiritual facets, as complex as a universe. In that single unity, fact could not be divorced from feeling; to understand humanity, the two could not be separated. This was Dr. Netter's insight. In his art he tried to understand another human being in the same way he tried to understand himself. It was not the scientific method. It was life.

The scientific revolution has been an enormous success; it has improved our health and prosperity while helping us to better understand the natural world. But in their zeal to apply the scientific method to the complexity of humanity itself, scientific revolutionaries sometimes push too far. The time has come to pause the second phase of the scientific revolution — to recover a more humble and skeptical approach to what the scientific method can achieve, to unite the emotional, spiritual, and intellectual dimensions of life, and to find our way back to our humanity.

Ronald W. Dworkin is a physician and political scientist. His work can be accessed at RonaldWDworkin.com.

