Purpose and Desire Page 6
The great Norbert Wiener (1894–1964) of the Massachusetts Institute of Technology laid the groundwork for realizing this vision, in machines that ranged from the mundane (like a self-guided torpedo) to the sublime (like a computer).14 What came out of this vision was the clockwork homeostasis: the homeostasis machine that did the deed, but with its vital core safely reamed out.
Like Bernard, Wiener drew his inspiration from the seemingly universal tendency of living systems to self-regulation, in forms ranging from simple maintenance of an internal property like body temperature, to goal-seeking behavior, to intelligence and learning. Where Bernard saw a fundamental property of life, though, Wiener saw a machine at work. If a machine could be made that could mimic life’s machinery of homeostasis, that machine would behave as if it were homeostatic, that is, self-regulating. And that would be good enough.
Wiener’s early World War II work on self-aiming antiaircraft guns illustrates the problem admirably. Wiener described the problem in this way:
The antiaircraft gun is a very interesting type of device. In the First World War, the antiaircraft gun had been developed as a firing instrument, but one still used range tables directly by hand for firing the gun. That meant, essentially, that one had to do all the computation while the plane was flying overhead, and, naturally, by the time you got in position to do something about it, the plane had already done something about it, and was not there.
It became evident—and this was long before the work that I did—by the end of the First World War, and certainly by the period between the two [wars], that the essence of the problem was to do all the computation in advance and embody it in instruments which could pick up the observations of the plane and fuse them in the proper way to get the necessary result to aim the gun, and to aim it, not at the plane, but sufficiently ahead of the plane, so that the shell and plane would arrive at the same place at the same time. That led to some very interesting mathematical theories.15
At one level, then, the problem of shooting an airplane out of the sky is quite simple: aim a gun at a target so the projectile and target meet at the time of explosion. It is so simple an idea, in fact, that we scarcely give a thought to what goes into successfully connecting projectile to target. The visual image of the target must be analyzed for information about its physical location: its distance, its elevation, its trajectory in space. A device, the gun, must be aimed at it, so that a projectile launched from the gun will arrive at the target’s future location and detonate there. Shooting down an aircraft is, in short, a problem of information flow and management: information about the environment (the location and trajectory of a target) comes in and is used to inform the operation of a machine (the gun) that will change that environment (obliterate the target).
As long as aircraft were slow, low-flying, and fragile, a skilled artilleryman could do all this on his own, and early antiaircraft warfare was little more than that—cannon fire directed against stationary or slow-moving targets, like an enemy’s observation balloons. For a mobile target, like a fighter plane or bomber, the problem quickly becomes formidable. Altitude, speed, and direction of the target and time of the projectile’s flight to the target all have to be taken into account. As warplanes became higher-flying, faster, and more maneuverable, the speed of reckoning needed to place the projectile at the target quickly outstripped the capacity of human brains and human-operated machines. The obvious solution to this problem was to develop machines that could do the reckoning faster and to make the machine, not the gunner, operate the weapon. The artilleryman’s own reckoning skill could then defer to the mechanical “brain” that actually aimed and fired the gun. Some progress toward this goal occurred during the First World War, but the perilous peace that followed added urgency to the endeavor, so that a furious arms race unfolded between development of ever faster and more maneuverable aircraft and the ever faster and more sophisticated automation of aiming, fusing, and firing projectiles to blast those aircraft out of the sky.*16
Square into this problem came Wiener, child prodigy, mathematical genius, polymath, and prophet of cybernetics—the realm of the self-directed machine. (The word “cybernetics” translates literally from the Greek kybernetikos, rendered as “good at steering” or “good pilot.”) The outlines of Wiener’s biography are astonishing: graduation from high school at age eleven, bachelor’s degree in mathematics from Tufts University at age fourteen, graduate study in zoology at Harvard, and a Harvard doctorate in mathematical logic by age eighteen—maintaining through all this a serious study in philosophy. This remarkable intellectual career underscores Wiener’s most salient trait as an engineer: he was never content simply to find a practical solution to a problem, but was compelled to burrow deep into it, always to seek the underlying principles of engineering problems, indeed the very philosophy of the problem. During World War II, the engineering problem that consumed Wiener was the aiming of antiaircraft weapons, which, despite all the technological advances in ranging and tracking that preceded him, still performed dismally.
Wiener saw that the fundamental problem was not just one of ranging, or of better mechanical tracking, or of improved fuse timers or triggers; it was a problem of information and uncertainty. Information about a target—its distance, elevation, and trajectory—came into a tracking system. These attributes all carried certain rates of change and error. That information then had to be fed into a machine that calculated some probability of a specified future state, namely, where the target likely would be at the end of a projectile’s flight. Once this analysis was in hand, the gun would have to track its aim to the target’s less-than-perfectly-predictable trajectory and make a decision whether to launch the projectile or to wait for a bit in case more information later would produce a more favorable outcome.
What was needed, in Wiener’s opinion, was a general theory and formal mathematics of this ongoing process, which came to be embodied in the simplest cybernetic system: the closed-loop negative feedback controller. Wiener did not invent the idea of negative feedback control: the famous Watt governor for steam engines, itself a negative feedback controller, had been invented a century and a half before (Figure 4.1). Steam engines, however, were simple, stupid, and predictable, and this meant that the machines that controlled them, like the Watt governor, could be stupid, simple, and mechanical.* In contrast, a fighter plane or bomber steered by a pilot intent on a target was clearly intelligent, devious, and motivated in ways that steam engines were not. This meant that machines that sought to shoot them down had to be intelligent in a way that a Watt governor did not have to be. Wiener saw clearly that a far more general theory of intelligent, goal-seeking agents needed to be developed. Wiener’s approach to the problem—delving into its philosophical core—meant that once machines could be made to behave “intelligently,” it would be possible to do much more than simply aim antiaircraft guns. The way would be open to developing truly intelligent machines.
Figure 4.1
The Watt governor, or centrifugal governor, invented in 1788 by James Watt.
Wiener’s closed-loop negative feedback controller had a particular resonance for physiologists. Indeed, it was Wiener’s training in zoology that had inspired him to think of such devices in the first place. So, it’s worth delving into some of the details of how this type of cybernetic device works.
The closed-loop negative feedback controller consists of several subsidiary machines hooked together in a particular way (Figure 4.2). At the heart of the controller is a device called a comparator. As the name implies, this device compares information from two sources. The first, the set point, is an internal source that specifies a desired state of the environment, and the second, the signal, is information about the state of the environment as it is. In the example of a self-aiming gun, the set point might be to position the target squarely in the center of the gun’s crosshairs, while the signal would be the actual location of the target with respect to the crosshairs.* The comparator takes these data and calculates an error signal, which encodes any mismatch between the set point
and signal, in this instance, the deviation between where the aircraft is and where the gun is aimed. The error signal is then sent to a series of effectors: in a self-aiming gun, these will be the motors and gears that rapidly swivel the gun on its mounts to adjust azimuth and elevation angle. As the gun moves, the relative positions of the aircraft’s image and the center of the crosshairs will change. This will generate a change in the signal, which will feed back onto the comparator, which can generate a new error signal, which adjusts the aim again. Thus, information flows in a continuous closed loop between comparator and effector, all engineered with the goal of negating any deviation between set point and signal, in this instance, between the position of the target in the sight and the center of the crosshairs. It is, in short, a negative feedback controller.
Figure 4.2
A closed-loop negative feedback controller.
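For readers who like to see the gears turn, the loop just described can be sketched in a few lines of Python. Everything specific here is invented for illustration (the one-dimensional "aim," the gain of 0.5, the steadily drifting target); it stands in for the real azimuth-and-elevation machinery.

```python
# A minimal sketch of a closed-loop negative feedback controller.
# The comparator subtracts the signal (where the target actually sits
# relative to the crosshairs) from the set point (target dead center,
# i.e., zero offset); the effector then nudges the aim by a fraction
# of the resulting error on every cycle.

def run_controller(target_positions, gain=0.5):
    """Track a moving target; return the aim position after each cycle."""
    set_point = 0.0   # desired offset of target from crosshairs: none
    aim = 0.0         # current aim (a one-dimensional stand-in for azimuth)
    history = []
    for target in target_positions:
        signal = target - aim        # where the target appears in the sight
        error = signal - set_point   # comparator: mismatch from the desired state
        aim += gain * error          # effector: swivel the gun toward the target
        history.append(aim)
    return history

# A target drifting steadily across the sky:
aims = run_controller([t * 0.1 for t in range(50)])
```

Run against the drifting target, the aim settles into step with it, trailing by a small constant lag: a continuous loop of information between comparator and effector, each cycle working to negate the deviation.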
That’s simple enough to grasp, and if everything is calibrated and working properly, the controller will continuously work to minimize the error signal, that is, to keep the moving target positioned squarely in the sight’s crosshairs until the projectile is launched. It’s the “if” in that last sentence that’s the problem, though: what precisely do we mean when we say the system is calibrated “properly”? In fact, “proper” calibration is a delicate balance between sensitivity and response of the controller’s various parts. If the system is too sensitive to perturbations in how it senses the actual state or if the response of the motors is too emphatic, the gun will swivel wildly, “hunting” all over the sky but never being able to zero in on the target. If the miscalibration is the other way, the launched shells will always end up at the place where the target had been rather than where it will be. This is the problem Wiener solved, setting down the theory for optimizing the performance of any closed-loop negative feedback controller. In so doing, he laid the foundational theory for all self-controlling mechanical systems.
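The calibration trade-off can be made concrete with a toy proportional loop in Python, aimed at a stationary target from a cold start. The numbers are invented for illustration: in this discrete caricature, a gain between 0 and 2 converges on the target, while a gain above 2 overcorrects by more with every cycle, which is exactly the "hunting" just described.

```python
# Step response of a toy proportional controller at two gains:
# one well tuned, one so "emphatic" that every correction overshoots
# by more than the error it was meant to cancel.

def step_response(gain, steps=20, target=1.0):
    """Aim at a fixed target from zero; return the error magnitude each cycle."""
    aim = 0.0
    errors = []
    for _ in range(steps):
        error = target - aim
        aim += gain * error      # the correction applied this cycle
        errors.append(abs(error))
    return errors

well_tuned = step_response(gain=0.5)  # errors shrink smoothly toward zero
too_hot    = step_response(gain=2.5)  # errors grow: the gun "hunts" ever more wildly
```

The opposite miscalibration, a gain too close to zero, converges so sluggishly that the shells arrive where the target used to be.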
Once Wiener systematized and mathematized the concept, the skies opened, so to speak. They were pried wide in no small part by Wiener himself, who was much more than just a mathematician and an engineer. He saw in the negative feedback controller, indeed in all such “well-steered” or “cybernetic” mechanical systems, the solution to the philosophical problem that had bedeviled biology for centuries, namely, how to explain life’s obvious purposefulness without having to resort to the mysticism that it implied.17 If living things were themselves cybernetic systems, then all that troubling philosophical baggage would fall away. Open now was the secret of life, of the mind, of the organization of society and economies, of politics, of health and illness. And in the years following World War II, which had been won just as much by those with the slide rules as by those bearing rifles and steering the tanks and ships and piloting the planes, plenty of people were ready to take up the challenge and to bring cybernetic technology to the service of the new world a-dawning.
Looking back on those heady first days of cybernetics, it’s hard not to be swept up in the enthusiasm and technological optimism the field engendered. To get a glimpse of this brave new world, one could, for instance, take a stroll through the proceedings of the famous Macy cybernetic conferences and their intriguing titles—“The Algebra of Conscience” is my personal favorite. Remarkable personalities participated, including a stellar pantheon of the advanced thinkers of the day drawn from fields as diverse as computer science (Wiener himself), psychiatry and neurobiology (Warren McCulloch), mathematics (Wiener’s archrival John von Neumann), anthropology (Margaret Mead), behavioral and cognitive science (Gregory Bateson), genetics (Max Delbrück), and many more.18
But you can also get an idea of the climate by paging through any popular science magazine from the 1950s, such as Scientific American or Popular Science. In the advertisements, the articles, and the commentary, the vision was enticing: the door was opening to a new age and cybernetics was the key that would open it. Machines would now serve us, taking over the many drudgeries that consumed our everyday lives: self-operating machines would steer our vehicles on the roads, over the seas, and through the air; robots would stand tirelessly on assembly lines doing repetitive drudge work for months, not hours, at a time; machines would even do our thinking for us. And legions of men in short-sleeve white shirts and crew-cuts would take us there, guided by the genius of cybernetics.
It wasn’t all sunny uplands, of course. Much of the technological boosterism that oozes from the pages of these magazines was fueled by military necessity. We were cheerfully told that even a missile could have a high IQ and home in on enemy “pigeons” like a stealthy falcon (Figure 4.3). Looking back on those days leads one to reflect on just how dangerous the world was then and how fortunate we are to have come through it intact. One could also find subtler but equally dark shadows draping other areas of the social landscape. When the cyberneticians turned their attentions to people and societies, for example, the enthusiasm and optimism that buoyed up smart missiles could easily lapse into bone-chilling arrogance.19 Consider how the behaviorist B. F. Skinner put the prospects for engineering human societies along cybernetic principles: “All men control and are controlled. The question of government . . . is not how freedom should be preserved, but what kinds of controls are to be used and to what ends.”20 Batteries weren’t included, presumably.
Figure 4.3
The cybernetic promise of the 1950s.
But, back to homeostasis. If cybernetics found its inspiration in biology’s self-regulating systems, then physiologists in the 1950s began to look to cybernetics to return the favor: perhaps cybernetics could go beyond merely imitating homeostasis and solve the very problem of living homeostasis itself? And so was born the notion of homeostasis as the outcome of a negative feedback control machine. The clockwork homeostasis of Norbert Wiener and his many acolytes was adopted into the biological family.
Opinion will, of course, differ on how that’s all been working out. On the one hand, remarkable insights and benefits have flowed from a cybernetic approach to difficult physiological problems. Certain types of movement disorders, like the tremors of Parkinson’s disease, can be explained remarkably well as the “hunting” of a poorly tuned negative feedback controller, as I just described, and this has led to some very successful cybernetic-inspired treatments for such debilitating conditions.21 The remarkable recent development of “smart” prosthetic appendages likewise owes its success to understanding the essential cybernetics of limb motion and incorporating it into mechanical limbs.22 And the head-spinning recent advances of artificial intelligence, of computers that can learn to play Pong from scratch or that can win at the game show Jeopardy, were made possible largely by intelligence being treated as a cybernetic system of learning and feedback.
On the other hand, there have been failures, and the reasons for the shortcomings are instructive because they underscore the ultimate inadequacy of conceiving of life as if it were a machine. I want to illustrate this using a particular kind of clockwork homeostasis as my example: regulation of body temperature. I’ve chosen this in part because body temperature homeostasis happens to be the field where I cut my professional teeth, but also because temperature homeostasis was among the first regulatory problems that physiologists came close to taming by cybernetics. It also illustrates how the machine metaphor for homeostasis can unravel.
Figure 4.4
The living thermostat?
Body temperature is clearly a regulated property. If an animal’s body temperature exceeds some value, in ourselves about 38°C, heat loss mechanisms like sweating or panting or flushing of the skin are activated and heat generation is dialed down. If the body cools to a temperature lower than this, heat retention and heat generation mechanisms, like shivering or withdrawal of warm blood from the skin, are activated (Figure 4.4). The result is a body temperature that is steady as she goes, sustained at this target body temperature, the set point temperature, even as environmental temperatures and circumstances vary widely.
Armed with the metaphor of the clockwork homeostasis, physiologists in the 1950s began the search for the components of the body’s living thermostat. For a time, the quest yielded great dividends. The first breakthrough precisely located what seemed to be the brain’s “thermostat.” By penetrating the brains of sheep and dogs with tiny probes that could heat or cool local patches of brain tissue, physiologists quickly located the putative thermostat in a small region at the base of the brain, just in front of the pituitary, known as the preoptic anterior hypothalamus, or POAH.23 By heating this patch, the brain could be tricked into thinking the body temperature was too high and the body would anomalously begin to dump the supposedly “excess” heat. By cooling the patch, the trick could be reversed: the brain would think the body was cooler than it actually was and would direct the body to conserve heat and stoke up the body’s furnaces.
Once the POAH thermostat was found, the other putative components of the brain’s negative feedback controller started turning up. Certain neurons in the POAH, for example, would fire at a rate proportional to POAH temperature; these had to be temperature sensors, encoding temperature in their firing rates. Other POAH neurons would fire at a steady rate, no matter what the temperature; these had to be the sources for the set point. To add icing to the cake, the set point even seemed to be adjustable, like the thermostat of a house. The puzzling problem of fever, for example, was shown not to be a breakdown of thermoregulation, as physicians had long thought it to be, but a simple upward adjustment of the brain’s “thermal set point.”24 Just as house temperature could be made to rise by turning the dial up on the thermostat, so too could body temperature be made to rise or fall with the adjustment of the brain’s thermostat. It all made such satisfying sense. Thermal homeostasis was solved.