
Out of Control
Chapter 23: WHOLES, HOLES, AND SPACES

"Good morning, self-organizing systems!"

The cheerful speaker smiled with a polished ease and adjusted his tie. "I am indeed very happy to find the Office of Naval Research joining with the Armour Research Foundation in organizing this conference on what I personally consider an exceedingly important topic, and at such a well-chosen time."

It was a spring day in early May, 1959. Four hundred men from an astoundingly diverse group of scientific backgrounds had gathered in Chicago for what promised to be an electrifying meeting. Almost every major branch of science was represented: psychology, linguistics, engineering, embryology, physics, information theory, mathematics, astronomy, and social sciences. No one could remember a conference before this where so many top scientists in different fields were about to spend two days talking about one thing. Certainly there had never been a large meeting about this particular one thing.

It was a topic that only a young country flush with success and confident of its role in the world would even think about: self-organizing systems -- how organization bootstraps itself to life. Bootstrapping! It was the American dream put into an equation.

"The choice of time is particularly significant in my personal life, too," the speaker continued. "For the last nine months the Department of Defense of the United States of America has been in the throes of an organizational effort which shows reasonably clearly that we are still a long way from understanding what makes a self-organizing system."

Hearty chuckles from the early morning crowd just settling into their seats. At the podium Dr. Joachim Weyl, Research Director of the Office of Naval Research, beamed and continued. "There are three basic elements I'd like to call to your attention which can be studied best. From the area of computers we will, in the long run, draw our essential understanding of the element of memory that is absolutely and inevitably present in what you might call in the future 'self-organizing systems.' You might go so far, as I have done, as to say that a computer is nothing but a means for a memory to get from one state to another.

"The second element biologists call differentiation. In any system that will evolve it is quite clearly necessary that you have what the geneticists have called mutations, essentially random events. Some initial triggering mechanism is needed to push one group in one direction, and another in another direction. In other words, environment containing noise has to be relied on to furnish the triggering mechanism on which the long-term selection rule will operate.

"The third basic element probably presents itself most purely and most accessibly when we are dealing with large social organizations. Let me call it, for the purpose here, subordination, or if you wish, the executive function."

There they were: signal noise, mutations, executive function, self-organization. These words were spoken before the genetic code was cracked, before digital technology was commonplace, before departments of information management systems, and before complexity theory. It is difficult to imagine how alien and innovative these ideas were at the time.

And how right. In one fell swoop 35 years ago, Dr. Weyl outlined my whole 1994 book on the breaking science of adaptive, distributed systems and the emergent phenomena they engender.
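Read with today's eyes, Weyl's three elements map almost directly onto what we would now write as a toy evolutionary loop. The sketch below, in Python, is purely my own illustration and not anything presented at the conference; the numeric goal, the population size, and the noise level are invented stand-ins for memory (persistent state), differentiation (random noise), and subordination (a selection rule).

    # A toy self-organizing loop illustrating Weyl's three elements.
    # All specific numbers here (TARGET, POPULATION_SIZE, NOISE) are invented
    # for illustration; nothing below comes from the 1959 conference itself.
    import random

    TARGET = 0.75           # an arbitrary goal the 'executive function' enforces
    POPULATION_SIZE = 20
    GENERATIONS = 50
    NOISE = 0.05            # strength of the random 'triggering mechanism'

    def fitness(x):
        """Subordination: the long-term selection rule; closer to the target is better."""
        return -abs(x - TARGET)

    # Memory: the population's states persist from one generation to the next.
    population = [random.random() for _ in range(POPULATION_SIZE)]

    for _ in range(GENERATIONS):
        # Differentiation: environmental noise nudges each individual in a random direction.
        mutated = [x + random.gauss(0, NOISE) for x in population]
        # Subordination: keep the better half, then refill the population from the survivors.
        survivors = sorted(mutated, key=fitness, reverse=True)[:POPULATION_SIZE // 2]
        population = survivors + [random.choice(survivors) for _ in range(POPULATION_SIZE // 2)]

    print(f"best state after {GENERATIONS} generations: {max(population, key=fitness):.3f}")

In this toy, the executive function is nothing grander than the sorting step that keeps the better half of each noisy generation, yet it is enough to pull the population toward the goal.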

While the prescience of the 1959 meeting is remarkable, I also see something remarkable on the other side: how little our knowledge of whole systems has advanced in 35 years. Despite the great progress made recently and reported in this book, many of the basic questions about self-organization, differentiation, and subordination of whole systems still remain mysterious.

The all-star lineup who presented papers at the 1959 conference was a public rendezvous of scientists who had been convening in smaller meetings since 1942. These intimate, invitation-only gatherings were organized by the Josiah Macy, Jr. Foundation, and became known as the Macy Conferences. In the spirit of wartime urgency, the small gatherings were interdisciplinary, elite, and emphasized thinking big. Among the several dozen visionaries invited over the nine years of the conference were Gregory Bateson, Norbert Wiener, Margaret Mead, Lawrence Frank, John von Neumann, Warren McCulloch, and Arturo Rosenblueth. This stellar congregation later became known as the cybernetic group for the perspective they pioneered -- cybernetics, the art and science of control.

Some beginnings are inconspicuous; this one wasn't. From the very first Macy Conference, the participants could imagine the alien vista they were opening. Despite their veteran science background and natural skepticism, they saw immediately that this new view would change their life's work. Anthropologist Margaret Mead recalled she was so excited by the ideas set loose in the first meeting that "I did not notice that I had broken one of my teeth until the Conference was over."

The core group consisted of key thinkers in biology, social science, and what we would now call computer science, although the group was only beginning to invent the concept of computers at the time. Their chief achievement was to articulate a language of control and design that worked for biology, social sciences, and computers. Much of the brilliance of these conferences came from the then unconventional approach of rigorously considering living things as machines and machines as living things. Von Neumann quantitatively weighed the speed of brain neurons against the speed of vacuum tubes, boldly implying the two could be compared. Wiener reviewed the history of machine automata, segueing into human anatomy. Rosenblueth, the doctor, saw homeostatic circuits in the body and in cells. In The Cybernetics Group, his history of this influential circle of minds, Steve Heims says of the Macy Conferences: "Even such anthropocentric social scientists as Mead and Frank became proponents for the mechanical level of understanding, wherein life is described as an entropy-reducing device and humans characterized as servomechanisms, their minds as computers, and social conflicts by mathematical game theory."

In an age when popular science fiction had just hatched, and was not the influential element it now is in modern science, the Macy Conference participants often pushed the metaphors they were playing with to extremes, much as science fiction writers do now. At one conference McCulloch said, "I don't particularly like people, never have. Man to my mind is about the nastiest, most destructive of all the animals. I don't see any reason, if he can evolve machines that can have more fun than he himself can, why they shouldn't take over, enslave us, quite happily. They might have a lot more fun, invent better games than we ever did." Humanists were horrified by such speculations, but buried under this nightmarish, dehumanized scenario were some very important concepts: that machines might evolve, that they might really be able to do practical intellectual chores better than we could, and that we share operating principles with very sophisticated machines. These are very much metaphors of the next millennium.

As Mead wrote later of the Macy Conferences, "Out of the deliberations of this (cybernetics) group came a whole series of fruitful developments of a very high order." Specifically, the ideas of feedback control, circular causality, homeostasis in machines, and political game theory were born there and gradually entered the mainstream until they became elemental, almost cliché, concepts today.

The cybernetic group did not find answers as much as they prepared an agenda for questions. Decades later scientists studying chaos, complexity, artificial life, subsumption architecture, artificial evolution, simulations, ecosystems, and bionic machines would find a framework for their questions in cybernetics. A short-hand synopsis of Out of Control would be to say it is an update on the current state of cybernetic research.

But therein lies a curious puzzle. If this book is really about cybernetics, why is the word "cybernetics" so absent from it? Where are the earlier practitioners of such cutting-edge science now? Why are the old gurus and their fine ideas not at the center of this natural extension of their work? What ever happened to cybernetics?

It was a mystery that perplexed me when I first started hanging out with the young generation of systems pioneers. The better-read were certainly aware of the early cybernetic work, but there was almost no one from a cybernetic background working with them. It was as if there was an entire lost generation, a hole in the transmission of knowledge.

There are three theories about why the cybernetic movement died:

  • Cybernetics was starved to death by the siphoning away of its funding to the hot-shot but stillborn field of artificial intelligence. It was the failure of AI to produce anything useful that did cybernetics in. AI was just one facet of cybernetics, but while it got most of the government and university money, the rest of cybernetics' vast agenda withered. The grad students fled to AI, so the other fields dried up. Then, AI itself stalled.

  • Cybernetics was a victim of batch-mode computing. For all its great ideas, cybernetics was mostly talk. The kind of experiments required to test its notions demanded many cycles of a computer, at its full power, in a completely exploratory mode. These were all the wrong things to ask of the priesthood guarding the mainframe. Therefore, very little cybernetic theory ever made it to experiment. When cheap personal computers hit the world, universities were notoriously slow to adopt them. So while high school kids had Apple IIs at home, the universities were still using punch cards. Chris Langton started his first a-life experiments on an Apple II. Doyne Farmer and friends explored chaos theory on a computer they built themselves. Real-time command of a complete universal computer was what traditional cybernetics needed but never got.

  • Cybernetics was strangled by "putting the observer inside the box." In 1960, Heinz von Foerster made the brilliant suggestion that a refreshing view of social systems could be had by including the observer of the system as part of a larger metasystem. He framed his observation as Second Order Cybernetics, or the cybernetics of observing systems. The insight was useful in fields such as family therapy, where the therapist had to include him- or herself in a theory of the family being treated. But "putting the observer into the system" fell into an infinite regress when therapists videotaped patients, then sociologists taped the therapists watching the tape of the patients, and then taped themselves watching the therapists.... By the 1980s the rolls of the American Society of Cybernetics were filled with therapists, sociologists, and political scientists primarily interested in the effects of observing systems.

All three reasons conspired so that by the late 1970s cybernetics had died of dry rot. Most of the work in cybernetics was at the level of the book you are now reading: armchair attempts to weave a coherent big picture together. Real researchers were bumping their heads in frustration in AI labs, or working in obscure institutes in Russia, where cybernetics did continue as a branch of mathematics. I don't believe a single formal textbook on cybernetics was ever written in English.
