Drama is a mirror in which nature is reflected. But if this mirror is an ordinary mirror, a flat, even surface, it will reflect only a dull, unrelieved image of objects, faithful but colorless; we know what color and light lose in simple reflection. Drama must therefore be a mirror of concentration which, far from weakening them, gathers and condenses the coloring rays, turning a glow into light, a light into a flame. Only then is drama acknowledged by art.

Preface to Cromwell
Victor HUGO



How René Thom changed molecular biology

This English version of a French original was read by Jean-Pierre Bourguignon at the Institut des Hautes Études Scientifiques for the celebration of the 100th anniversary of the birth of René Thom.

As we celebrate the anniversary of René Thom's birth, it may be time to review how his thought influenced molecular biology, a science whose presuppositions he criticized so sharply. The prophetic nature of his vision is revealed in an exchange with Antoine Danchin, following a discussion on determinism that began in the journal Le Débat. This is what I shall attempt to bring to light in this tribute. What he criticized in this still young science was the anecdotal nature of many of the objects or processes it highlighted, as well as its apparent lack of generality. He mocked the “arm-cutters”, “head-cutters” and other similar enzymes that were supposed to explain life. A mathematician at heart — but much more Aristotelian than Platonist — he was only interested in phenomena leading to general laws. What he understood by life was not the collection of organisms or the objects that make them up, but their form and the “animate” character that, for example, drives the morphological evolution from embryo to adult organism.

In a letter to me dated April 3, 1981, René Thom wrote as follows:

What I criticize about Molecular Biology is the assertion, raised to the level of a ritually repeated dogma, that everything in biological organization can be reduced to molecular interactions, an assertion that it is certainly not possible to invalidate, since living beings are made of molecules. The problem is whether the description of molecular interactions oriented by higher levels will not require the introduction of broader entities - such as “fields”, susceptible to both biochemical and “vitalist” definitions.

Vitalism is the preconception that there is a principle specific to life that accounts for its “animate” aspect, as seen in the details of movement in living beings. The manifestations of this animation have been described over the centuries in a great many forms, often organized in hierarchies, distinguishing in particular the “vegetative” animation of plants, which is found at the most elementary level of animation in animals, which are in addition endowed with a higher degree of animation. This animal animation corresponded to sensitivity, and then, in the animal chosen as the ultimate reference, man, to intellectual activity. It will come as no surprise to find René Thom holding this view, grounded in his in-depth reading of Aristotle. The question for the experimental biologist, then, is whether there is not a family of biological functions — function, a term used in mathematics, is a very ill-defined concept in biology — that would account for this animation. This would make it possible to explain this enigmatic vitalism through an original concept that would bring to light hitherto unknown physical principles presiding over the constitution of biological chemistry.

To do this, we first had to take into account the points of agreement that united us, as René Thom remarked in the same letter:

My dear Danchin,
Thank you for your long letter. Reading you, I have the impression that our positions are not very far apart. If you accept the ideas: a) determinism (in principle); b) ontological priority of continuity; c) (relative) autonomy of each level of morphological organization, then that's already [a few words crossed out] a nice basis for agreement. But I am still "wandering without light", as Valéry put it, on the problem of the very definition of levels of organization. Put bluntly, here is the question: are there formal (morphological) criteria for distinguishing "living" morphology, or morphology resulting from the action of a living being, from morphology due solely to the action of the forces of inanimate nature?

Let us explain these ideas. Point a) is critical, all the more so as a certain fashion tends to pretend that determinism can be dispensed with. Yet this opens the door to all manner of fantasies, leaving open the possibility of the magical action of chance, at the whim of conceptual demand. In a series of very vehement interventions, Thom was particularly keen to avoid this trap, replacing the idea of chance with the simple conjunction of independent causal series. Introducing contingency rather than chance at the origin of the deterministic chain of causes made it possible to understand the apparent absence of a priori logic in the course of events. The fashion, still dominant today, of rejecting determinism obviously led to the dissemination of vague concepts that he abhorred. He concluded his letter of October 9, 1981, with this:

Let us finish with Prigogine. The fact that almost the entire scientific community has allowed itself to be fooled by this swindler speaks volumes about the state of (scientific) ignorance of the vast majority of scientists. There is no logical link between order and dissipativity:

Order    Dissipativity
  +           −          Crystal (homogeneous temperature)
  +           +          Crystal (with temperature gradient)
  +           +          Bénard convection (“dissipative” structure)
  −           +          Hydrodynamic turbulence
  −           −          Hamiltonian systems of “Anosov type” (hard-molecule gases)

And yet one believes that there is a thermodynamics of the irreversible that would explain living matter...

There, we have a strong point of agreement. It is a great pity, then, that René Thom, like the author of these lines, was unaware that Rolf Landauer, a renowned physicist whom we will meet later, had also established the inanity of Prigogine's work in a series of papers summarized in a very thorough article in the Annals of the New York Academy of Sciences, “The role of fluctuations in multistable systems and in the transition to multistability” [Ann. N.Y. Acad. Sci. 316: 433-452 (1979)], dedicated to the author of these lines.

As he spoke at length on this subject (“Stop Chance! Silence Noise!”), we will not develop René Thom's thought on this theme any further. Determinism should not be made to say more than it means, nor should we fear it in the name of a very primitive idea of what freedom is. We will simply stress here that what matters, when we appeal to determinism, is not to confine ourselves to 18th-century mechanics, where what is determined is also predictable. It is this fairly archaic way of thinking that often leads to the rejection of determinism. In fact, not only does Lorenz's image of the butterfly show that the consequences of determinism are quite different, but above all — and this brings up a point that I would very much have liked to discuss with René Thom — there is a domain, based though it is on the discrete nature of integers, where the determinate is in essence unpredictable. This is illustrated by many recursive algorithms: the course of an algorithm of this type can be both completely deterministic and perfectly unpredictable. This point, highlighted by John Myhill as early as 1957, then by Douglas Hofstadter in Gödel, Escher, Bach: An Eternal Golden Braid in 1979, and emphasized in 1988 by Rolf Landauer's colleague at IBM, Charles Bennett, is one I discussed at length in The Delphic Boat; it brings us naturally to point b).
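By way of illustration (the example is mine, borrowed from Hofstadter's book rather than from Thom), consider the Q-sequence defined in Gödel, Escher, Bach: its rule is a one-line recursion, fully deterministic, yet no way is known to anticipate its values short of running the computation itself.

```python
# Hofstadter's Q-sequence (from "Gödel, Escher, Bach"): Q(1) = Q(2) = 1,
# then Q(n) = Q(n - Q(n-1)) + Q(n - Q(n-2)). The rule is completely
# deterministic, yet the sequence behaves erratically: there is no known
# shortcut to Q(n) other than running the recursion itself.
def q_sequence(n: int) -> list[int]:
    q = {1: 1, 2: 1}
    for i in range(3, n + 1):
        q[i] = q[i - q[i - 1]] + q[i - q[i - 2]]
    return [q[i] for i in range(1, n + 1)]

print(q_sequence(20))
# [1, 1, 2, 3, 3, 4, 5, 5, 6, 6, 6, 8, 8, 8, 10, 9, 10, 11, 11, 12]
```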

Contemporary molecular genetics is based on an algorithmic description of gene expression. Yet this description is eminently discontinuous, in contrast to Thom's emphasis on continuity. The question, then, is whether the concrete implementation of an algorithm in matter endowed with mass would not lead to the uncovering of constraints on reality whose characteristics would be inherently continuous. This would put us back on a common plane. In short, to the basic categories of reality: mass / energy / space / time, would it not be appropriate to add an additional category — why not continuous, or linked to a continuous foundation? This question, which I owe to René Thom's reflections on “information”, has fascinated me for all these years, and it is a few elements of an answer that I would like to offer today, again with reference to his own words. To try and understand his point of view, it is interesting to quote another passage from this same letter, where the insistence on a “vitalist” vision of the phenomenon of life appears once again, at least as a heuristic. Thom proposes the existence of entities in reality that behave like “fields” that can be formally manipulated without having to go into the details of the objects involved:

And that a mode of explanation based on "vitalist" interpretation could be faster than the patient and tortuous deciphering of the biochemical or biophysical basis of these "fields".
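To make concrete what the “algorithmic description of gene expression” invoked above amounts to, and why it is eminently discontinuous, a minimal sketch follows. It reads a DNA string codon by codon against a deliberately tiny fragment of the standard genetic code; it illustrates the discrete, rule-governed character of the description, not any particular mechanism.

```python
# Gene expression read as string rewriting over a finite alphabet.
CODON_TABLE = {  # small fragment of the standard genetic code
    "ATG": "M", "TTT": "F", "GGC": "G", "AAA": "K", "TAA": "*",  # * = stop
}

def translate(dna: str) -> str:
    protein = []
    for i in range(0, len(dna) - 2, 3):          # reading frame, codon by codon
        aa = CODON_TABLE.get(dna[i:i + 3], "?")  # '?' = codon not in fragment
        if aa == "*":
            break                                # stop codon halts the rewriting
        protein.append(aa)
    return "".join(protein)

print(translate("ATGTTTGGCAAATAA"))  # -> "MFGK": discrete and rule-governed
```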

But before illustrating what a solution might look like, we must still consider point c). This requires a very elaborate response, which is linked to the status of experiment in biology, and which I can only sketch out in what follows. Indeed, René Thom demands not only recognition of the autonomy of levels of organization, but also the generativity of any worthwhile approach, as he points out in this letter:

The lack of generativity of formalizations in Biology is one of the major theoretical obstacles that the theory of catastrophes - elementary or otherwise - has not managed to overcome. All the more reason to look!

whereas earlier (on April 3, 1981) he had stressed that it is not necessary to know the details of material entities (implicitly, entities “endowed with mass”, as the objects making up the cell are for molecular biology; we all stumble over the omnipresent confusion between “mass” and “matter”):

But the legitimate possibility of abstractly using theoretical entities whose material basis we do not know seems to me a methodological imperative for which I will fight to the bitter end.

It seems to me that there is an opening for answering these questions, if we introduce into our descriptions of the world a fifth currency of reality, the category most often referred to as “information”. We know that in Physics this is a well-accepted concept. However, although the word is omnipresent in biological discourse, it is curiously vague there and carries no deep conceptual implication. The only mathematical definition in which the term is commonly used is Shannon's, and it is a discontinuous description, often combined with a probabilistic description that gives it a flavor of continuity. However, the context in which this term is used is not really information in the deepest sense, as René Thom would have wished, but communication: how can a message, understood as a sequence of symbols, be transmitted without error, without concern for its meaning? This is undoubtedly one of the reasons why, as far as I know, Thom only sketched out what could be a “theory of information” in a general sense, while at the same time questioning the epistemological nature of this concept. It goes without saying that Shannon's vision is incredibly poor and cannot be the information that inhabits the physical reality of the world. This is particularly true of Biology, where signification, meaning, plays a central role. Somewhere along the line, its contextualization needs to be made explicit, which I believe would be an opportunity to bring out one of the “fields” summoned by Thom.
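To see how blind this measure is to meaning, here is a minimal sketch (my illustration): Shannon's formula assigns exactly the same value to any two sequences with the same symbol statistics, whatever those sequences might signify in a cell.

```python
import math
from collections import Counter

def shannon_entropy(sequence: str) -> float:
    """Shannon entropy (bits per symbol) of a sequence of symbols."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Two sequences with identical symbol statistics get the same score,
# whatever they might "mean" in a cellular context.
print(shannon_entropy("ATGCATGCATGC"))  # 2.0 bits per symbol
print(shannon_entropy("AAATTTGGGCCC"))  # also 2.0 bits per symbol
```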

It is clear that the meaning of a DNA molecule is defined by the context in which it is placed. A striking experimental demonstration of this fact was given in Japan when the complete genome of a blue-green alga was implanted into the hay bacillus: when the host multiplies, we observe nothing other than the creation of a replica of this foreign genome, with no particularity other than a certain slowdown in the host's growth. However, when placed in the parent alga, this same genome will obviously give the cell all its recognized properties — including morphology — and define it as a microbial species; specifically, in this case, its ability to fix carbon dioxide in the presence of light. This remarkable experiment demonstrates that there is indeed a minimal amount of information in DNA placed in a living context: that which leads to its exact copy, its replication. This corresponds to information à la Shannon; but what, then, of the information that directs the synthesis of a blue-green alga and enables it to react to the Sun?

Rolf Landauer championed the idea, accepted by many but not all, that “information is physical”. For him, it defines a genuine currency of reality, even if we do not yet know what that means in detail, and it is something much wider than Shannon's “information”. It seems to me that following the developments in Landauer's thinking is an avenue that would undoubtedly have inspired René Thom. It is a real pity that these two currents of thought, developed in parallel, did not cross paths before the deaths of the two men (in 1999 and 2002). This vision de facto imposes information as a fifth category of reality. Accepting this opens the way for Biology to enter the realm of Physics, via a concept that had hitherto been purely metaphorical. If information is a genuine component of life, understanding the details of its management within the cell becomes critical.

Rolf Landauer is famous for what is often referred to as the “Landauer principle”. Published in 1961, this principle is still largely ignored, despite the various details already discussed by Charles Bennett — one of the fathers of quantum cryptography — in 1988. We have here an example of the same type as the very long eclipse of Mendel's work. The principle rests on two demonstrations: on the one hand, a computation can be performed reversibly, therefore without consuming energy; on the other hand, energy is indeed dissipated somewhere, but what costs energy is the erasure of the memory that had to be used to perform the computation. We can already see that this principle involves the coupling of two dynamics: a slow dynamic, which can create information, and a fast dynamic, which makes the result of this creation irreversible. That this is indeed a physical reality can be seen in its implementation in “adiabatic computing”, carried out for example by a family of microprocessors, but also in many recent experiments where a certain amount of information has been transmuted into energy. This demonstration is remarkable because it runs counter to a widely shared intuition, which would have us believe that it is the process of creating information that is energy-intensive. Do we not see here the introduction of what could be described as a catastrophe?
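The orders of magnitude are worth fixing before we continue. The bound computed below is Landauer's k_B T ln 2 per erased bit; the helper function and the temperatures chosen are merely illustrative.

```python
import math

# Landauer bound: erasing one bit dissipates at least k_B * T * ln(2).
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_limit(temperature_kelvin: float) -> float:
    """Minimum energy (joules) dissipated per bit erased at temperature T."""
    return K_B * temperature_kelvin * math.log(2)

print(landauer_limit(300.0))  # ~2.87e-21 J per bit at room temperature
print(landauer_limit(310.0))  # ~2.97e-21 J per bit at physiological temperature
```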

It is in the essence of catastrophe theory to presuppose two time scales: slow dynamics (relating to external or control variables) and fast dynamics (relating to internal or state variables). A Klein-Gordon soliton-type phenomenon does not fit into the catastrophic scheme (whereas a forest fire does). Catastrophic schemes are therefore particularly well-suited to the description of articulations between two levels of description: a fine level, where fast dynamics prevail, and a coarse level, where slow dynamics prevail.
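A minimal numerical sketch of the cusp, the simplest elementary catastrophe, shows this articulation concretely (the control values chosen are arbitrary): the fast variable x sits in a well of the potential, and a slow drift of the controls can make that well vanish, forcing a discontinuous jump.

```python
import numpy as np

# Cusp potential V(x) = x**4/4 + a*x**2/2 + b*x: the fast (state)
# variable x relaxes to a minimum of V, while the slow (control)
# variables a and b drift. Equilibria satisfy dV/dx = x**3 + a*x + b = 0.
def equilibria(a: float, b: float) -> np.ndarray:
    roots = np.roots([1.0, 0.0, a, b])
    return np.sort(roots[np.abs(roots.imag) < 1e-9].real)

a = -3.0  # inside the cusp region the potential has two wells
for b in (-3.0, 0.0, 3.0):
    print(b, equilibria(a, b))
# b = -3.0: one equilibrium; b = 0.0: three (bistability); b = 3.0: one again.
# Sweeping b slowly across the fold (here at |b| = 2) makes the occupied
# state vanish and the system jump: an elementary catastrophe.
```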

In order to follow René Thom's imperative to identify a general principle that would enable us to understand the “animation” of biological chemistry, it seems necessary to search, among all biological functions, for at least one general family that would answer the question. The functions in question must also illustrate the dynamics described above. To do this, we can first ask ourselves what concrete objects of biology have an existence that needs to be explained; then we can generalize. Thom, inspired in his childhood by the very concrete switches of the railways, might have accepted this approach.

Let us start by identifying certain properties that affect the very fact of living. Aging is an obvious one. Material things (those with mass) wither over time. Gradually, the cell mixes old entities with their younger counterparts, and loses its exquisite capacities. If a particular assemblage has a well-defined structure (associating its elements through specific relationships), this structure withers and disappears. To avoid this fate, a specific process must be able to distinguish between relevant and irrelevant entities, and discard what is dysfunctional. Thus, the main characteristic of the functions to be discovered is that they must be able to discriminate between classes of objects.

The ability to discriminate is at the root of what we generally call “decision”. We still do not quite understand why the standard physical laws that apply to Biology give rise, in this region of reality, to entities that appear to be “animate”. Of course, a fire or a river moves; each has a kind of animation, but in the case of life this animation seems to have a purpose. Life can “decide” to direct its movements, or to separate classes of objects. But is this not just the beginning of a typically “vitalist” behavior? To avoid being inundated with non-functional entities, the cell must be able to discriminate constantly between what is young and what is old, and also what has been affected by chemical accidents, for example. And the scope of what needs to be discriminated is much wider. More subtly, the growth and subsequent division of cells involves not only morphological changes but above all, as the devil is in the detail, the correctly distributed localization of their components. For the long polymers that make up the heart of the cell, this also presupposes a shaping that involves functional folding. Of course, the identity of the cell — how it differs from its environment, and from other cells in particular — is also a crucial property to explain. This is a central conceptual question, summed up by the need for living phenomena to constitute classes and to act in such a way as to be able to discriminate between the entities — endowed with mass, form or position, or even more abstract — that make them up. It is worth noting here that this very process of forming classes is not unrelated to the importance of the typology of catastrophes, for example. And this is the direction in which we should be looking:

I no longer disagree with what you say in your last letter; I am willing to believe in the existence of a constraint due to a hierarchical structure of metabolism as a whole. But I believe that this structure - if it is "molecularly" realized - also has a continuous formal origin, linked to the "a priori" opposites of regulation. Alongside the constraint of spatial localization, there is also the constraint of chemical kinetics due to its functional efficiency and regulation.

It was this very research program that motivated my exploration of an authentic link between information and the biological objects of metabolism. This led me to identify agents that behave like Maxwell's demons and which, by discriminating between classes of objects, generate precisely the “animation” factor that René Thom was looking for! More precisely, in the smallest genomes at least one tenth of the genes encode functions of this type; because they involve hitherto unknown behavior, these functions have long remained “unknowns”. Thom probably would not have liked the idea of “agent”, but abstract principles have to be embodied in hard matter. And what is important here is the generalization made possible by the abstract conception of their role. How do these agents behave? The function in question must, if the classes are not ambiguous, implement a process that avoids the mistake of putting together objects that need to be distinguished. It is not a question of recognizing them, but of not mixing them up, so that their subsequent destinies differ. The aim of discrimination is to divide entities into a sequence of different space-time events (different destinies). The function of discrimination therefore does not stop at a given element, but always involves several elements that are different from one another. This is the essential difference from the enzyme-specific recognition/identification hated by René Thom. Something will be the object of a measurement, but the discriminating agent will associate this measurement with a particular action: the construction of classes of entities destined to follow subsequent destinies of a different nature.

Since we are talking about classes rather than isolated individuals, the entities in question can accommodate a certain variation in their characteristics. They generally have no reason to have a specific, well-defined value, but rather belong to a space of values: we see here a process of generalization of the type desired by Thom. This means that the discrimination algorithm is very different from the recognition algorithm. The characters involved are not independent of each other, and their combination has no reason to be additive. Typically, if such-and-such a character has such-and-such a value (within particular limits), then the presence of such-and-such another character is expected within limits that will depend on the value of the first character (this is Markovian, with all possible generalizations). Furthermore, the process tolerates a certain level of contradiction. If a majority of characters correspond to the class, it is possible to tolerate the presence of characters that do not satisfy the majority rule. Discrimination is a purely informational property, requiring the memorization of a character in order to classify objects that possess that character and distinguish them from those that do not. This is exactly what a Maxwell demon does.
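A toy sketch may help fix ideas (the character names, bounds and threshold are illustrative assumptions, and the Markovian coupling between characters is omitted for brevity): membership in a class is granted when a majority of measured characters fall within class-dependent bounds, a minority of contradictions being tolerated.

```python
# A toy discriminator in the spirit described above: an object belongs
# to a class when a majority of its measured characters fall within
# class-dependent bounds; contradictory minority characters are tolerated.
def belongs_to_class(characters: dict[str, float],
                     bounds: dict[str, tuple[float, float]],
                     majority: float = 0.5) -> bool:
    hits = sum(lo <= characters[name] <= hi
               for name, (lo, hi) in bounds.items()
               if name in characters)
    return hits > majority * len(bounds)

# Example: two of three characters match the class; membership is granted.
protein = {"age": 0.2, "misfolding": 0.1, "oxidation": 0.9}
young_class = {"age": (0.0, 0.5), "misfolding": (0.0, 0.3), "oxidation": (0.0, 0.4)}
print(belongs_to_class(protein, young_class))  # True: 2 of 3 characters agree
```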

However, the discrimination process is inseparable from the mechanism by which it is implemented in time and space. We need to study it, and then describe it in detail. In the course of its implementation, the discriminating entity will make a succession of choices according to what it is measuring, in relation to a spatio-temporal memory which, unlike recognition, has no reason to be static (since it concerns classes, i.e. objects presenting a subset of properties belonging to a larger set). The nature and origin of this memory will have to be analyzed in depth. Typically, this is a generally large, but finite, set of objects whose very existence results from the process of evolution by natural selection.

In reality — after all, we need to understand the cell — the process of discrimination can affect the discriminated object as it unfolds, since this object is defined as belonging to a class and therefore does not present a strictly unique set of characteristics. This is where a mechanical dimension of Physics can come into play (the discrimination process can “deform” the object during the interaction that decides whether it belongs to a class), which explains the difficulty we have in distinguishing energy dissipation linked to the discrimination process (discussed later) from that linked to mechanics (which deforms the object, for example). Defining a discrimination process therefore implies describing a sequence of events involving a dynamic series of interactions. It is therefore reasonable to speak of a discriminating “agent”. However, this makes our analysis of the physico-chemical processes involved difficult because, once again, it can be coupled to mechanical characteristics, involving mechanical forces. The consequence of this remark is that it will sometimes be difficult in experiments to distinguish the energy involved in information manipulation from that involved in mechanical actions.

This explains why we have summoned the idea of Maxwell's demon, which was first characterized by a mechanical description (movement of a separating trapdoor between two compartments). This way of looking at things also explicitly implies the implementation of a quantity of energy that will be used by the agent to distinguish the elements it classifies, and which then enters into a specific destiny for each class. Again, this is a very different process from recognition, which is purely passive, acting as a “gateway” to a destiny independent of the process itself (transport, catalysis, start of regulation...), and, for René Thom, of no conceptual interest. In the biological process of discrimination, we therefore expect a source of energy to be involved, the use of which follows a succession of stages.

Let us imagine the case of two classes. The discriminating agent is pre-loaded with an energy source, “ready to fire”, but with a safety catch on. It encounters an unknown object and must decide to which class it belongs. 1/ First case: a series of interactions with the object leaves it unchanged, bringing the process to an end and leaving the discriminating agent loaded, with its energy source and safety catch unchanged. 2/ Second case: a first series of interactions modifies the discriminating agent-object pair, leading to a second series of interactions. The safety catch is lifted. The object is then treated as an element of a class different from that of the first case, and is directed towards a particular metabolic, spatial or temporal path which will decide its future. After this stage, the discriminating agent must return to its basic state, dissipating the energy with which it was equipped and then reloading a fresh energy source while resetting its safety catch.
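The scheme just described lends itself to a small state-machine sketch (the states, the criterion and the destinies are illustrative assumptions, not a description of any characterized agent):

```python
from enum import Enum, auto

class State(Enum):
    LOADED = auto()     # energy source attached, safety catch on
    TRIGGERED = auto()  # catch lifted after a class-2 series of interactions
    SPENT = auto()      # energy dissipated; must reload before reuse

class DiscriminatingAgent:
    """Sketch of the two-class scheme above; `is_class_two` stands in for
    the series of interactions that modifies the agent-object pair."""
    def __init__(self) -> None:
        self.state = State.LOADED

    def encounter(self, obj, is_class_two) -> str:
        assert self.state == State.LOADED, "agent must be loaded and armed"
        if not is_class_two(obj):
            return "class 1: object and agent left unchanged"  # first case
        self.state = State.TRIGGERED      # second case: safety catch lifted
        self._reset()                     # fast dynamics: dissipate, reload
        return "class 2: object redirected to a distinct destiny"

    def _reset(self) -> None:
        self.state = State.SPENT          # irreversible energy dissipation
        self.state = State.LOADED         # fresh energy source, catch reset

agent = DiscriminatingAgent()
print(agent.encounter(0.9, lambda x: x > 0.5))  # class 2
print(agent.encounter(0.1, lambda x: x > 0.5))  # class 1
```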

All this involves a number of steps well observed in characterized biological agents (typically: activation of an energy-rich bond, breaking of the bond into at least two elements, ejection of at least one of the elements, then resetting, with ejection of what remains and replacement by a new energy source). An illustration of this way of using energy to manipulate information is the sorting process that allows only young proteins to occupy the bud of a new cell in yeast, while aged or damaged proteins end up in the mother cell. The dynamics that lead to the identification of the entities of a class are slow, whereas the dissipation of energy that resets the agent is rapid. Would this not have inspired René Thom?

There is one final point I would have liked to submit to his sagacity: understanding the growth of cells (rather than of the macroscopic embryos that were dear to his heart) brings to light a real question of morphology. Amusingly, it was a kind of anecdote that opened up the corresponding general question. Here it is. The most abstract principles have to be embodied in the physical reality of the world. This requires the involvement of objects whose nature is contingent, but which take the place necessary for the concrete realization of a general principle. The process of discrimination that enables the construction of classes dissipates energy. Up to this point, the description remains highly abstract. Yet the objects that carry the energy loaded onto the discriminating agents must be authentic chemical compounds acting as energy stores, and these compounds are always the same, whatever the discrimination function. However, experiments have revealed a notable and totally unexpected exception: one of the devices storing the energy to be dissipated when the cell must organize its chromosomes during growth and distribute them among its daughter cells does include the classic energy store, but its chemical nature differs from the omnipresent, usual ones.

Why on earth this particularity? Here, of course, we have to go into detail to understand, and going into detail would have horrified René Thom. Yet it reveals a general question that I would have loved to put to him, a question involving the geometry of Euclidean space, which constrains us all. One of the key properties of living organisms is the fact that cells grow before giving rise to new cells of the same type. A cell is a three-dimensional entity. It grows by combining the input and output of elementary building blocks. In a steady state of growth, the construction and combination of these blocks take place in three dimensions, in the machinery of the cytoplasm, which expands as they do. But this leads to physical constraints that evolution has had to take into account to harmonize this growth. In particular, the cell membrane is two-dimensional, and the cell's genome, more serious still, is a thread, and therefore one-dimensional. There is “too much” of the building blocks needed to construct a membrane, and even more so a genome. There is thus a paradox between the three-dimensional synthesis of precursors and the non-homothetic growth of the membranes and the genome; the latter, as we know, tends to occupy the volume of the cell in much the same way as a Peano curve. Understanding the solutions found in the course of evolution could have appealed to René Thom.
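A minimal scaling sketch makes the mismatch explicit (pure Euclidean geometry, with no data from any particular organism): under shape-preserving growth, a doubling of the cytoplasm's volume is matched by a much smaller relative growth of a surface, and smaller still of a thread.

```python
# Homothetic (shape-preserving) growth: if the cytoplasm's volume is
# multiplied by `volume_factor`, a surface grows only as the 2/3 power
# and a thread as the 1/3 power of that factor.
def homothetic_growth(volume_factor: float) -> dict[str, float]:
    linear = volume_factor ** (1.0 / 3.0)  # linear scale factor
    return {
        "volume (3D, cytoplasm)": volume_factor,
        "surface (2D, membrane)": linear ** 2,
        "length (1D, genome)": linear,
    }

for name, factor in homothetic_growth(2.0).items():
    print(f"{name}: x{factor:.2f}")
# volume x2.00, surface x1.59, length x1.26: precursors synthesized in
# 3D necessarily outrun what a membrane or a genome can absorb.
```

I will leave him with the last word: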

Alongside the constraint of spatial localization, there is also the constraint of chemical kinetics linked to its functional and regulatory efficiency. Cuvier and Geoffroy Saint-Hilaire had already seen this very well in 1830, and I am delighted that contemporary biologists are rediscovering it. I have no problem with the fact that the hysteresis of molecular structures can determine "plastic" and relatively contingent details such as the stripes of zebras or the markings of panthers. It is a certain isomorphism between the dynamics of the whole organism and the dynamics of the cell which explains why certain global structures can be "coded" molecularly in the cell (DNA or nuclear proteins…).