SUMMARY OF AN EXPLORATION IN BIOLOGY
Lectures and courses (excerpts)
Creation of the HKU-Pasteur Research Centre Ltd, Hong Kong
Creation of AMAbiotics SAS, a biotechnology company
Curriculum vitae (excerpts)
From genome sequencing to synthetic biology: natural selection is an authentic principle of physics
The progeny of living organisms evolves. This implies that they either recruit or create novel information. Information is physical; it is a central currency of Reality. In 1961, Rolf Landauer established that computation can in principle be performed reversibly, with the consequence that the creation of information does not require energy. In contrast, resetting the machinery that creates information requires erasing the memory of past events, and this erasure is energy-costly. Charles Bennett, in 1988, gave flesh to this demonstration with a concrete example in arithmetic. As a contribution to the field of information theories, the work described here proposes an experimental setup that links these features of information with biological processes. Bennett illustrated reversible computation by showing how to build up a simple arithmetic operation, division. He stated that the outcome of the division is obtained when one erases the intermediary states, leaving the remainder of the division as the prominent "valuable" outcome of the computation. In this process, memory erasure consumes energy. Yet Bennett did not explore how the remainder of the division could be told apart from the remaining bits that needed to be erased. To make the choice, one needs some sort of complementary (contextual) information: we conjecture that this is essentially where the energy goes in. Energy is used to prevent degradation of the remainder of the division while erasing the remaining memory. This resets the memory to a state that can be used for further computation. We need to identify a concrete process telling us what should be retained and what should be erased. To this aim, we need to identify processes that would act as Maxwell's demons.
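Bennett's example can be sketched in code (our illustration, not his actual construction): division by repeated subtraction is performed while keeping every intermediate state, so that the computation remains reversible; obtaining a clean, reusable memory then requires erasing that history, and by Landauer's principle each erased bit costs at least kT ln 2 of free energy.

```python
# Sketch of reversible division: keep all intermediate states (reversible),
# then pay an energetic price, counted in erased bits, to reset the memory.

def reversible_divide(dividend, divisor):
    """Return (quotient, remainder, history); history makes the steps reversible."""
    history = []                       # intermediate states, kept so the
    remainder, quotient = dividend, 0  # computation can be run backwards
    while remainder >= divisor:
        history.append((remainder, quotient))  # record state before each step
        remainder -= divisor
        quotient += 1
    return quotient, remainder, history

def erase_history(history):
    """Discard intermediate states; by Landauer's principle each erased
    bit dissipates at least kT ln 2. Returns the number of erased bits."""
    erased_bits = sum(len(format(r, "b")) + len(format(q, "b"))
                      for r, q in history)
    history.clear()                    # memory reset, ready for reuse
    return erased_bits

q, r, h = reversible_divide(17, 5)
bits = erase_history(h)
print(q, r, bits)   # prints "3 2 16"
```

The point of the sketch is that the quotient and remainder (the "valuable" outcome) must be told apart from the history before erasure: that choice rests on contextual information, and, as conjectured above, it is essentially where the energy goes.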
This brief summary of the Landauer / Bennett theorem and the quest for Maxwell's demons led us to remember an observation we made when analysing "persistent" genes in genomes, i.e. genes that tend to be retained in most autonomous organisms. There are about 500 of them. The surprise was that this is twice as many as the number of genes deemed "essential", i.e. genes that cannot be omitted without immediate loss of viability. Among the extra genes were "metabolic patches", which can be accounted for much as one accounts for "patches" in algorithms, and a set of several tens of genes coding for degradation systems which, somehow, require energy to be functional. This was very puzzling as, usually, in biological systems, degradation produces energy (these reactions are exothermic and are used to fuel life processes) rather than consuming it!
Taking both observations together, we conjecture that energy is used not to degrade, but to prevent degradation of "valuable" entities. This would solve Bennett's riddle. In living organisms, it also provides a remarkable way to cope with the problem of ageing. The degradation systems destroy aged entities, making room to replace them with young ones. How is this performed? The genetic program is used, and it is expressed (through the genetic code, for example) into the original entities, provided it has not mutated during ageing. Moreover, if, for whatever reason, another source of the program has entered the cell in the meantime, one that codes for an entity able to play the role of the original one, it will not be destroyed but retained. Hence the system works as a context-dependent information trap…
Preliminary: Whole genome sequencing and annotation, in vivo and in silico genome analysis, discovery of paralogous metabolism
In 1986, we decided to explore the possibility of sequencing a whole bacterial genome to try and understand the basic principles of its construction. The idea was to explore the coupling between the coordination of gene expression and the physical organisation of the genome. After a complex set of political events, impossible to summarise here (see Why sequence genomes? The Escherichia coli imbroglio and The Delphic Boat), we were eventually involved in the sequencing of a large segment of the Bacillus subtilis genome and, together with the late Frank Kunst, in the scientific co-ordination of genome sequencing for this organism. This led us to try and organise bioinformatics in France with the help of several colleagues at universities and national research agencies, through the creation of a nation-wide group, GDR 1029 (1991-1995), subsequently through the coordination of the bioinformatics programme of the Groupement de Recherche et d'Etudes des Genomes (1992-1996, headed by Piotr Slonimski), and then at the Comité de Coordination des Sciences du Vivant (1998-2000). As director of the Department Genomes and Genetics at the Institut Pasteur until June 2009, we brought the project to a close by re-sequencing and re-annotating afresh the sequence of the reference genome of B. subtilis, as a tribute to the whole international community working on this model organism. This endeavour was renewed in 2013, and we hope to be able to update the annotation on a yearly basis.
The Delphic Boat or, what do genomes tell us
In his Lives of Illustrious Men, Plutarch described the return of Theseus—whose relationship with the temple of Apollo at Delphi is well known, hence my Delphic Boat—from Crete to Athens, and the fate of his ship made by the Athenians. To keep the ship operational the Athenians had to replace the rotting boards with new boards. And philosophers subsequently used this example to discuss permanence and change, some claiming that the ship was no longer the same, while others said the opposite: "The ship wherein Theseus and the youth of Athens returned had thirty oars, and was preserved by the Athenians down even to the time of Demetrius Phalereus, for they took away the old planks as they decayed, putting in new and stronger timber in their place, insomuch that this ship became a standing example among the philosophers, for the logical question of things that grow; one side holding that the ship remained the same, and the other contending that it was not the same." (translated by John Dryden).
Following the trend shaped by this deep question, the study of life should never be restricted to the study of objects, but must study their relationships. This is why genomes can certainly not be considered simply as collections of genes. They are much more. How can we gain access to this information? Considering the current flow of genome sequences being published, two contrasting images emerge: at first sight, genes appear to be distributed randomly along the chromosome; in contrast, their organisation into operons (or pathogenicity islands) suggests that, at least locally, related functions are in physical proximity. In order to understand genome organisation, we must therefore explore the distribution of genes along the chromosome, but we should do this by generalising the concept of neighbourhood to many more types of vicinity than the mere succession of genes in the genomic text.
The first observations of our laboratories (Regulation of Gene Expression and Genetics of Bacterial Genomes) suggested that this order was far from random, but was linked to the function of genes, in relation with the cell's architecture. These results were fragmentary, so they needed to be experimentally validated, combining in silico analysis of the genome (bioinformatics) of model organisms, such as Escherichia coli or Bacillus subtilis, with their study in vivo (reverse genetics and physiological biochemistry, in particular using transcription expression profiling and two-dimensional protein electrophoresis), and comparative studies with other genomes, together with biochemical and structural analyses. If indeed the map of the cell is in the chromosome, this called for some physical principle linking the succession of the genes - a symbolic text, carrying information - and the cell's architecture - concrete matter. As we have no need for a divine principle, this should be the consequence of a simple physical principle. The winning trio of Darwinian natural selection (variation / selection / amplification) shows that evolution creates functions, and that functions "capture" (recruit) structures (acquisitive evolution), so that structural analysis only becomes important once functions are understood.
The simplest way to evolve is to follow the arrow of time, increasing the overall entropy of the system. In water, this is indeed the driving force for the construction of many a biological structure: it is at the root of the universal formation of helices, it allows the folding of proteins and the formation of viral capsids. And it should not escape our attention that the largest increase in entropy of a molecular complex in water occurs when the surface / volume ratio is highest: when a planar structure forms, it orders the water molecules on both its faces. As a consequence, if this plane meets another one, it will lose one layer of water molecules and stick there. Formation of planar layers should therefore be a very strong organising principle. Is it possible to find out, just from the genomic text, whether a gene product will form such layers, or whether it simply forms hexagons, for example? This is even more unlikely than an amino-acid sequence telling us exactly the fold of a protein without knowledge of pre-existing folds: pancreatic RNase does indeed fold on its own, because selection isolated it for that property (it is secreted in bile salts), but this would never be accepted as the paradigm of protein folding.
However, in silico analysis allows us to organise knowledge, and this might be a way to proceed in the future. To generate new knowledge, why not explore the neighbourhoods of biological objects, considering genes as starting points and stressing that each object exists in relation to other objects? Inductive exploration will consist in finding all neighbours of each given gene. "Neighbour" has here the largest possible meaning; it is not simply a geometrical or structural notion. Each neighbourhood is meant to shed specific light on a gene, its function being sought as that which brings together the objects of the neighbourhood. A natural neighbourhood is proximity on the chromosome: operons or pathogenicity islands show that genes that are neighbours of each other can be functionally related. Another interesting neighbourhood is similarity between genes or gene products. The isoelectric point often gives a first idea of a gene product's compartmentalisation. Also, a gene may have been studied by scientists in laboratories all over the world, and it can display features that refer to other genes: its neighbours will be the genes found together with it in the literature. Finally, there exist more complex neighbourhoods, the study of which gives particularly revealing results: two genes may be neighbours because they use the genetic code in the same way. One can also study all genes that belong to the same neighbourhood in the cloud of points describing codon usage of all the genes of the organism.
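As a minimal sketch of this last kind of neighbourhood (the gene names and sequences below are toy examples, not real data), each gene can be mapped to a point in the 64-dimensional space of codon frequencies; its neighbours are then the genes closest to it in that space.

```python
# Codon-usage neighbourhood: genes become points in a 64-dimensional
# space of codon frequencies; neighbours are nearby points.
from collections import Counter
from math import dist

ALL_CODONS = [a + b + c for a in "ACGT" for b in "ACGT" for c in "ACGT"]

def codon_usage(seq):
    """Frequency vector over the 64 codons for a coding sequence."""
    codons = [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]
    counts = Counter(codons)
    total = sum(counts.values())
    return [counts[c] / total for c in ALL_CODONS]

def neighbours(genes, name, k=2):
    """The k genes whose codon usage lies closest to that of `name`."""
    ref = codon_usage(genes[name])
    ranked = sorted((g for g in genes if g != name),
                    key=lambda g: dist(ref, codon_usage(genes[g])))
    return ranked[:k]
```

On real genomes one would rather analyse the full cloud of codon-usage points (for instance by correspondence analysis); the Euclidean distance used here is only the simplest possible choice.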
From the methodological standpoint this requires the construction of neighbourhood tables (conveniently made available to scientists in databases: a field of choice for bioinformatics). Finally, systematic investigation of history will identify literature neighbourhoods, using not only titles and abstracts but the whole content of articles: "in biblio" analysis is an essential component of inductive reasoning. We do not possess heuristics permitting direct access to unknown functions, and apart from preliminary studies there do not exist many places where such in silico work is developed. There exists however an excellent illustration of the concept of neighbourhood, the software Entrez, created by David Lipman and colleagues at the NCBI.
All this has some flavour of a once fashionable field, Artificial Intelligence, a highly contentious but fascinating domain! This should also make clear to us that in silico analysis will never replace validation in vivo and in vitro: let us hope that propagation of erroneous assignments of functions by automatic interpretation of the genomic texts will not hinder discoveries. Knowing genome sequences is a marvelous feat, but it is the starting point, not the end.
Before extracting some of the contributions to general knowledge from work developed over several decades, let us recall the last word of The Delphic Boat, to try and prevent misunderstandings. We are well aware that, in contrast to Art, Science should have no names.
The central question that we explored is the following. Is it possible to uncover rules that would account for the fact that genes function as a whole in the cell and contribute to its consistent and reproducible development? When one tries to isolate some of the important trends of this research, one produces a picture that culminates in what can be considered as "symplectic biology", a biology where the relationships between objects are of more conceptual importance than the objects themselves. As this becomes understood, the idea gains ground that it will be possible to reconstruct life, and even to construct material objects endowed with living properties, based on building blocks that differ from those existing in present-day living organisms. Synthetic Biology is no longer a dream; it is in the process of becoming a novel achievement.
To answer this very general question, a genetic selection and screening procedure in the model bacterium Escherichia coli was designed to isolate mutants that would orient our future experiments along a rewarding track. The idea was to explore whether some signals which appear to us as redundant (i.e. look somewhat "useless" to the unprepared human mind) in macromolecular syntheses could be separated (i.e. by selecting mutants that would grow with only one active signal instead of several). The idea was that there exists some "secondary punctuation" in the expression of the genetic message allowing coupling between macromolecular syntheses and the bulk metabolism of the cell. Emphasis on this linguistic analogy came from our contribution to the reflection on the role of selective processes at the root of memory and learning. The study of the process of initiation of translation, which, in Bacteria, associates two independent signals (a metabolic signal which labels the first methionine of the nascent polypeptide with a one-carbon residue, and the structure of a special transfer RNA), led, through genetic experiments, to the discovery of a ubiquitous anomaly in metabolism, coupling replication, transcription, translation and cell division. The mutants affected in this process were analysed in succession. They involved transcription termination, translation initiation, the "stringent" coupling between these processes, one-carbon metabolism, the synthesis of cyclic AMP, a protein long proposed to be a bacterial histone, H-NS, and the biosynthesis pathway of branched-chain amino-acids.
This apparently haphazard list, derived from the outcome of genetic experiments, accounts for the threads followed, one by one, in an attempt to unravel this complicated network of interactions, finally understood in January 2006 with the role of the amino acid serine (this common amino acid is toxic in excess because of at least two processes: production of hydroxypyruvate, which makes dead-end products with thiamine, and of aminoacrylate / iminopropionate when it enters pathways such as cysteine and tryptophan biosynthesis). By the mid-1980s the time was ripe to explore this same question not through the study of individual genes, but rather through a global study of genes based on the knowledge of complete genome texts. This required the introduction of a large component of computer science, and experiments "in silico" were proposed to complement in vivo or in vitro experiments (this term was used for the first time in 1988-1989, in discussions with the European Commission, meant to justify the setting up of genome projects). The question then became a simple conjecture, based on a former reflection of von Neumann about Turing machines: is there a link between the architecture of the cell and that of the genome? Work from the Unit Genetics of Bacterial Genomes showed that genes are indeed not randomly distributed in genomes. Whether this indicates a link with the architecture of the cell remains, of course, an open question.
• Discovery of toxic adenylate cyclases (whooping cough and anthrax), discovery and molecular characterisation of four independent classes of adenylate cyclases (evolutionary convergence), creation of the international classification of adenylyl cyclases, 1988-1998
The involvement of cyclic AMP in the "serine effect" (wild-type strains are sensitive to serine, but cya and crp mutants are more resistant) led to a thorough study, in terms of both genetics and biochemistry, of adenylate cyclases. After having been the first laboratory to isolate and characterise in full the gene of an adenylate cyclase (that of Escherichia coli), the work was extended to the identification of adenylate cyclase toxins present in the etiologic agents of whooping cough and anthrax. Using a multipartner cloning technique invented for this purpose, the ancestor of the technique now known as "two-hybrid" (patent EP0301954), the genes for the corresponding toxins were isolated and sequenced, the proteins were analysed biochemically, and the secretion process of the cyclases was characterised:
P Glaser, D Ladant, O Sezer, F Pichot, A Ullmann, A Danchin
P Glaser, H Sakamoto, J Bellalou, A Ullmann, A Danchin
A symmetrical approach was used to clone the cDNA of mammalian calmodulins, showing that the method (two-hybrid) is of wide applicability:
As early as 1988, this work raised a series of ethical problems (recently revived under the name of "bioterrorism") discussed in:
An overview of this first work on adenylate cyclases is summarised in:
This article established the international reference for the classification of adenylate cyclases. Initially, three classes of different phylogenetic descent (convergent evolution) were identified: Class I, cyclases from enterobacteria and related bacteria; Class II, secreted toxic cyclases; Class III, a "universal" class present in Bacteria and in Eukarya (including higher vertebrates). A fourth class, also of a completely different phylogenetic origin, was discovered several years later in the Unit:
O Sismeiro, P Trotot, F Biville, C Vivarès,
The "universal" class of cyclases (class III) clusters together adenylate and guanylyl cyclases, and an original selection procedure allows one to go from one type of specificity to the other (this was one of the very first experiments showing that it is possible to change the specificity of an enzyme for its substrate):
• Discovery of the unexpectedly large extent of horizontal gene transfer (HGT) in bacteria, 1991-1999
Genome studies implied the creation of a global in silico analysis of the genome texts. A first analysis of 800 genes from E. coli allowed their clustering into three major classes: core metabolism, genes expressed at a high level under rapid growth, and genes coming from outside…
This very early work of in silico genomics demonstrated for the first time that a large fraction (at least one sixth) of the genes of E. coli derive from horizontal gene transfer. It also showed that antimutator genes are likely to be propagated by horizontal gene transfer, suggesting that bacteria in the environment are often in a highly mutable state, which is fixed in a much more rigid (invariable) form when they meet a stable biotope. Another observation from this study is the clustering of HGT genes in relation to particular cell processes, suggesting that genomes are organised entities:
P Guerdoux-Jamet, A Hénaut, P Nitschké,
JL Risler, A Danchin
The fact that this observation is general would be demonstrated later on, in the case of Bacillus subtilis. The importance of HGT is so well accepted nowadays that it has become common knowledge in biology:
• Discovery of the massive presence of genes of unknown function in genomes, 1991; first sequencing and annotation of the genome of a Firmicute, 1997; multiple genome programmes and ongoing re-sequencing and re-annotation of B. subtilis
The setting up of the sequencing of the genome of Bacillus subtilis, the first project of this type launched for conceptual and not technological reasons, was publicly proposed at the beginning of 1987. This resulted, in parallel with the same result obtained by the consortium sequencing the genome of Saccharomyces cerevisiae, in the first significant discovery of genomics, namely that many genes were completely unknown, not only in their sequence but also in their function and in the structure of their product:
Glaser, F Kunst, M Arnaud, M-P Coudart, W Gonzales, M-F Hullo, M Ionescu,
B Lubochinsky, L Marcelino, I Moszer, E Presecan, M Santana, E Schneider,
J Schweizer, A Vertes, G Rapoport, A Danchin
This article shows, for the first time, that in a long DNA fragment sequenced in full, half of the genes did not look like anything known until then. This utterly unexpected result (the opponents of genome sequencing projects had "demonstrated" that we knew at least 95% of all possible gene classes, and published this demonstration in the most fashionable journals), presented together with a similar conclusion from the sequencing of the yeast's chromosome III at the first genomics symposium organised by the Commission of the European Communities in Elounda, Crete, in 1991, constituted the first major discovery obtained by genome projects.
Performed by a consortium associating Europe and Japan, the sequencing of the B. subtilis genome was completed in 1997, at the same time as that of E. coli. As early as 1995, the total length of continuous fragments from the organism was significantly larger than that of the genomes then sequenced by Craig Venter and his colleagues. This was not much noticed however: Science has now become an activity in the domain of show business and advertisement. This genome nevertheless remained for five years the only example from its group (the genomes of the Firmicutes are particularly difficult to sequence, because their DNA is usually toxic, for biochemical reasons well understood by the authors of this project, in the universal host used to construct DNA libraries, E. coli):
Kunst, N Ogasawara, I Moszer, AM Albertini, G Alloni, V Azevedo, MG
Bertero, P Bessières, A Bolotin, S Borchert, R Borriss, L Boursier,
A Brans, M Braun, SC Brignell, S Bron, S Brouillet, CV Bruschi, B Caldwell,
V Capuano, NM Carter, SK Choi, JJ Codani, IF Connerton, NJ Cummings,
RA Daniel, F Denizot, KM Devine, A Düsterhöft, SD Ehrlich,
PT Emmerson, KD Entian, J Errington, C Fabret, E Ferrari, D Foulger,
C Fritz, M Fujita, Y Fujita, S Fuma, A Galizzi, N Galleron, SY Ghim,
P Glaser, A Goffeau, EJ Golightly, G Grandi, G Guiseppi, BJ Guy, K Haga,
J Haiech, CR Harwood, A Hénaut, H Hilbert, S Holsappel, S Hosono,
MF Hullo, M Itaya, L Jones, B Joris, D Karamata, Y Kasahara, M Klaerr-Blanchard,
C Klein, Y Kobayashi, P Koetter, G Koningstein, S Krogh, M Kumano, K
Kurita, A Lapidus, S Lardinois, J Lauber, V Lazarevic, SM Lee, A Levine,
H Liu, S Masuda, C Mauël, C Médigue, N Medina, RP Mellado,
M Mizuno, D Moesti, S Nakai, M Noback, D Noone, M O'Reilly, K Ogawa,
A Ogiwara, B Oudega, SH Park, V Parro, TM Pohl, D Portetelle, S Porwollik,
AM Prescott, E Presecan, P Pujic, B Purnelle, G Rapoport, M Rey, S
Reynolds, M Rieger, C Rivolta, E Rocha, B Roche, M Rose, Y Sadaie,
T Sato, E Scalan, S Schleich, R Schroeter, F Scoffone, J Sekiguchi,
A Sekowska, SJ Seror, P Serror, BS Shin, B Soldo, A Sorokin, E Tacconi,
T Takagi, H Takahashi, K Takemaru, M Takeuchi, A Tamakoshi, T Tanaka,
P Terpstra, A Tognoni, V Tosato, S Uchiyama, M Vandenbol, F Vannier,
A Vassarotti, A Viari, R Wambutt, E Wedler, T Weitzenegger, P Winters,
A Wipat, H Yamamoto, K Yamane, K Yasumoto, K Yata, K Yoshida, HF Yoshikawa,
E Zumstein, H Yoshikawa, A Danchin
The length of this genome, 4 megabases, represented more than the total length of what The Institute for Genomic Research, TIGR, with its well-chosen name, had already sequenced. It was also, together with the genome of E. coli, which has a comparable length, the longest known DNA sequence to that date.
V Barbe, S Cruveiller,
F Kunst, P Lenoble, G Meurice, A Sekowska, D Vallenet, TZ Wang, I
Moszer, C Médigue, A Danchin
E Belda, A Sekowska, F Le Fèvre, A Morgat, D
Mornico, C Ouzounis, D Vallenet, C Médigue, A Danchin
The distribution of the corresponding sequence and annotations to the international community was displayed in the form of a specialised database with no exact counterpart until now:
Several genome projects followed: Leptospira interrogans and Staphylococcus epidermidis, in collaboration with the Shanghai Genome Center; Photorhabdus luminescens, at the Institut Pasteur; and, to try and understand the impact of temperature constraints on genomes, the genome of the Antarctic bacterium Pseudoalteromonas haloplanktis TAC125, in collaboration with the Genoscope and several universities around the world. Sequencing of the genome of Psychromonas ingrahamii followed, as a collaboration with Monica Riley and her colleagues. Within a few years, technological progress both in vitro and in silico had been so extraordinary that this last project required, in terms of workforce, one hundred times fewer person-years than that of B. subtilis. Environmental microbiology was also explored, via the sequencing of the genome of Herminiimonas arsenicoxydans.
E Duchaud, C Rusniok, L Frangeul, C Buchrieser,
A Givaudan, S Taourit, S Bocs, C Boursaux-Eude, M Chandler, JF Charles,
E Dassa, R Derose, S Derzelle, G Freyssinet, S Gaudriault, C Médigue,
A Lanois, K Powell, P Siguier, R Vincent, V Wingate, M Zouine, P Glaser,
N Boemare, A Danchin, F Kunst
Médigue, E Krin, G Pascal, V Barbe, A Bernsel, PN Bertin, F
Cheung, S Cruveiller, S D'Amico, A Duilio, G Fang, G Feller, C Ho,
S Mangenot, G Marino, J Nilsson, E Parrilli, EPC Rocha, Z Rouy, A Sekowska,
ML Tutino, D Vallenet, G von Heijne, A Danchin
The corresponding data (sequence and annotations) has been organised, together with the counterpart from genomes of bacteria interesting for medicine or environment, at the University of Hong Kong:
• The first laws of bacterial genome organisation, 1999-present
Can one uncover rules in genome organisation? Our effort discovered several laws: first, there is a universal bias in the composition of the genes present in the leading and the lagging strands of DNA; second, and this is quite remarkable, the essential genes (experimentally identified after the sequencing project of B. subtilis) are specifically coded on the leading DNA strand. Furthermore, the nature of DNA polymerase III has a role in the global organisation of the genome. Firmicutes, which have two such polymerases (DnaE and PolC), display a strong bias in gene distribution. Analysis of genes co-evolving with these polymerases shows that different bacterial clades have different origins. This has a consequence of considerable importance for the question of the origin of life, as it shows that there is no single ancestor, no LUCA, but a population of progenotes that fused and split repeatedly before giving birth to the species we know today.
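The leading/lagging strand bias just described is commonly visualised through the GC skew, (G - C)/(G + C), computed in sliding windows; what follows is a minimal sketch on a synthetic sequence (not real genome data, and not the actual analysis performed in this work).

```python
# Cumulative GC skew: the extrema of the curve typically mark the
# replication origin and terminus, where the leading strand switches.

def gc_skew(seq, window=1000):
    """Per-window GC skew (G - C)/(G + C) along the sequence."""
    skews = []
    for i in range(0, len(seq) - window + 1, window):
        w = seq[i:i + window]
        g, c = w.count("G"), w.count("C")
        skews.append((g - c) / (g + c) if g + c else 0.0)
    return skews

def cumulative(skews):
    """Running sum of the per-window skews."""
    total, out = 0.0, []
    for s in skews:
        total += s
        out.append(total)
    return out

# Synthetic "genome": G-rich first half, C-rich second half, so the
# strand bias switches (as at a replication terminus) at the midpoint.
genome = "GGAT" * 2500 + "CCAT" * 2500
curve = cumulative(gc_skew(genome))
switch_window = curve.index(max(curve))   # window where the bias flips
```

On a real chromosome the signal is noisy, and the curve's minimum and maximum are only approximate markers of the origin and terminus.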
Considering genomes as wholes, it had been known for more than a decade that there exists a 10-11.5 bp period in the nucleotide distribution, and this is true from prokaryotes to eukaryotes. This bias is present throughout a given genome, both in coding and non-coding sequences. Using a technique for the analysis of auto-correlations based on linear projection, the sequences responsible for the bias were identified. These ubiquitous patterns were termed "class A flexible patterns". Each pattern is composed of up to ten conserved nucleotides or dinucleotides distributed into a discontinuous motif. Each occurrence spans a region up to 50 bp in length. There is some limited fluctuation in the distances between the nucleotides composing each occurrence of a given pattern, suggesting that they are constrained by DNA supercoiling and/or bending. Taken together, these patterns cover up to half of the genome in the majority of prokaryotes. They generate the previously recognised 11 bp periodic bias. Judging from the structure of the patterns, it was suggested that they may define a dense network of protein interaction sites in chromosomes:
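A far simpler illustration than the linear-projection technique used in that work: the periodic bias can already be glimpsed by counting, for a chosen dinucleotide, how often it recurs at each distance along the sequence. The sketch below uses a synthetic sequence with an artificial 11 bp repeat, not genomic data.

```python
# Dinucleotide autocorrelation: for each lag d, count how often the
# chosen dinucleotide recurs exactly d positions downstream.

def dinucleotide_autocorrelation(seq, dinuc="AA", max_lag=50):
    positions = [i for i in range(len(seq) - 1) if seq[i:i + 2] == dinuc]
    pos_set = set(positions)
    return {d: sum((p + d) in pos_set for p in positions)
            for d in range(2, max_lag + 1)}

# Synthetic sequence: one AA at the start of each 11 bp unit.
unit = "AA" + "GCTCGTCGT"           # 11 bp, no other A anywhere
seq = unit * 200
corr = dinucleotide_autocorrelation(seq)
best_lag = max(corr, key=corr.get)  # lag with the strongest signal
```

On a real genome the peaks are weak and spread over the 10-11.5 bp range, which is why a more refined method than raw counting was needed to extract the underlying patterns.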
The corresponding constraints are visible in the amino-acid sequence of the proteins, suggesting that the sequence is more constrained by the genome organisation than by the protein function. These novel observations have considerable implications in terms of phylogenetic profiles when one analyses protein sequences:
This latter work characterises "orphan" proteins, which form approximately 10% of the genome of any new species. These proteins are characterised by their enrichment in aromatic amino-acids. This work proposes that many among them represent the "self" of the species, behaving as "gluons" which bring an extra contribution to the stability of multiprotein complexes in the cell. This would make an essential contribution to the functional stabilisation of complex intracellular structures. More generally, the approach thus defined allowed the investigators to define the essentiality of a gene in a real context, by measuring its persistence in many species, not only in sequence but also in its place in the genome:
In summary, it appears that bacterial genomes are highly organised entities, contrary to a widely spread idea of a random "fluidity" of genomes. What are the selective constraints that support this organisation?
A general analysis of the conservation of syntenies in a large number of complete bacterial genomes has shown that two classes of genes tend to stay together. The way the class of persistent genes remains grouped is reminiscent of a scenario for the origin of life. This is why the corresponding set has been named the paleome. In the same way, genes that are only rarely found across genomes form clusters that are easily transferred horizontally. The corresponding genes allow the bacteria to live in a specific niche. They are named, for this reason, the cenome (to indicate the fact that they are shared by a community living in a particular environment, and prone to be transferred):
CG Acevedo-Rocha, G Fang, M Schmidt, DW Ussery,
• Metabolism and meta-metabolism
The functional organisation of the genes in genomes must result from the selection pressure of simple physico-chemical principles. Besides physical causes such as the structure of water (the study of the genome of P. haloplanktis was meant to give access to some of those), gases and radicals, because they are highly diffusible, may play a major role in cellular compartmentalisation, and might be the cause of some of the organisation of the genes in genomes. Sulfur metabolism is particularly sensitive to gases and radicals, and it is therefore important to understand how it is organised. A first study demonstrated that sulfur-related genes are organised into islands:
and a detailed analysis, mainly developed during the creation of the HKU-Pasteur Research Centre in Hong Kong, permitted us to uncover the details of the "methionine salvage pathway":
A Sekowska, JY Coppée,
JP Le Caer, I Martin-Verstraete, A Danchin
The following work makes a synthesis of the catalytic activities involved in this ubiquitous cycle (it is also present in humans and plants), which has the interesting feature that it systematically recruited proteins of diverse structures to complete the cycle. One of these proteins is likely to be related to the ancestor of ribulose-bisphosphate carboxylase/oxygenase (RuBisCO), the most abundant enzyme on the planet (this opens fascinating questions on the origin of catalytic activities):
Ashida, A Danchin, A Yokota
This remarkable metabolic cycle has the surprising property, as shown in this work, of leading the cell, under particular conditions, to synthesise carbon monoxide. As this cycle exists in humans, this opens interesting perspectives about possible controls mediated by CO, a gas distinct from nitric oxide, in the immune system and in the nervous system.
• Selective stabilisation and epigenesis
This work explored the role of selective stabilisation in learning and memory in the nervous system and in the immune system, opening concepts for later work in genomics.
Danchin, JP Changeux
The question is now to try and understand how the future of daughter cells is organised at the time of cell division, and what are the main selective stabilisation processes at work...
• Synthetic Biology
A Danchin, A Sekowska
In his seventeenth-century classic, Novum Organum, Francis Bacon wrote, "we cannot command nature except by obeying her" (Bacon, 2010). Although our knowledge of living systems has much improved since Bacon's time, we are still far from understanding - or commanding - all the complex mechanisms of life. To take full advantage of living organisms for the benefit of mankind, we will need to understand those mechanisms to the furthest possible extent. To do so will require that the concept of information and the theories of information science take a more prominent role in the understanding of living systems...
A Danchin, PM Binder, S Noria
M Porcar, A Danchin, V de Lorenzo,
VA dos Santos, N Krasnogor, S Rasmussen, A Moya
CG Acevedo-Rocha, G Fang, M Schmidt,
DW Ussery, A Danchin
This work is summarised in more than 500 articles (350 referenced at PubMed, and 440 at the ISI) and four books in Molecular Biology and Genetics:
Ordre et Dynamique du Vivant. Chemins de la Biologie Moléculaire - Le Seuil, 1978.
and more recently on the Origin of Life:
A book on the revolution of Genomics, published in 1998 by Odile Jacob:
La Barque de Delphes. Ce que révèle le texte des génomes - Odile Jacob -1998
Updated and translated into English in 2003:
A Presocratic view of Biology