## Towards a Theory of the Organism

Mae-Wan Ho

Bioelectrodynamics Laboratory, Open University, Walton Hall, Milton Keynes, MK7 6AA, U.K.

Abstract

A tentative theory of the organism is derived from McClare’s (1971) notion of stored energy and Denbigh’s (1951) thermodynamics of the steady state: the organism is a dynamically closed, energetically self-sufficient domain of cyclic non-dissipative processes coupled to irreversible dissipative processes. This effectively frees the organism from thermodynamic constraints, so that it is poised for rapid, specific intercommunication and can function as a coherent whole. In the ideal, the organism is a quantum superposition of coherent activities over all space-time domains, with instantaneous (nonlocal) noiseless intercommunication throughout the system. Evidence for quantum coherence is reviewed.

## 1 Introduction

Organisms are so enigmatic from the physical, thermodynamic point of view that Lord Kelvin, co-inventor of the second law of thermodynamics, specifically excluded them from its dominion (Ehrenberg, 1967). As distinct from heat engines, which require a constant input of heat energy in order to do work, organisms are able to work without a constant energy supply, and moreover can mobilise energy at will, whenever and wherever required, and in a perfectly coordinated way. Similarly, Schrödinger (1944) was impressed with the ability of organisms to develop and evolve as a coherent whole, and in the direction of increasing organization, in defiance of the second law. He suggested that they feed upon “negative entropy” to free themselves from all the entropy they cannot help producing. The intuition of both physicists is that energy and living organization are intimately linked. Schrödinger was reprimanded, by Linus Pauling and others, for using the term “negative entropy”, which does not correspond to any rigorous thermodynamic entity (Gnaiger, 1994). However, the idea that open systems can “self-organize” under energy flow became more concrete with the discovery of dissipative structures that depend on the flow and dissipation of energy (Prigogine, 1967), such as the Bénard convection cells and the laser (Haken, 1977). In both cases, energy input results in a phase transition to global dynamic order in which all the molecules or atoms in the system move coherently. From these and other considerations, I have identified Schrödinger’s “negative entropy” as “stored mobilizable energy in a space-time structured system” (Ho, 1993, 1994b, 1995a), which begins to offer a possible solution to the enigma of living organization. In this paper, I outline a theory of the organism as a dynamically and energetically closed domain of cyclic non-dissipative processes coupled to irreversible dissipative processes.
This effectively frees the organism from thermodynamic constraints so that it is poised for rapid, specific intercommunication, enabling it to function as a coherent whole. In the ideal, the organism is a quantum superposition of coherent activities over all space-time domains, with instantaneous (nonlocal) noiseless intercommunication throughout the system.

## 2 Stored mobilizable energy

The concept of stored energy in this paper derives from McClare (1971), who attempted to formulate the second law of thermodynamics so that it can apply not only to ensembles of molecules, as is conventionally the case, but also to a single molecule. He was motivated to do so because organisms are by no means large ensembles of identical molecules. Instead, a cell typically has one or two molecules of DNA, and a few molecules of specific ligands binding to receptors on its membrane are sufficient to initiate a cascade of increasingly macroscopic changes. Furthermore, all energy transductions in the living system are dependent on enzymes and other proteins functioning individually as ‘molecular energy machines’, transferring energy directly from the point of release to the point of utilization. McClare introduced the notion of a characteristic time interval, τ, for a system at temperature θ, which partitions the energies in the system into stored versus thermal energies. Thermal energies are those that exchange with each other and reach equilibrium in a time less than τ, so technically they give the so-called Boltzmann distribution characterized by the temperature θ. Stored energies are those that remain in a non-equilibrium distribution for a time greater than τ, either as characterized by a higher temperature, or such that states of higher energy are more populated than states of lower energy. So, stored energy is any form that does not thermalise, or degrade into heat, in the interval τ.
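McClare’s partition can be sketched as a toy classification. All mode names and timescales below are invented for illustration; they are not taken from McClare:

```python
# Toy sketch of McClare's partition: energies that equilibrate in less than
# the characteristic interval tau count as thermal (Boltzmann-distributed at
# temperature theta); energies that persist longer than tau count as stored.
# All mode names and relaxation times here are hypothetical illustrations.

def partition_energies(relaxation_times, tau):
    """Split modes into thermal (< tau) and stored (>= tau) by relaxation time."""
    thermal = {mode: t for mode, t in relaxation_times.items() if t < tau}
    stored = {mode: t for mode, t in relaxation_times.items() if t >= tau}
    return thermal, stored

modes = {
    "solvent collisions": 1e-13,            # thermalises almost instantly
    "protein conformational strain": 1e-3,  # hypothetical slow mode
    "ATP terminal bond": 1.0,               # held until hydrolysis
}
thermal, stored = partition_energies(modes, tau=1e-9)
# thermal: solvent collisions; stored: the two slower modes
```

The point of the sketch is only that the boundary between “thermal” and “stored” is set by a timescale, not by the kind of energy involved.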
McClare then restated the second law as follows: useful work is only done by a molecular system when one form of stored energy is converted into another. In other words, thermalised energy is unavailable for work, and it is impossible to convert thermalised energy into stored energy. This restatement of the second law seems unnecessarily restrictive, and possibly untrue, for thermalised energy from an enzyme molecule embedded in a membrane, or a matrix such as the cytoskeleton, is likely to cause conformational changes in neighbouring enzyme molecules, resulting in further work being done. Thermalised energy from burning coal or petrol is routinely used to run generators and motor cars. However, they do so against an external constraint, such as a piston, which, in taking up the thermalised energy, is in a position to do work against the system external to the combustion chamber. This suggests that ‘the system’ must be more explicitly defined in relation to the extent of equilibration. A more adequate restatement of McClare’s second law, which improves on my previous attempts (Ho, 1994, 1995a), might be as follows: useful work is done by molecules by a direct transfer of stored energy, and thermalised energy cannot be converted into stored energy within the same system, the system being the extent over which thermal and other rapidly exchanging energies equilibrate. The first half of the formulation, much as McClare proposed it, is new and significant for biology. The second half, which I have modified, introduces the concept of a ‘system’ defined as the extent over which thermalised and other rapidly exchanging energies equilibrate. It allows for the possibility that thermalised energies from one (sub)system can do work in a larger encompassing system for which the thermalised and other energies are in a nonequilibrium distribution.
This is highly relevant for living systems (Ho, 1993), which are now known to have a nested dynamic organization of compartments and microcompartments, down to the interior of cells and organelles (Welch and Clegg, 1987), where single molecular energy machines may cycle autonomously without equilibrating with their environment. The major consequence of McClare’s ideas arises from the explicit introduction of time, and hence time-structure. For there are now two quite distinct ways of doing useful work at maximum efficiency: not only slowly, according to conventional thermodynamic theory, but also quickly – both of which are reversible, as no entropy is generated. This is implicit in the classical formulation, dS ≥ 0, for which the limiting case is dS = 0; the attention to time-structure makes much more precise what the limiting conditions are. A slow process is one that occurs at or near equilibrium. A reversible thermodynamic process merely needs to be slow enough for all thermal, or other exchanging energies, to equilibrate, i.e., slower than τ, which can in reality be a very short period of time for processes that have short time constants. Thus, for a process that takes place in 10⁻¹² s, a microsecond (10⁻⁶ s) is an eternity. So high efficiencies of energy conversion can still be attained in thermodynamic processes which occur quite rapidly, provided that equilibration is fast enough. Compartmentation and microcompartmentation effectively restrict the volume within which equilibration occurs, thus reducing the equilibration time. Thus, the living system is both thermodynamically optimized, in terms of efficiency of energy transformation and transfer, and kinetically optimized, in terms of the speed with which reactions can occur (Ho, 1995a). At the other extreme, there can also be a process occurring so quickly that it, too, is reversible.
In other words, provided the exchanging energies are not thermal energies in the first place, but remain stored, the process is limited only by the speed of light. Resonant energy transfer between molecules is an example of a fast process. It occurs typically in 10⁻¹⁴ s, whereas the molecular vibrations themselves die down, or thermalise, in 10⁻⁹ s to 10⁻¹ s. McClare (1972) suggests that a form of resonant energy transfer may occur in muscle contraction, where it has been shown that the energy released in the hydrolysis of ATP can be almost completely converted into mechanical energy in a molecular machine that can cycle autonomously without equilibrating with its environment. Ultrafast resonant energy transfer takes place between the light-trapping antenna complex and the reaction center of the photosynthetic system in the thylakoid membrane, and is also involved in the first step of the separation of positive and negative charges across the membrane (Fleming et al., 1988). McClare’s ideas have been taken up and developed by Blumenfeld (1983) and, more recently, Welch and Kell (1985), among others (see many chapters in Welch, 1986), particularly in the concept of nonequilibrium ‘quantum molecular energy machines’, which is now increasingly accepted among protein biochemists and biophysicists. I suspect, however, that most molecular energy machines may be functioning in the quasi-equilibrium mode (see Ho, 1995a). I have generalised McClare’s notion of ‘characteristic time’ of energy storage to ‘characteristic space-time’, which captures the space-time differentiation of living processes more precisely. Stored energy, being capable of doing work, is also mobilizable energy, or coherent energy. (Coherent energy comes and goes together, so it can do work, as opposed to incoherent energy, which goes in all directions and cancels itself out.) As the energy is stored over all space-times, so it is mobilizable all over the system.
Stored energy is really a more precise formulation of the usual “free energy”, which has no space-time characterization. Detailed arguments for energy storage in living systems are presented elsewhere (Ho, 1993; 1995a).

## 3 Energy storage frees the organism from thermodynamic constraints

### 3.1 Energy storage and mobilization in living systems

The key to understanding the thermodynamics of the living system is neither energy flow nor energy dissipation, but energy storage under energy flow (Fig. 1). Energy flow is of no consequence unless the energy is trapped and stored within the system, where it circulates to do work before being dissipated. A reproducing life cycle, i.e., an organism, arises when the loop of circulating energy closes. At that point, we have a life cycle within which the stored energy is mobilised, remaining stored as it is mobilised, and coupled to the energy flow.

Figure 1 here

Energy storage depends on the highly differentiated space-time structure of the life cycle, whose predominant modes are themselves cycles of different sizes, spanning many orders of magnitude of space-time, all coupled together and feeding off the one-way energy flow (Ho, 1993; 1994; 1995a). The more cycles there are, the more energy is stored, and the longer it takes for the energy to dissipate. The average residence time of energy (see Morowitz, 1968) is therefore a measure of the organized complexity of the system. An intuitive representation is given in Figure 2.

Figure 2 here

Coupled processes are familiar in biochemistry: practically all thermodynamically uphill reactions (ΔG positive) are coupled to thermodynamically downhill ones (ΔG negative). The ATP/ADP couple, ubiquitous in the living system, effectively turns all biosynthetic and other energy-requiring uphill reactions downhill (c.f. Harold, 1986). Life is literally downhill, or effortless, all the way (Ho, 1995a).
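Morowitz’s residence-time measure can be illustrated with a toy steady-state balance, using invented figures: at steady state, the mean residence time of energy equals the total energy stored in the coupled cycles divided by the throughput flux, so adding cycles lengthens the residence time.

```python
# Toy illustration (invented numbers): mean residence time of energy at
# steady state = total stored energy / throughput flux. A system with more
# coupled cycles stores more energy and holds it longer before dissipation.

def mean_residence_time(energy_per_cycle, flux):
    """energy_per_cycle: energy stored in each coupled cycle; flux: throughput per second."""
    return sum(energy_per_cycle) / flux

flux = 10.0                      # arbitrary energy units per second
few_cycles = [5.0, 5.0]          # 10 units stored -> 1 s residence
many_cycles = [5.0] * 10         # 50 units stored -> 5 s residence
assert mean_residence_time(many_cycles, flux) > mean_residence_time(few_cycles, flux)
```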
That living processes are organized in cycles is also intuitively obvious from a casual examination of the metabolic chart. Apart from prominent cycles such as the tricarboxylic acid cycle and the cyclic interconversion of ATP/ADP, NADH/NAD and other redox intermediates, many more cycles and epicycles are entangled in the metabolic network. Another prominent way in which cycles appear is in the familiar form of the wide spectrum of biological rhythms, with periods ranging from milliseconds for electrical discharges of single cells to circadian and circa-annual cycles in whole organisms and populations of organisms (Breithaupt, 1989; Ho, 1993). These cycles interlock to give the organism a complex, multidimensional, entangled space-time, very far removed from the simple, linear Newtonian space and time (Ho, 1993; 1994b). That these rhythms are indeed entangled is indicated by the remarkable observations that mutations in two genes of Drosophila, period and timeless, which speed up, slow down or abolish the circadian rhythm, also cause corresponding changes in the millisecond wing-beat cycle of the male fly’s love song (see Zeng et al., 1996). This correlation spans seven orders of magnitude of characteristic timescales, reflecting the full extent of storage and mobilization of energy in the living system. Energy is stored and mobilised over all space-times according to the relaxation times and volumes of the processes involved. The result, as mentioned above, is that organisms can take advantage of two different ways to mobilise energy with maximum efficiency: nonequilibrium transfer, in which stored energy is transferred before it is thermalised, and quasi-equilibrium transfer, for which the free energy change approaches zero according to conventional thermodynamic considerations.
As all the space-time modes are coupled together, energy input into any mode can be readily delocalised over all modes, and conversely, energy from all modes can become concentrated into any mode. In other words, energy coupling in the living system is symmetrical (see Ho, 1993, 1994b, 1995a,b), as argued in more detail below.

### 3.2 Symmetrical coupling of cyclical flows

Symmetrical energy coupling and cyclical flows are both key aspects of the living system that are actually predicted from the thermodynamics of the steady state, in the form, respectively, of Onsager’s reciprocity relationship (see Denbigh, 1951 for an accessible exposition), and of Morowitz’s (1968) theorem. Onsager’s reciprocity relationship is well known. It states that for a system of many coupled linear flows under conjugate forces,

$$J_i = \sum_k L_{ik} X_k \qquad (1)$$

where $J_i$ is the flow of the $i$th process ($i = 1, 2, 3, \ldots, n$), $X_k$ is the $k$th thermodynamic force ($k = 1, 2, 3, \ldots, n$), and $L_{ik}$ are the proportionality coefficients (where $i = k$) and coupling coefficients (where $i \neq k$), the couplings for which the $X_k$s are invariant with time reversal (i.e., velocity reversal) will be symmetrical; in other words,

$$L_{ik} = L_{ki} \qquad (2)$$

so long as the $J$s and the $X$s satisfy $T\theta = \sum_i J_i X_i$, where $\theta$ is the rate of entropy increase per unit volume (I thank Denbigh (personal communication) for this formulation). Morowitz’s (1968) theorem, much less known, states that the flow of energy through the system from a source to a sink will lead to at least one cycle in the system at steady state, *provided that the energy is trapped and stored within the system* (italics mine). This important theorem is, as far as I know, the only attempt to account for cycles in the living system; it implies that the steady state – at which global balance is maintained – must harbour nonlinear processes (see Ho, 1993). I present a shortened version of Morowitz’s proof below.
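A minimal numerical check of Eqs (1) and (2), with an invented symmetric coupling matrix, shows the flows following from the forces and the dissipation $T\theta = \sum_i J_i X_i$ coming out non-negative:

```python
import numpy as np

# Numeric sketch of Onsager's relations with invented coefficients: a
# symmetric, positive-definite matrix L couples two linear flows to their
# conjugate forces, and the entropy production T*theta = sum_i J_i X_i >= 0.
L = np.array([[2.0, 0.5],
              [0.5, 1.0]])      # L[i][k] == L[k][i], Eq (2)
X = np.array([1.0, -0.3])       # conjugate thermodynamic forces

J = L @ X                       # Eq (1): J_i = sum_k L_ik X_k
T_theta = float(J @ X)          # rate of dissipation, T * theta

assert np.allclose(L, L.T)      # reciprocity
assert T_theta >= 0.0           # second-law consistency
```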
Consider a canonical ensemble of systems at equilibrium with $i$ possible states, where $f_i$ is the fraction of systems in state $i$ (also referred to as the occupation number of state $i$), and $t_{ij}$ is the transition probability that a system in state $i$ will change to state $j$ in unit time. The principle of microscopic reversibility requires that every forward transition is balanced in detail by its reverse transition, i.e.,

$$f_i t_{ij} = f_j t_{ji} \qquad (3)$$

If the equilibrium system is now irradiated by a constant flux of electromagnetic radiation such that there is net absorption of photons by the system, i.e., the system is capable of storing energy, a steady state will be reached at which there is a flow of heat out into the reservoir (sink) equal to the flux of electromagnetic energy into the system. At this point, there will be a different set of occupation numbers and transition probabilities, $f_i'$ and $t_{ij}'$, for there are now both radiation-induced transitions as well as the random thermally induced transitions characteristic of the previous equilibrium state. This means that for some pairs of states $i$ and $j$,

$$f_i' t_{ij}' \neq f_j' t_{ji}' \qquad (4)$$

For, if the equality held for all pairs of states, it would imply that for every transition involving the absorption of a photon, a reverse transition takes place involving the radiation of the photon, such that there is no net absorption of electromagnetic radiation by the system. This contradicts the original assumption that there is absorption of radiant energy (see previous paragraph), so we must conclude that the equality of forward and reverse transitions does not hold for some pairs of states. However, at steady state, the occupation numbers (or the concentrations of chemical species) are time independent (i.e., they remain constant), which means that the sum of all forward transitions is equal to the sum of all backward transitions, i.e.,

$$df_i'/dt = 0 = \sum_j \left( f_i' t_{ij}' - f_j' t_{ji}' \right) \qquad (5)$$

But it has already been established that some $f_i' t_{ij}' - f_j' t_{ji}'$ are non-zero.
That means other pairs must also be non-zero to compensate. In other words, members of the ensemble must leave some states by one path and return by other paths, which constitutes a cycle. Hence, in steady state systems, the flow of energy through the system from a source to a sink will lead to at least one cycle in the system. Morowitz’s theorem also implies that the steady state necessarily violates the principle of microscopic reversibility, which, as Onsager originally argued, is a principle extraneous even to thermodynamic equilibrium (see Denbigh, 1951). Onsager’s reciprocity relationship has been extended to the far-from-equilibrium regime by Rothschild et al. (1980) for multi-enzyme systems and, more recently, by Sewell (1991) for infinite quantum systems. However, the validity and the theoretical basis for the extension of Onsager’s reciprocity relationship to biological systems are still under debate (Westerhoff and van Dam, 1987). I believe some form of Onsager’s reciprocity relationship does hold in living systems, if only to account for, on the one hand, the ready mobilization of energy – why we can have energy at will – and, on the other, the linear relationships between steady-state flows and conjugate thermodynamic forces outside the range of equilibrium, which are actually observed in many biological systems (Berry et al., 1987, and references therein). According to Rothschild et al. (1980), linearity in biological processes can arise in enzymes operating near a multidimensional inflection point far away from thermodynamic equilibrium, if some of the rate constants are linked. That is realistic for living systems, which are now known to have highly organized flows in the cytoplasmic matrix due to dynamic compartmentation and microcompartmentation (Welch, 1985, and references therein).
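Morowitz’s argument can be checked numerically with a toy three-state master equation, all rates invented: pumping makes the forward and reverse rates around the loop unequal, and at the resulting steady state the occupations are constant (Eq 5) while detailed balance (Eq 3) fails, leaving a net cyclic flux.

```python
import numpy as np

# Toy three-state master equation illustrating Morowitz's theorem. The rate
# matrix is asymmetric (forward 1.0, reverse 0.2 around the loop), standing
# in for radiation-driven transitions superposed on thermal ones. All
# numbers are invented for illustration.
k = np.array([[0.0, 1.0, 0.2],
              [0.2, 0.0, 1.0],
              [1.0, 0.2, 0.0]])            # k[i, j]: transition rate i -> j

W = k.T - np.diag(k.sum(axis=1))           # generator: df/dt = W @ f
A = np.vstack([W, np.ones(3)])             # append normalisation sum(f) = 1
b = np.array([0.0, 0.0, 0.0, 1.0])
f, *_ = np.linalg.lstsq(A, b, rcond=None)  # steady-state occupations

df_dt = W @ f
net_flux_01 = f[0] * k[0, 1] - f[1] * k[1, 0]  # net flow state 0 -> 1

assert np.allclose(df_dt, 0.0, atol=1e-9)      # Eq (5): steady state
assert abs(net_flux_01) > 1e-3                 # Eq (4): detailed balance broken
```

The occupations settle to constants, yet members of the ensemble circulate 0 → 1 → 2 → 0: exactly the cycle the theorem predicts.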
In common with Rothschild et al. (1980), Sewell shows how Onsager’s reciprocity relationship applies to locally linearized combinations of forces and flows which nonetheless behave globally in a nonlinear fashion. Again, that is relevant for the living system, where nested compartments and microcompartments ensure that many processes can operate locally at thermodynamic equilibrium even though the system or subsystem as a whole is far away from equilibrium (Ho, 1995a). Furthermore, as each process is ultimately connected to every other in the metabolic net through catenations of space and time, even if truly symmetrical couplings are localised to a limited number of metabolic/energy-transducing junctions, the effects will eventually be shared, or delocalised, throughout the system, so that symmetry will apply to appropriate combinations of forces and flows (Sewell, 1991) over a sufficiently macroscopic space-time scale. That is perhaps the most important consideration. As real processes take time, Onsager’s reciprocity relationship cannot be true for an arbitrarily short instant, but must apply over a sufficiently macroscopic time interval when overall balance holds.

### 3.3 Thermodynamics of the steady state vs thermodynamics of organized complexity

Denbigh (1951) defines the steady state as one in which “the macroscopic parameters, such as temperature, pressure and composition, have time independent values at every point of the system, despite the occurrence of a dissipative process.” (p. 3) That is too restrictive to apply to the living system, which, as mentioned earlier, has coupled processes spanning the whole gamut of relaxation times and volumes.
A less restrictive formulation – one consistent with a “thermodynamics of organized complexity” – might be to define the living system, to a first approximation, as a dynamic equilibrium in which the macroscopic parameters, such as temperature, pressure and composition, have time-independent values despite the occurrence of dissipative processes (see Ho, 1993, 1994a, 1996a). The present formulation omits the phrase “at every point of the system”, on the grounds that microscopic homogeneity is not crucial for the formulation of any thermodynamic state, as the thermodynamic parameters are macroscopic entities quite independent of the microscopic interpretation (Ho, 1993). Like the principle of microscopic reversibility, it is extraneous to the phenomenological laws of thermodynamics, as Denbigh (1951) himself has convincingly argued. The first incursion into the thermodynamics of the steady state was W. Thomson’s (Lord Kelvin’s) treatment of the thermoelectric effect (see Denbigh, 1951). This involves a circuit in which heat is absorbed and rejected at two junctions (the Peltier heat), and in addition, heat is absorbed and given off due to current flows between two parts of the same metal at different temperatures (the Thomson heat). Both of these heat effects are reversible, in that they change sign but remain the same in magnitude when the direction of the current is reversed. On the other hand, there are two other effects which are not reversible: heat conduction along the wires, and dissipation due to the resistance. It is thus impossible to devise a reversible thermoelectric circuit, even in principle. Nevertheless, Thomson took the step of assuming that, at steady state, those heat effects that are reversible, i.e., the Peltier heat and the Thomson heat, balance each other so that no net entropy is generated,

$$\Delta S_P + \Delta S_T = 0 \qquad (6)$$

On that basis, he derived the well-known relations between the Peltier and Thomson heats and the temperature coefficient of the electromotive force.
It was a bold new departure in the application of the second law, but one which was subsequently justified by experimental evidence. Very similar methods were used later by Helmholtz in his treatment of the electromotive force and transport in the concentration cell, where he states clearly that the two irreversible processes in the cell, heating and diffusion, are to be disregarded, and the second law applied to those parts of the total process which are reversible. Most modern accounts of this system follow the same procedure. A virtual flow of current is supposed to take place across the liquid junction, resulting in a displacement of the ions. The process is taken to be reversible and to generate no net entropy. The justification, according to Guggenheim (cited in Denbigh, 1951), is that the two processes, diffusion and flow of current across the junction, “take place at rates which vary according to different laws” when the composition gradient across the boundary is altered, and so it seems reasonable to suppose that the two processes are merely superposed, and that the one may be ignored when considering the other. Thus, the steady state is treated as if there were no dissipative processes, and it is this assumption which is later validated by Onsager’s reciprocity relationship.

### 3.4 The organism is a superposition of cyclic non-dissipative processes coupled to dissipative processes

In the same spirit, I propose to treat the living system as a superposition of non-dissipative processes and dissipative irreversible processes, so that Onsager’s reciprocity relationship applies only to the former. In other words, it applies to coupled processes for which the net entropy production is balanced, or zero,

$$\sum_k \Delta S_k = 0 \qquad (7)$$

This will include most living processes because of the ubiquity of coupled cycles, for which the net entropy production balances out to zero.
The principle applies, in fact, to the smallest unit cycle in the living system – enzyme catalysis – on which all energy transduction in the living system is absolutely dependent. Over the past 30 years, Lumry and his coworkers (see Lumry, 1991) have shown convincingly how the flexible enzyme molecule balances out entropy with enthalpy to conserve free energy (i.e., stored or coherent energy in the present context) during catalysis, in accordance with the relationship for isothermal processes,

$$\Delta G = \Delta H - T\Delta S = 0 \qquad (8)$$

The organism is, in effect, a closed, self-sufficient energetic domain of cyclic non-dissipative processes coupled to irreversible dissipative processes (Ho, 1995b). In the formalism of conventional thermodynamics, the life cycle, or more precisely, the living system in dynamic equilibrium, consists of all cyclic processes for which the net entropy change is zero, coupled to the dissipative processes necessary to keep it going, for which the net entropy change is greater than zero (Fig. 3).

Figure 3 here

In other words, there is an internal entropy compensation, as well as coherent energy conservation, due to the predominance of coupled cyclic processes and the nested space-time organization of the processes.

### 3.5 The principle of internal entropy compensation implies the principle of minimum entropy production

Prigogine derived a theorem of minimum entropy production (see Glansdorff and Prigogine, 1971), which states that the entropy exported from a system reaches a minimum, or becomes zero, at thermodynamic equilibrium and at steady states close to thermodynamic equilibrium. The theorem is a direct consequence of Onsager’s reciprocity relationship, which holds at steady states close to thermodynamic equilibrium. The principle of internal entropy compensation proposed here is in addition to, and implies, the principle of minimum entropy production, and may even be valid in regimes far from thermodynamic equilibrium.
Prigogine’s theorem of minimum entropy production was derived for homogeneous systems where all volume elements are uniform and locally at equilibrium. On the contrary, internal entropy compensation applies to systems with organized heterogeneity – such as organisms – so that positive entropy production in some space-time elements may be compensated by negative entropy production in other elements. Alternatively, positive entropy flows in some directions may be compensated by negative entropy flows in other directions, or else some form of enthalpy-entropy compensation could take place, as mentioned above, so that coherent energy is conserved. The system could be arbitrarily far from equilibrium, so long as, at some sufficiently macroscopic space-time of interest, overall balance is attained, and the net entropy production of the system either vanishes or reaches a minimum. The internal balance of entropy production means that the system maintains its organized heterogeneity, or dynamic order. It is in turn dependent on energy flow being symmetrically coupled, and cyclically closed, over the system as a whole. This is the same as the argument presented earlier for the validity of Onsager’s reciprocity relationship in systems far from thermodynamic equilibrium. While most current thermodynamic analyses ignore space-time structure, the “thermodynamics of organized complexity” applying to living systems (Ho, 1993) is dependent on space-time heterogeneity, which allows ‘free’ variation of microscopic states within macroscopic constraints. Thus, stability criteria which apply to the system as a whole need not be satisfied in individual space-time elements. Each element may be defined by the extent of equilibration according to the characteristic timescale of its constituent process(es), and so the local equilibrium assumption can still be satisfied. But each space-time element need not be in equilibrium with other elements.
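Both compensation mechanisms can be sketched with invented numbers: space-time elements whose entropy productions cancel over the whole system (Eq 7), and an enzyme step whose enthalpy change is balanced by $T\Delta S$ so that stored energy is conserved (Eq 8):

```python
# Hedged numerical sketch, with invented values, of the two compensation
# mechanisms discussed above.

# (i) Internal entropy compensation, Eq (7): individual space-time elements
# may produce or consume entropy, but the sum over the whole organism
# vanishes at a sufficiently macroscopic scale.
element_entropy_production = [+2.5, -1.0, +0.75, -2.25]  # per element, arbitrary units
net_entropy = sum(element_entropy_production)
assert net_entropy == 0.0

# (ii) Enthalpy-entropy compensation, Eq (8): Delta G = Delta H - T*Delta S = 0
# when the entropy change exactly compensates the enthalpy change.
T = 310.0                 # physiological temperature, K
dH = 50_000.0             # J/mol, hypothetical enthalpy change of a catalytic step
dS = dH / T               # compensating entropy change, J/(mol K)
dG = dH - T * dS
assert abs(dG) < 1e-6     # free (stored) energy conserved
```

Note that no individual element need satisfy the balance on its own; only the whole system, over a sufficiently macroscopic space-time, does.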
### 3.6 Consequences of dynamic closure

The dynamic closure of the living system has a number of important consequences. First and foremost, it frees the organism from the immediate constraints of energy conservation – the first law – as well as from the second law of thermodynamics, thus offering a solution to the enigma of the organism posed by Lord Kelvin and Schrödinger. There is always energy available within the system, for it is stored and mobilised at close to maximum efficiency over all space-time domains. The other consequences are that the organism is also free from mechanical constraints, and satisfies at least some of the basic conditions for quantum coherence. I shall deal with these in the sections following. The present formulation converges formally with several other representations of living organization: Maturana and Varela’s (1987) concept of life as autopoiesis – a unitary, self-producing entity; Eigen and Schuster’s (1977) hypercycle of RNA-directed protein synthesis, in turn directing RNA polymerization; and Kauffman’s (1993) catalytic closure of polypeptide formation in the origin of life. However, unlike the present formulation, none of these previous representations is based explicitly on physical, thermodynamic principles, which offer new and important insights into the living system.

## 4 The exquisite sensitivity of the organism that is free from mechanistic constraints

One of the hallmarks of the living system is that it is exquisitely sensitive to specific, weak signals. For example, the eye can detect single photons falling on the retina, where the light-sensitive cell sends out an action potential that represents a million-fold amplification of the energy in the photon. Similarly, a few molecules of pheromone in the air are sufficient to attract male insects to their mates. No part of the system has to be pushed or pulled into action, nor be subjected to mechanical regulation and control.
Instead, coordinated action of all the parts depends on rapid intercommunication throughout the system. The organism is a system of “excitable media” (see Goodwin, 1994, 1995), or excitable cells and tissues, poised to respond specifically and disproportionately to weak signals, because the large amount of energy stored everywhere automatically amplifies weak signals, often into macroscopic actions. As mentioned earlier, stored energy is coherent energy capable of doing work. The organism, therefore, is a highly coherent domain possessing a full range of coherence times and coherence volumes of energy storage. In the ideal, it can be regarded as a quantum superposition of coherent space-time modes.

## 5 The coherence of organisms

The ultimate problem of living organization is to account for the irreducible wholeness of the organism, which, as Needham (1936) states, encompasses the activities of elementary particles and atoms, of molecules and cells, tissues and organs, up to the organism itself, and beyond. The problem had never been adequately addressed until Fröhlich (1968; 1980) presented the first detailed theory of coherence of the organism. He argued that as organisms are made up of strongly dipolar molecules packed rather densely together, they approach the ‘solid state’, where electric and elastic forces constantly interact. Metabolic pumping will excite macromolecules such as proteins and nucleic acids, as well as cellular membranes, which typically have an enormous electric field of some 10⁷ V/m across them. These will start to vibrate and eventually build up into collective modes, or coherent excitations, of both phonons and photons, extending over macroscopic distances within, and perhaps also outside, the organism. Coherent excitations are possible precisely because the system does not dissipate its energy immediately, but stores it and circulates it among the different modes in the system, as described in the previous section.
The dynamic, energetic closure of the living system, together with the ‘solid-state’ nature of organisms, provides the conditions for coherent excitations (see Ho, 1993; 1995b), and the closest analogy is the solid-state laser. There, the reflective cavity is the closure required, and continued input of energy beyond the lasing threshold will maintain the lasing action, or coherent excitation, of the emitting atoms. The closure itself is significant in that it enables the creation of a macroscopic quantum system (Leggett, 1986) with effectively a single degree of freedom, in other words, a quantum coherent domain. Such a system possesses a Hamiltonian and can therefore be represented in terms of a macroscopic wave function (cf. Fröhlich and Hyland, 1995).

5.1 Quantum coherence in living organisms

I have presented detailed heuristic arguments elsewhere on why the wholeness of organisms is to be understood as quantum coherence (Ho, 1993; 1995b). First, there is increasingly compelling evidence that organisms, and in particular cells, are organized to approach the ‘solid state’ (or, more accurately, the liquid crystalline state, as I shall describe later on), in which much of the cell water is structured on the large amount of surface available in the “microtrabecular matrix” that fills up the so-called cytosol (see Clegg, 1985). That, plus the dynamic and energetic closure of the living system argued above, would seem to me to provide both the necessary and sufficient conditions for coherent excitations to occur, much as Fröhlich suggested. Second, the predominant interactions in the solid-state organism, as in any solid state, are electric and electromagnetic, and necessarily so, for those are the only ways in which molecules interact and form the various hierarchies of supramolecular assemblies that make up living organisms themselves.
Third, living organisms depend on quantum reactions, not only in the sense that quantum tunnelling is explicitly recognized in electron and proton transfer, but especially in the sense that all energy transductions are carried out by individual enzymes and other molecules acting as “quantum energy machines”, in which individual quanta of energy released are directly transferred from the point of release to the point of use (McClare, 1971; see also Ho, 1993; 1995a). The coordination of such activities requires nothing short of quantum coherence, especially in view of the rapidity and specificity with which responses or intercommunication can take place in the living system. By far the most persuasive argument, to my mind, is the nature of the coordination that is achieved in the organism, where every single part in this magnificently diverse and pluralistic multiplicity, down to an individual quantum molecular energy machine, seems to be able to work autonomously while keeping in step and in tune with the whole. So perfectly do all the parts work together that, as Schrödinger (1944) remarked, we never experience ourselves as the multiplicity that we are, but always as the singular “I”. That requires nothing other than the factorizability of the quantum coherent state, which I shall explain below.

5.2 The factorizability of quantum coherence

A quantum coherent state is a pure state (a state of oneness) that has the property of factorizability. This can be understood by considering Young’s two-slit experiment (Fig. 4), in which a source of monochromatic light is placed behind a screen with two narrow slits. As is well known, light behaves as either particles or waves according to whether one or both slits are open. When both slits are open, even single photons behave as waves, in that they seem to pass through both slits at once and, falling upon the photographic plate, produce a pattern which indicates that each photon, in effect, interferes with itself.
Figure 4 here

The intensity or brightness of the pattern at each point depends on the sum of four correlation functions:

I = G(t,t) + G(b,b) + G(t,b) + G(b,t)    (9)

where G(t,t) is the intensity with only the top slit opened, G(b,b) the intensity with only the bottom slit opened, and G(t,b) + G(b,t) = 2G(t,b) is the additional intensity (which takes on both positive and negative values) when both slits are opened. At different points on the photographic plate, the intensity is

I = G(t,t) + G(b,b) + 2|G(t,b)| cos θ    (10)

where θ is the phase difference between the two light waves. The fringe contrast in the interference pattern depends on the magnitude of G(t,b). If this correlation function vanishes, it means that the light beams coming out of t and b are uncorrelated; and if there is no correlation, we say that the light at t and b is incoherent. On the other hand, an increase in coherence results in an increase in fringe contrast, i.e., the brightness of the bands. As cos θ is never greater than one (i.e., when the two beams are perfectly in phase), the fringe contrast is maximized by making G(t,b) as large as possible, and that signifies maximum coherence. But there is an upper bound to how large G(t,b) can be. It is given by the Schwarz inequality:

G(t,t)G(b,b) ≥ |G(t,b)|^2

The maximum of G(t,b) is obviously obtained when the two sides are equal:

G(t,t)G(b,b) = |G(t,b)|^2    (11)

Now, it is this equation that gives us a description of quantum coherence. A field is coherent at two space-time points, say, t and b, if the above equation is true. Furthermore, we have a coherent field if this equality holds for all space-time points, X1 and X2.
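The relation between the cross-correlation G(t,b), the Schwarz bound and the fringe contrast in equations (9)–(11) can be illustrated numerically. The following sketch is not part of the original argument: it models the light at the two slits as an ensemble of unit phasors, with an illustrative parameter gamma controlling the degree of phase correlation, and checks the Schwarz inequality for each case.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # ensemble size (number of "shots")

def correlations(gamma):
    """Correlation functions for light at the top (t) and bottom (b) slits,
    modelled as unit phasors with tunable phase correlation:
    gamma = 1 gives fully coherent light, gamma = 0 fully incoherent."""
    E_t = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
    E_ind = np.exp(1j * rng.uniform(0, 2 * np.pi, N))  # independent phasor
    E_b = gamma * E_t + np.sqrt(1 - gamma**2) * E_ind
    G_tt = np.mean(np.abs(E_t) ** 2)    # intensity with top slit only
    G_bb = np.mean(np.abs(E_b) ** 2)    # intensity with bottom slit only
    G_tb = np.mean(np.conj(E_t) * E_b)  # cross-correlation between slits
    return G_tt, G_bb, G_tb

for gamma in (0.0, 0.5, 1.0):
    G_tt, G_bb, G_tb = correlations(gamma)
    # Schwarz inequality: equality, eq. (11), only at full coherence.
    assert abs(G_tb) ** 2 <= G_tt * G_bb + 1e-9
    # Fringe pattern of eq. (10); the fringe contrast grows with |G(t,b)|.
    theta = np.linspace(0, 2 * np.pi, 361)
    I = G_tt + G_bb + 2 * np.abs(G_tb) * np.cos(theta)
    visibility = (I.max() - I.min()) / (I.max() + I.min())
    print(f"gamma={gamma:.1f}  |G_tb|={abs(G_tb):.3f}  visibility={visibility:.3f}")
```

For gamma = 0 the cross-correlation averages to essentially zero and the fringes vanish; for gamma = 1 the two fields are identical, the Schwarz inequality is saturated, and the fringe visibility reaches one.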
This coherence is called first-order coherence because it refers to correlation between two space-time points, and we write it more generally as

G(1)(X1, X1)G(1)(X2, X2) = |G(1)(X1, X2)|^2    (12)

The above equation tells us that the correlation between two space-time points in a coherent field factorizes, or decomposes neatly into the product of the self-correlations at the two points separately, and that this factorizability is both a necessary and a sufficient condition for quantum coherence. Factorizability does not mean that the pure state can be factorized into a mixture of states, but it does lead to something quite unusual: any two points in a coherent field are correlated, yet they will still behave statistically independently of each other. If we put two photon detectors in this field, they will register photons independently of each other. That is the direct consequence of how perfectly they are correlated! Coherence can be generalised to arbitrarily higher orders, say, to m approaching infinity, in which case we shall be talking about a fully coherent field. If mth-order coherence holds, then all of the correlation functions which represent joint counting rates for n-fold coincidence experiments (where m ≥ n) factorize.
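Equation (12) can be checked directly in a toy model. In the sketch below (an illustration, not part of the paper; the one-dimensional spatial mode is an arbitrary assumption), a fully coherent field is one whose realizations differ only by a random global phase, and the first-order correlation then factorizes at every pair of sample points.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 8)  # eight sample points X1 ... X8

# A fully coherent field: every shot is the same spatial mode multiplied
# by a random global phase (the mode itself is an arbitrary choice).
mode = (1 + 0.5 * np.cos(3 * x)) * np.exp(2j * np.pi * x)
phases = np.exp(1j * rng.uniform(0, 2 * np.pi, (5000, 1)))
shots = phases * mode  # shape (5000, 8)

# First-order correlation G1(X1, X2) = <E*(X1) E(X2)>, ensemble average.
G1 = np.conj(shots).T @ shots / shots.shape[0]

# Factorizability, eq. (12): G1(X1,X1) G1(X2,X2) = |G1(X1,X2)|^2
# must hold for every pair of points.
self_corr = np.real(np.diag(G1))
factorizes = np.allclose(np.outer(self_corr, self_corr), np.abs(G1) ** 2)
print(factorizes)  # True: the coherent field factorizes
```

Replacing the single mode by a random mixture of two independent modes breaks the equality in general, leaving only the Schwarz inequality of equation (11).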
