As part of my inorganic chemistry course (a required course at my college), we have a module called Introduction to Bioinorganic Chemistry. There, the professor showed examples of enzymes like cytochrome P450 and the drug erythromycin, and circled the active portions.
From what I can see, the active portion is only a tiny fraction of the whole drug or enzyme, so my question is: why do enzymes (and drugs, for that matter) have huge structures surrounding their active portions?
I tried Googling my question but couldn't find anything relevant to what I was looking for. I suspect I wasn't using the correct terminology, so the search engine had no idea what I wanted to know.
Simple explanations are preferred, ideally at the level of high-school biochemistry; I've never taken a biology course in school or college. I am a math major, and this inorganic chemistry course is required. Still, I want to know why the long amino acid chains are needed, since they don't seem to be doing anything (that's what the active site is for).
Picture for reference:
William P. Russ, Matteo Figliuzzi, Christian Stocker, Pierre Barrat-Charlaix, Michael Socolich, Peter Kast, Donald Hilvert, Remi Monasson, Simona Cocco, Martin Weigt, and Rama Ranganathan, Science 369(6502): 440–445 (24 July 2020): an evolution-based, data-driven engineering process can build artificial functional enzymes.
Urate oxidase, also known as uricase, is an important enzyme in purine degradation. It catalyzes the degradation of uric acid to allantoin (Fig 1), which is much more soluble than uric acid. Urate oxidase is present in a wide range of organisms, including animals, plants, fungi, yeasts, and bacteria. However, during evolution, urate oxidase activity appears to have been lost in some higher primates, including humans [3, 4]. In these species, the end product of purine metabolism is uric acid rather than allantoin.
Uric acid exists in the human body mainly as the free acid and as urate salts, both of which are poorly soluble in water. The accumulation of uric acid in the body can result in hyperuricemia, and in some cases its crystallization and precipitation lead to gout. During chemotherapy for leukemia and lymphoma, uric acid excretion increases sharply because of the degradation of nucleic acids from malignant cells. The excess uric acid can obstruct the renal tubules and cause acute renal failure ("tumor lysis syndrome").
Allopurinol is a drug used to treat gout. It inhibits xanthine oxidase and thereby blocks the formation of uric acid. However, it raises the levels of the uric acid precursors hypoxanthine and xanthine, which increases the burden on the kidneys. Direct injection of urate oxidase can rapidly degrade uric acid without the accumulation of additional intermediates. Urate oxidase, coupled with the 4-aminoantipyrine peroxidase system, is also used as a reagent in clinical diagnostic kits.
For clinical applications, it is highly desirable to increase the activity of urate oxidase; hence it is necessary to improve the expression and activity of the enzyme by all available means. Purification of urate oxidase from fungi, yeast, and bacteria has been reported, but large-scale production from these organisms is difficult because of the enzyme's low expression level and low stability. These difficulties have been overcome by heterologous expression of urate oxidase. Rasburicase, a clinical drug, is a recombinant urate oxidase from Aspergillus flavus. Urate oxidases from Candida utilis and Pseudomonas aeruginosa have also been heterologously expressed [7, 9]. Directed evolution is a powerful tool for improving the catalytic activity of an enzyme, and the mutants it yields often provide information about the catalytic mechanism.
In this study, we conducted several rounds of mutagenesis coupled with the staggered extension process and screened the mutants in Escherichia coli to improve the catalytic activity of Bacillus subtilis urate oxidase (BSUO). Several mutants with improved activity were obtained and characterized. To explore the sequence–structure–function relationships of these mutants, a series of single-point mutations was constructed and analyzed. A homology model built with Bacillus sp. TB-90 urate oxidase (PDB ID 5AYJ; 66.78% sequence identity to BSUO) as the template was used to guide our discussion.
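The 66.78% figure is a pairwise percent identity computed over an alignment. As a minimal sketch of how such a number is obtained (the fragments below are hypothetical toys, not the actual BSUO or TB-90 sequences, and gap handling is deliberately naive):

```python
def percent_identity(a: str, b: str) -> float:
    """Percent identity between two pre-aligned, equal-length sequences.
    Columns where either sequence has a gap ('-') are skipped."""
    assert len(a) == len(b), "sequences must be aligned to equal length"
    # Keep only columns where both sequences have a residue
    pairs = [(x, y) for x, y in zip(a, b) if x != '-' and y != '-']
    if not pairs:
        return 0.0
    matches = sum(1 for x, y in pairs if x == y)
    return 100.0 * matches / len(pairs)

# Toy aligned fragments (hypothetical): 7 matches over 8 gap-free columns
print(round(percent_identity("MKT-LLVAGF", "MKTALIVAG-"), 1))  # 87.5
```

Real tools differ mainly in how they treat gap columns and which alignment they score, which is why reported identities for the same pair of proteins can vary by a few percent.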
The effect of distant mutations on the catalytic reaction of dihydrofolate reductase (DHFR) is reexamined by empirical valence bond simulations. The simulations reproduce for the first time the changes in the observed rate constants (without the use of adjustable parameters for this purpose) and show that the changes in activation barriers are strongly correlated with the corresponding changes in the reorganization energy. The preorganization of the polar groups of enzymes is the key catalytic factor, and anticatalytic mutations destroy this preorganization. Some anticatalytic mutations in DHFR also increase the distance between the donor and acceptor, but this effect is not directly related to catalysis since the native enzyme and the uncatalyzed reaction in water have similar average donor−acceptor distances. Insight into the effect of a mutation is provided by constructing the relevant free energy surfaces in terms of the generalized solute−solvent coordinates. It is shown how the mutations change the reaction coordinate and the activation barrier, and it is clarified that the corresponding changes do not reflect dynamical effects. It is also pointed out that all reactions in a condensed phase involve correlated motions (both in enzymes and in solution) and that the change of such motions upon mutations is a result of the change in the shape of the multidimensional reaction path on the solute−solvent surface, rather than the reason for the change in rate constant. Thus, as far as catalysis is concerned, the change in the activation barrier is due to the change in the electrostatic preorganization energy.
This work was supported by Grant GM024492 from the National Institutes of Health (NIH).
Fig 5. Structural changes (e.g., salt bridges and hydrogen bonds) around the active sites of AmyKΔC500-587, AmyKΔC492-587, AmyKΔC500-587::OP, and AmyKΔC492-587::OP. Catalytic residues are shown as red ball-and-stick models; residues forming hydrogen bonds or salt bridges with the catalytic residues are shown as green sticks. Salt bridges (dark dashed lines) and hydrogen bonds (orange dashed lines) were calculated with Discovery Studio 2.5. Panels A–D show AmyKΔC500-587, AmyKΔC492-587, AmyKΔC500-587::OP, and AmyKΔC492-587::OP, respectively.
Fig 6. Change in the distance between the catalytic residues (red sticks) and Met 247 (green sticks; distance shown as a dark dashed line) for AmyK after C-terminal truncation at residue 500 or 492 and oligopeptide fusion at the N-terminal region. Panels A–E show AmyK, AmyKΔC500-587, AmyKΔC492-587, AmyKΔC500-587::OP, and AmyKΔC492-587::OP, respectively.
Results and Discussion
The TIM-barrel fold is the most ubiquitous in nature (Wierenga 2001). All TIM-barrel active sites are defined by loop regions at the C-terminal end of the α/β barrel. Compared to the enzyme's core, active-site loops are hypermutable without affecting the integrity of the fold. Therefore, evolutionarily selected mutations within the active-site loops largely depend on the mechanistic requisites of each catalyzed reaction. This architecture is a classic example of a molecular scaffold upon which a wide variety of enzymes can be based (for an excellent review, see Nagano et al. 2002). In fact, TIM barrels are known to span five of the six Enzyme Commission (EC) classes. Despite global conservation of the active site at the C-terminal end of the barrel, the exact position and identity of the catalytic residue(s) are variable. Most TIM barrels are multimeric with large, well-defined protein-protein interfaces. Frequently, the canonical TIM-barrel fold is interrupted by inserted domains that expand the catalytic possibilities of the enzyme. The enolase superfamily is one such example: a globular α+β N-terminal domain provides several additional substrate-binding interactions. TIM-barrel sequences are not as conserved as one might expect based on their remarkably similar fold topologies (Nagano et al. 2002). In fact, the similarity between most interfamily TIM-barrel proteins is firmly within the "twilight zone" (Chung and Subbiah 1996). The observed sequence similarity, or dissimilarity for that matter, has led to a debate regarding TIM-barrel evolution. Whether TIM barrels resulted from convergent or divergent evolution remains an open question; however, the general consensus (Reardon and Farber 1995; Copley and Bork 2000) is that TIM barrels evolved divergently from some ancestral protein.
Triosephosphate isomerase (TIM) is the namesake of the TIM-barrel fold because it was the first protein in which the fold was observed (Wierenga 2001). TIM is a ubiquitous glycolytic enzyme that interconverts dihydroxyacetone phosphate (DHAP) and glyceraldehyde-3-phosphate. Glu169 (using a common sequence-alignment numbering scheme throughout) acts as a general base (Knowles 1991) that first abstracts a proton from the α-carbon of DHAP and later abstracts a proton from the α-hydroxyl group of the enediol intermediate (Fig. 1). Polarization of the (α-, β-) carbonyl group, followed by stabilization of the oxyanion in the (forward, reverse) reaction by an oxyanion hole (Kursula et al. 2001), makes breaking the C-H bond energetically feasible. The residues responsible for polarization in the forward and reverse reactions are Lys11 and His97, and Asn9 and His97, respectively. Subtle rearrangements of the catalytic residues and substrate along the reaction pathway have been identified (Kursula et al. 2001). Further conformational changes occur within loop 6 of the protein (Joseph et al. 1990; Wierenga et al. 1992; Rozovsky and McDermott 2001; Rozovsky et al. 2001): on substrate binding, the flexible "lid" (loop 6) closes over the active site. Despite these conformational changes, the reaction catalyzed by TIM is very fast; in fact, it approaches the diffusion limit (Stroppolo et al. 2001). Our previous report (La et al. 2005) demonstrated that all electrostatic (H-bond and salt-bridge) interactions between TIM and substrate, as well as the flexible "lid," are identified as phylogenetic motifs (PMs). Furthermore, the best-scoring PM covers the entire Prosite (Hulo et al. 2004) definition of the family. In this report we demonstrate that PMs also identify most of the conserved electrostatic interactions that maintain the catalytic pKa value of Glu169.
The TIM family is largely composed of three distinct subfamilies (Fig. 2A). Sequence fragments with two or three positions (from a window width of 5) that approximate the complete tree are identified as PMs. The remaining positions within those fragments are generally well conserved, which leads to the observed similarity between traditional and phylogenetic motifs; very few highly variable positions are observed within high-scoring windows. Sequence logos (Crooks et al. 2004), shown in Figure 3, highlight this point. In addition to the catalytic Glu169, two of the three oxyanion-hole residues correspond to PM residues; these three residues are 100% conserved within the multiple sequence alignment. The third oxyanion-hole residue (Asn9), which is better than 90% conserved in the sequence alignment, is immediately adjacent to the first PM (the structure alignment is shown in Fig. 4A). Because Asn9 is so well conserved, it contributes no new phylogenetic information, which is why, in this case, the conserved position occurs just outside the identified PM. In other instances, conserved positions frequently occur within PMs because they lie between tree-determinant positions.
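True PM scoring compares the phylogenetic tree built from each sequence window against the complete familial tree, which is beyond a short snippet. As a loose, simplified stand-in for the sliding-window idea (window width 5, as in the text), the sketch below scores windows of a toy alignment by plain per-column conservation; the alignment and all names are hypothetical:

```python
from collections import Counter

def column_conservation(column: str) -> float:
    """Fraction of sequences sharing the most common residue in a column."""
    counts = Counter(column)
    return counts.most_common(1)[0][1] / len(column)

def window_scores(alignment: list[str], width: int = 5) -> list[float]:
    """Mean column conservation for each sliding window over the alignment.
    NOTE: a crude proxy only; actual PM identification compares the tree
    built from each window against the complete familial tree."""
    ncols = len(alignment[0])
    cols = ["".join(seq[i] for seq in alignment) for i in range(ncols)]
    return [
        sum(map(column_conservation, cols[s:s + width])) / width
        for s in range(ncols - width + 1)
    ]

# Toy four-sequence alignment (hypothetical)
aln = ["ACDEFGHIK", "ACDEFGHLK", "ACDQFGHIK", "ACDEFGWIK"]
scores = window_scores(aln)
best = max(range(len(scores)), key=scores.__getitem__)  # best-scoring window start
```

A conservation-only score would flag the same fragments as traditional motifs; the phylogenetic comparison is what lets PMs pick up moderately variable, tree-determinant positions that pure conservation misses.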
As implied in Figure 1, a dynamic pKa of Glu169 is necessary for catalysis. At the beginning of the reaction cycle, Glu169 must be deprotonated (i.e., have a low pKa) in order to act as a general base. However, if the pKa is too low, it is unlikely to be able to accept a proton. Next, Glu169 must give up its proton to form the enediol intermediate. This acid/base cycle is repeated in the second half of the mechanism, finally resulting in G3P formation. Calculated pKa values for the 12 apo and seven substrate-bound structures are provided in Table 1. The Glu169 pKa values in the apo structures cluster into two groups (one from subfamily #1 and one from subfamilies #2 and #3). The pKa of the P. woesei catalytic residue is significantly higher (2.37) than that of the remaining structures (−1 to +1). Despite the quantitative difference in pKa values, the differences in percent deprotonated (calculated using the Henderson–Hasselbalch equation at the optimal growth pH) are marginal. For example, the P. woesei ortholog is calculated to be 99.98% deprotonated, whereas the S. cerevisiae ortholog is 100% deprotonated, meaning that a negatively charged Glu169 is ensured at the beginning of each reaction cycle.
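The percent-deprotonated figures follow directly from the Henderson–Hasselbalch relation. A minimal sketch, assuming an illustrative growth pH of 6.0 (the orthologs' actual optimal growth pH values are not given in this excerpt):

```python
def fraction_deprotonated(pka: float, ph: float) -> float:
    """Henderson-Hasselbalch: fraction of an ionizable group in its
    deprotonated form, [A-] / ([HA] + [A-]) = 1 / (1 + 10**(pKa - pH))."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

# Glu169 in the P. woesei ortholog (pKa ~2.37) at an assumed pH of 6.0:
glu = fraction_deprotonated(2.37, 6.0)   # ~0.9998, i.e. essentially fully ionized

# Transaldolase Lys128 (pKa ~11.65) at physiological pH 7.4: the deprotonated
# (nucleophilic) form is a vanishing fraction, as the TA discussion below notes.
lys = fraction_deprotonated(11.65, 7.4)  # ~6e-5
```

Even a pKa shift of several units barely changes the answer when pKa sits far from the ambient pH, which is why the subfamily pKa differences have a marginal effect on the fraction ionized.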
Eight PMs are identified within the TIM family (Table 2). Figure 4A provides the sequence alignment of the 12 TIM structures investigated; the identified PMs are highlighted. Titrating residues calculated to interact strongly (more than ±0.5 kcal/mol) and electrostatically with the catalytic Glu are also indicated. The electrostatic calculations identify four conserved stabilizing and four conserved destabilizing interactions. Seven of the eight interactions correspond to PM residues; the remaining residue is weakly stabilizing. The average interaction energies of the three stabilizing PM interactions are −0.90, −1.95, and −0.70 kcal/mol, whereas the average for the non-PM interaction is −0.76 kcal/mol. Because the catalytic Glu is exposed on the surface of the protein (making desolvation effects irrelevant), the residues electrostatically interacting with it are uniquely responsible for keeping its pKa so low. These interactions are generally conserved throughout all 12 structures. As expected, the most common and striking differences occur in the P. woesei ortholog, and the differences observed between the P. woesei structure and the others are consistent with differences seen in the complete familial alignment. Figure 5A provides a structural representation of these results.
Taken together, the PM and electrostatic results indicate subtle evolutionary variability within the catalytic residues of TIM. Regardless of the observed pKa differences, it is unlikely that catalysis and/or reaction rates are substantially affected, because Glu169 is essentially 100% deprotonated in all cases. However, such a low pKa conflicts with the later steps of the reaction: in the second and fourth steps, Glu169 must become protonated, and the calculated pKa values are so small that its ability to become protonated is negligible. This quandary is resolved upon substrate binding, which raises the Glu169 pKa in all 12 structures to values similar to those calculated for the seven substrate-bound structures, making protonation feasible. Less subfamily discrimination is observed in the calculated pKa values of the substrate-bound structures. Therefore, the low pKa within the apo structures, in spite of the observed variability, ensures that the catalytic Glu is completely deprotonated, which is necessary for the reaction to begin. On substrate binding, the pKa is raised such that protonation becomes energetically feasible. In the case of S. cerevisiae, the catalytic Glu goes from 100% deprotonated to 96.94% protonated. Although not explicitly modeled here, the conformational rearrangements within the active site (Kursula et al. 2001) are expected to continually shift the pKa values, as needed, throughout the reaction pathway.
The electrostatic interactions highlighted in Figures 4A and 5A are from the apo structures; quantitatively similar pairwise values are calculated for the substrate-bound structures. In fact, the correlation coefficient between corresponding Glu169:X pairs (where X is any other residue) in the apo and substrate-bound structures is greater than 0.9. Given the technical manner in which the multiple-site titration procedure calculates pKa values, this initially surprising result should actually be expected. First, a so-called intrinsic pKa is calculated that accounts for solvent accessibility and neutral dipoles (note: substrate binding does not appreciably affect Glu169 accessibility). Next, the apparent pKa is calculated from the intrinsic value plus all pairwise electrostatic effects. Therefore, the electrostatic interaction Φ between Glu169 and X is not influenced by the substrate, because the substrate is assumed to be neutral when Φ(Glu169:X) is calculated; Φ(Glu169:substrate) is thus the only significant effect leading to the large pKa shift of Glu169.
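The pairwise interaction energies quoted above (in kcal/mol) can be translated into pKa units via ΔpKa = ΔΔG / (2.303·RT). A small sketch at 298 K, assuming the quoted energies act as free-energy contributions to deprotonation (sign conventions differ between titration packages, so treat the signs as illustrative):

```python
import math

R_KCAL = 1.987e-3  # gas constant in kcal / (mol K)

def dpka_from_energy(ddg_kcal: float, temp_k: float = 298.15) -> float:
    """Convert an electrostatic interaction energy (kcal/mol) into the
    pKa shift it contributes: dpKa = ddG / (2.303 * R * T)."""
    return ddg_kcal / (math.log(10) * R_KCAL * temp_k)

# Stabilizing PM interaction energies quoted in the text (kcal/mol):
shifts = [dpka_from_energy(e) for e in (-0.90, -1.95, -0.70)]
total = sum(shifts)  # net pKa lowering contributed by these three terms
```

At 298 K the conversion factor 2.303·RT is about 1.36 kcal/mol per pKa unit, so the −1.95 kcal/mol interaction alone accounts for a shift of roughly 1.4 pKa units, consistent with a few conserved contacts being enough to hold the catalytic Glu well below its model pKa.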
All-to-all phylogenetic comparisons of TIM sequence windows reveal interesting results (Fig. 6). As expected, high-similarity regions correspond to PMs, only this time they are identified without recourse to complete familial tree comparisons. This result highlights intrafamily co-evolution within functional portions of the protein. Of course, a robust evolutionary description of any family should include both PM and non-PM regions. Nevertheless, the importance of conservation of function in protein family evolution is confirmed once more. Further, many conserved functionally important electrostatic interactions correspond to high-similarity regions. For example, the interactions mediating the pKa value of Glu169 are in the most phylogenetically similar regions. Many other conserved electrostatic interactions do not correspond to PMs. However, most of these interactions are structural, not catalytic. Note: In this study we define catalytic residues as the ones involved in the discussed electrostatic networks.
Enolase, also a ubiquitous glycolytic enzyme, catalyzes the penultimate reaction of the pathway: the reversible dehydration of 2-phosphoglycerate (2PG) to phosphoenolpyruvate. As discussed above, enolase is a multidomain TIM-barrel protein; both the TIM-barrel and N-terminal domains contribute active-site residues. Catalysis requires a general base to abstract the α-carbon proton from 2PG, and for the reaction to proceed, several divalent metal ions (generally Mg2+ or Mn2+) are required at the active site (Wold and Ballou 1957), presumably to stabilize the carbanion intermediate. The catalytic residues of enolase have not been unequivocally determined; however, the conserved Lys357 (again using alignment numbering) is a likely candidate (Babbitt et al. 1996). In the forward reaction, Glu216 is thought to provide a proton to the leaving hydroxyl group (Cohn et al. 1970). Several other conserved acids are also present at the active site. Experimental profiles for Mg2+ activation led Vinarov and Nowak (1998) to refute the Lys357/Glu216 catalytic-pair hypothesis; their results suggest that Lys408 and His164 are the catalytic pair. Whether or not His164 is a catalytic residue, its functional importance is confirmed by the H164A mutation, which retains only 0.01% of wild-type activity (Vinarov and Nowak 1999).
Figure 2B indicates that the enolase family can be roughly divided into two subfamilies (plus two outlier sequences). Two structures per subfamily are currently available. Unlike TIM, where conserved subfamily differences in the pKa value of Glu169 are calculated, no clustering of the electrostatic properties is observed. Quantitative pKa values are highly protein-dependent and cannot be grouped based on subfamily. Rather, a large electrostatic network is conserved in all enolase structures, which results in qualitatively similar pKa values across the whole family. As before, many of the conserved electrostatic interactions that make up this functional network correspond to PMs.
Twelve enolase PMs are identified, which is roughly proportional to the number found in TIM after normalizing for alignment length. Based on our previous and ongoing studies (results not included), we have determined this to be a weakly consistent trend. Future large-scale analyses will attempt to quantify this qualitative observation. All four of the active-site residues discussed above are predicted as PM residues. The pKa values of Glu211, Lys357, and Lys408 are drastically shifted from their aqueous values (Table 3). The extent of the shifts highlights their functional importance (Elcock 2001 ). The extreme pKa values of these residues are stabilized by a conserved electrostatic network. Figures 4B and 5B highlight several conserved interactions within the network. A majority of the enolase electrostatic network residues are also identified as PMs, again confirming the evolutionary importance of catalytic electrostatic networks.
In many cells, a constant supply of biosynthetic reducing power (in the form of NADPH) is provided by the pentose phosphate pathway (Wood 1986). The pathway can also provide ribose-5-phosphate for nucleic acid biosynthesis, along with several glycolytic intermediates. NADPH is produced in the first (oxidative) half of the pathway, whereas transaldolase (TA) and transketolase provide a reversible link to glycolysis (Jia et al. 1996) in the nonoxidative portion. TA catalyzes a three-carbon transfer from sedoheptulose-7-phosphate (S7P) to glyceraldehyde-3-phosphate; the products of this reaction are fructose-6-phosphate and erythrose-4-phosphate. The TA mechanism involves nucleophilic attack of a deprotonated Lys (Lys128 in our alignment) on the carbonyl carbon of S7P, forming a Schiff-base intermediate (Jia et al. 1997). The calculated pKa value of Lys128 (11.65) is approximately the model value; however, it indicates that the deprotonated form of Lys128 is negligible at physiological pH. Presumably, substrate binding lowers the pKa such that the nucleophilic attack can occur.
Unlike TIM and enolase, the catalytic Lys of TA is not predicted as a PM. Although it is not identified as a phylogenetic-motif residue, the evolutionary importance of the stretch of residues surrounding Lys128 is confirmed: Lys128 lies in the middle of a traditional motif, and in fact this stretch of residues around the catalytic residue is one of the two Prosite (Hulo et al. 2004) definitions for the family. While the active-site motif does possess some variability, the variability is too random for a PM to be identified. The second TA Prosite definition is entirely covered by a PM. Conversely, most of the residues electrostatically interacting with Lys128 are identified as PMs. Seven PMs are identified in TA (Table 1). Five significant electrostatic interactions are calculated between Lys128 and the remaining residues (see Fig. 5C); the calculated Φ(Lys128:X) values are listed in Table 4. Four of the five interactions correspond to PMs, including Asp34, which is one of the two most stabilizing interactions calculated. Unlike many PMs, the Asp34 PM is not also a traditional motif; the same is true for the less stabilizing, yet significant, Asp16 interaction. From these and other (data not shown) traditional-versus-phylogenetic motif comparisons, we conclude that future efforts to predict functional interactions from sequence alone (as opposed to our goal here of simply demonstrating correspondence) should employ a variety of sequence-feature identification strategies. These results are in line with the recent review by Jones and Thornton (2004), which classifies functional-site prediction strategies into two groups: one based on sequence conservation and the other based on feature identification.
Structural clustering of TIM-barrel phylogenetic motifs
Complementing the specific sequence/structure/function comparisons above, we also identified PMs in five other TIM-barrel protein families. In all cases, PMs are structurally clustered around the active-site region (Fig. 7). For the most part, PMs are solvent-exposed; however, in cases where substrates intercalate deeper into the core, PMs correspond to more buried regions as well. In all cases, defining PMs as functional is consistent with the structural information. Interestingly, many secondary-structure elements (versus random-coil loops) are included within the PMs. Many PMs (68%) partially span the C-terminal end of the β-strands, and a few even span the entire β-strand. PMs spanning α-helical regions are less common, yet still occur appreciably (36%). Only 13% of the identified PMs are confined to coil regions. Across the data set, PMs partially span all eight β-strands. PMs are most common in strands β1, β6, and β7, occurring four times each, and least common in strand β3, where they occur only twice.
The basic principle of protein evolution is conservation of function. Function is defined by the structural properties of a particular enzyme, which are encoded in its sequence. We present the evolutionary variability within the sequence/structure/function relationships of three important metabolic enzymes (triosephosphate isomerase, enolase, and transaldolase), all of which are TIM-barrel proteins. In the case of triosephosphate isomerase, quantitative differences in the pKa values of the catalytic Glu parallel subfamily differentiation. Further, phylogenetic motifs are shown to correspond to active-site electrostatic-network residues within all three families. Finally, PMs are shown to structurally cluster around the active sites of eight different TIM-barrel families.
Cytochrome P450 flexibility
Cytochrome P450 refers to a group of enzymes that catalyze the monooxygenation of various organic molecules in the overall reaction RH + O2 + 2e− + 2H+ → ROH + H2O. P450s are best known for their role in drug metabolism and detoxification. Humans have 57 functional P450 genes and 46 pseudogenes (http://drnelson.utmem.edu/human.genecount.html), and several of these, located in liver microsomes, are induced by a variety of foreign organic molecules. Hydroxylation of these substances by P450s renders these relatively insoluble organic compounds more soluble for easier elimination. In addition, P450s participate in the metabolism of sex hormones, vitamin D, and bile acids in mammals (1, 2); ecdysones in insects (3); and terpenes in plants (4). P450s also play an important role in microorganisms, both in the use of various organic compounds as carbon sources and in the production of important natural products such as antibiotics.
P450s range in size from 40 to 50 kDa and contain a single heme group. Oxygen binds to the heme iron, where the enzyme catalyzes cleavage of the O—O bond, leaving behind an iron-linked oxygen atom that serves as a potent oxidant. The substrate is held precisely in place such that the correct carbon atom lies close to the iron-linked O atom for regio- and stereoselective hydroxylation. In general, the P450s involved in producing important intermediates, as in steroid metabolism, are highly specific, whereas many of the drug-metabolizing P450s are not specific and are capable of hydroxylating a variety of diverse and unrelated compounds. Several P450 crystal structures are now known, and it was not too surprising, based on sequence alignments, that the overall fold is maintained in all P450s (Fig. 1). A particularly challenging problem, however, is to understand how P450s adapt to accommodate different substrates given the restriction that the same fold is maintained in all P450s. The first P450 structure (5) presented a puzzle as to how the substrate enters the active site, because the substrate-binding pocket was found to be inaccessible to the outside world. The prevailing view developed since then is that the F and G helices and the loop connecting them (Fig. 1) are flexible and undergo an open/close motion, which allows substrates to enter and products to leave. The new P450 2B4 structure by Scott et al. (6), published in this issue of PNAS, provides a dramatic example of the range of motions available to P450s. P450 2B4 is especially important: formerly known as P450 LM2 (liver microsome 2), it was the first microsomal P450 to be purified and well characterized, by Coon et al. (7), and has since served as a paradigm for detailed biophysical and biochemical investigations. Hence, a good deal is known about this P450, and, with the structure now in hand, many loose ends can be tied together.
Schematic diagram of P450 BM-3 in the partially open conformation. The B′, F, G, and I helices are labeled. The F and G helices and the F/G loop are highlighted in magenta. These helices slide over the surface of the I helix, which leads to an open/close motion of the access channel leading to the active site.
P450 2B4, like other membrane-bound P450s, presents a special challenge for crystallization. Scott et al. (6) followed the pioneering work from Eric Johnson's laboratory (8) on engineering P450s to be both soluble and monodisperse, which led to the first mammalian P450 structure, P450 2C5 (9). This involved removing the N-terminal region that helps anchor the P450 to the microsomal membrane and mutating key surface residues to further improve solubility and homogeneity. One difference in the P450 2B4 work is that no mutations in the F/G-helix region were required. A fortuitous and totally unexpected finding in the new P450 2B4 structure is how two P450 2B4 molecules form a dimer. Molecules related by crystallographic symmetry form a tight dimer, which, in itself, is not so unusual. However, a His residue between the F and G helices of molecule A penetrates into the active site of the crystallographically related molecule B, where it forms a His—Fe bond. The interaction is symmetric, because the His of molecule B also coordinates the iron of molecule A, requiring that the active site adopt a conformation in which the F and G helices are far more open than has been observed in previous structures (Fig. 2). One could argue that such an open conformation is an artifact of crystallization. However, Scott et al. (6) show that the same dimer very likely forms in solution. Moreover, the forces that hold protein molecules together in a crystal lattice are weak, so a protein cannot adopt a conformation in the crystal that is not energetically accessible in solution. In effect, the P450 2B4 structure has been "trapped" in a wide-open conformation and nicely illustrates the rather wide range of motions available to P450s. Before the P450 2B4 structure, most of what was known about motion in P450s derived from the bacterial fatty acid monooxygenase P450 BM-3 (10, 11) and, more recently, P450 2C5 (12).
The substrate-bound and substrate-free structures of P450 BM-3 show that the F and G helices slide as a unit over the I helix, which leads to the open/close motion. It might be anticipated that P450 2B4 will experience the same type of motion.
Space-filling diagrams of P450 BM-3 and P450 2B4 in the same orientation. The F and G helices are in magenta. The P450 BM-3 model is of the substrate-free structure, which adopts a partially open conformation that allows substrates to enter (10, 17). In P450 2B4, these same elements of structure are positioned very differently, leading to a wide-open active site. This difference illustrates the range of motions possible in P450s while retaining the overall P450-fold.
Such open/close motion, however, does not fully explain how a single P450 can accommodate a variety of different substrates. In addition to closure of the active site, there may also be a more dramatic reshaping of the active site. The best candidates for such changes include the F/G loop and B′ helix. In some P450 structures, the electron density for the F/G loop is visible only in the substrate-bound form (12), suggesting that the F/G loop may be able to shape itself around the substrate. The one clear example of such reshaping is in the thermophilic P450, CYP119. The structure of this P450 has been solved with different types of inhibitors bound to the heme iron (13, 14). A rather dramatic unfolding of the C terminus of the F helix and large movements of the F/G loop were observed, further supporting the view that the F/G loop and associated elements of structure can shape themselves around active site ligands. A P450 involved in steroid metabolism, CYP51, provides yet another example of an open active site (15). In this case, however, the open heme pocket is due more to a break in the I helix that opens up a new access channel to the active site. Although quite speculative, it is intriguing to consider the possibility that once a steroid substrate binds, the I helix "repairs" itself, resulting in a closed active site. The final example is a chemically modified form of the bacterial P450cam, in which a Cys residue near the active site was modified with a bulky ferrocene. In this case, the B′ helix completely rearranges, resulting in large motions (16). Although not physiologically relevant, the ferrocene work illustrates the flexibility available to a P450 precisely in the region where substrate entry is thought to occur. These various structures of open P450 conformations thus solve the early puzzle of how substrates enter. Large conformational changes due primarily to motions of the F and G helices as well as the F/G loop are responsible.
The current P450 2B4 structure very likely represents the extreme end of an open conformation of the drug-metabolizing microsomal P450s. An intriguing aspect of this open conformation is that the connecting segment between the F and G helices in P450 2B4 is ordered and helical. Assuming that the active site closes when a substrate binds, the F/G loop region might move into the active site, which, as in CYP119, may require unfolding of helical segments. We anxiously await a P450 2B4 structure with substrate bound.
Physics in a New Era: An Overview (2001)
Life, our environment, and the materials all around us are complex systems of interacting parts. Often these parts are themselves complex. Take the human body. At a very small scale it is just electrons and nuclei in atoms. Some of these are organized into larger molecules whose interactions produce still larger-scale body functions such as growth, movement, and reasoning. The study of complex systems presents daunting challenges, but rapid progress is now being made, bringing with it enormous benefits. Who can deny, for instance, the desirability of understanding crack formation in structural materials, forecasting catastrophes such as earthquakes, or treating plaque deposition in arteries?
A traditional approach in physics when trying to understand a system made up of many interacting parts is to break the system into smaller parts and study the behavior of each. While this has worked at times, it often fails because the interactions of the parts are far more complicated and important than the behavior of an individual part. An earthquake is a good example. At the microscopic level an earthquake is the breaking of bonds between molecules in a rock as it cracks. Theoretical and computational models of earthquake dynamics have shown that earthquake behavior is characterized by the interaction of many faults (cracks), resulting in avalanches of slippage. These slippages are largely insensitive to the details of the actual molecular bonds.
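The insensitivity of avalanche behavior to microscopic detail can be illustrated with a toy model. The sketch below is a minimal Bak-Tang-Wiesenfeld-style sandpile (an illustration of the avalanche idea only, not a model of any particular fault system): grains are dropped one at a time, and any site holding four grains topples, possibly triggering a chain of further topplings.

```python
import random

def sandpile_avalanches(size=20, grains=5000, seed=1):
    """Toy Bak-Tang-Wiesenfeld sandpile: drop grains one at a time;
    a site holding 4 or more grains 'topples', sending one grain to
    each neighbor. One drop can trigger an avalanche of topplings."""
    random.seed(seed)
    grid = [[0] * size for _ in range(size)]
    sizes = []                      # number of topplings per drop
    for _ in range(grains):
        i, j = random.randrange(size), random.randrange(size)
        grid[i][j] += 1
        topplings = 0
        unstable = [(i, j)] if grid[i][j] >= 4 else []
        while unstable:
            x, y = unstable.pop()
            if grid[x][y] < 4:
                continue
            grid[x][y] -= 4
            topplings += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < size and 0 <= ny < size:  # grains fall off the edge
                    grid[nx][ny] += 1
                    if grid[nx][ny] >= 4:
                        unstable.append((nx, ny))
        sizes.append(topplings)
    return sizes

sizes = sandpile_avalanches()
# Most drops cause no avalanche, but occasionally a single grain
# triggers a system-spanning cascade of slippage.
print(max(sizes), sum(1 for s in sizes if s == 0))
```

The broad distribution of avalanche sizes emerges from the toppling rule alone; changing the microscopic details of the "bonds" changes little, which is the point of the earthquake analogy.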
In the study of complex systems, physicists are expanding the boundaries of their discipline: They are joining forces with biologists to understand life, with geologists to explore Earth and the planets, and with engineers to study crack propagation. Much progress is being made in applying the quantitative methods and modeling techniques of physics to complex systems, and instruments developed by physicists are being used to better measure the behavior of those systems. Advances in computing are responsible for much of the progress. They enable large amounts of data to be collected, stored, analyzed, and visualized, and they have made possible numerical simulations using very sophisticated models.
In this chapter, the committee describes five areas of complex system research: the nonequilibrium behavior of matter; turbulence in fluids, plasmas, and gases; high-energy-density systems; physics in biology; and Earth and its surroundings.
NONEQUILIBRIUM BEHAVIOR OF MATTER
The most successful theory of complex systems is equilibrium statistical mechanics: the description of the state that many systems reach after waiting a long enough time. About a hundred years ago, the great American theoretical physicist Josiah Willard Gibbs formulated the first general statement of statistical mechanics. Embodied in his approach was the idea that sufficiently chaotic motions at the microscopic scale make the large-scale behavior of the system at, or even near, equilibrium particularly simple. Full-scale nonequilibrium physics, by contrast, is the study of general complex systems: systems that are drastically changed as they are squeezed, heated, or otherwise moved from their state of repose. In some of these nonequilibrium systems even the notion of temperature is not useful. Although no similarly general theory of nonequilibrium systems exists, recent research has shown that classes of such systems exhibit patterns of common ("universal") behavior, much as equilibrium systems do. These new theories are again finding simplicity in complexity.
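Gibbs's prescription can be made concrete in a few lines: at equilibrium, the probability of a microscopic state depends only on its energy and the temperature, regardless of the system's history. A minimal sketch for a two-level system (the energies and temperatures below are arbitrary illustrative values):

```python
import math

def boltzmann_weights(energies, kT):
    """Equilibrium (Gibbs) probabilities: p_i is proportional to exp(-E_i / kT)."""
    weights = [math.exp(-e / kT) for e in energies]
    z = sum(weights)                  # partition function
    return [w / z for w in weights]

# Two-level system: ground state at energy 0, excited state at 1 (arbitrary units).
for kT in (0.1, 1.0, 10.0):
    p = boltzmann_weights([0.0, 1.0], kT)
    print(f"kT={kT}: p_ground={p[0]:.3f}, p_excited={p[1]:.3f}")
```

At low temperature the system sits almost entirely in its ground state; at high temperature the two states become nearly equally likely. The simplicity lies in the fact that these two numbers are all there is to know about the large-scale state.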
Everyday matter is made of vast numbers of atoms, and its complexity is compounded for materials that are not in thermal equilibrium. The properties of a material depend, then, on its history as well as current conditions. Although materials in thermal equilibrium can display formidable complexity, nonequilibrium systems can behave in ways fundamentally different from equilibrium ones. Nature is filled with nonequilibrium systems: Looking out a window reveals much that is not in thermal equilibrium, including all living things. The glass itself has key properties, such as transparency and strength, that are very different from those of the same material (SiO2) in the crystalline state. Essentially all processes for manufacturing industrial materials involve nonequilibrium phenomena.
A few of the many examples of matter away from equilibrium are described below.
Granular Materials

Granular materials are large conglomerations of distinct macroscopic particles. They are everywhere in our daily lives, from cement to cat food to chalk, and are very important industrially: in the chemical industry, approximately one-half of the products and at least three-quarters of the raw materials are in granular form. Despite their seeming simplicity, granular materials are poorly understood. They have some properties in common with solids, liquids, and gases, yet they can behave differently from these familiar forms of matter. They can support weight, but they also flow like liquids; sand on a beach is a good example. A granular material can behave in a way reminiscent of gases in that it allows other gases to pass through it, such as in fluidized beds used industrially for catalysis. The wealth of phenomena that granular materials display cannot easily be understood in terms of the properties of the grains of which they are composed. The presence of friction combines with very-short-range intergrain forces to give rise to their distinctive properties.
Glasses: Nonequilibrium Solids
When a liquid is cooled slowly enough, it crystallizes. However, most liquids, if cooled quickly, will form a glass. The glass phase has a disordered structure superficially similar to a liquid, but unlike the liquid phase, its molecules are frozen in place. Glasses have important properties that can be quite different from those of a crystal made of the same molecules: properties, for example, that are vital to their use as optical fibers. Understanding the relationship between the properties of a glass, the atoms of which it is composed, and the means by which it was prepared is key for the development and exploitation of glasses with useful properties. Understanding the nature of the glass transition (whether it is a true phase transition or a mere slowing down) is of fundamental importance and will yield insight into other disordered systems such as neural networks.
Failure Modes: Fracture and Crack Propagation
Understanding the initiation and propagation of cracks is extremely important in contexts ranging from aerospace and architecture to earthquakes. The problem is very challenging because cracking involves the interplay of many length scales. A crack forms in response to a large-scale strain, and yet the crack initiation event itself occurs at very short distances.
In many crystals the stress needed to initiate a crack drops substantially if there is a single individual defect in the crystal lattice. This is one example of the general tendency of nonequilibrium systems to focus energy input from large scales onto small, sometimes even atomic scales. Other examples include turbulence, adhesion, and cavitation. Depending on material properties that are not well understood, a small crack can grow to a long crack, or it can blunt and stop growing. Recent advances in computation and visualization techniques are now providing important new insight into this interplay through numerical simulation.
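The way a flaw focuses large-scale stress onto the small scale of a crack tip can be quantified with the classical Inglis stress-concentration formula. The numbers below are illustrative assumptions, not values from the text:

```python
import math

def stress_at_crack_tip(applied_stress, half_length, tip_radius):
    """Inglis formula for an elliptical flaw: the stress at the tip is
    amplified by a factor 1 + 2*sqrt(a/rho), where a is the crack
    half-length and rho is the tip radius of curvature."""
    return applied_stress * (1 + 2 * math.sqrt(half_length / tip_radius))

# Illustrative numbers: a 1-micrometer flaw with a 1-nanometer tip radius
# amplifies a modest applied stress by a factor of ~64.
sigma_tip = stress_at_crack_tip(applied_stress=10e6, half_length=1e-6, tip_radius=1e-9)
print(f"{sigma_tip / 10e6:.0f}x amplification")
```

The sharper the tip, the larger the amplification, which is why a single small defect can drop the stress needed to initiate a crack so substantially.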
Foams and Emulsions
Foams and emulsions are mixtures of two different phases, often with a surfactant (for instance, a layer of soap) at the boundary of the phases. A foam can be thought of as a network of bubbles: It is mostly air and contains a relatively small amount of liquid. Foams are used in many applications, from firefighting to cooking to shaving to insulating. The properties of foams are quite different from those of either air or the liquid; in fact, foams are often stiffer than any of their constituents. Emulsions are more nearly equal combinations of two immiscible phases such as oil and water. For example, mayonnaise is an emulsion. The properties of mayonnaise are quite different from those of either the oil or the water of which it is almost entirely composed. With the widespread use of foams and emulsions, it is important to design such materials so as to optimize them for particular applications. Further progress will require a better understanding of the relationship between the properties of foams and emulsions and the properties of their constituents and more knowledge of other factors that determine the behavior of these complex materials.
Colloids

Colloids are formed when small particles are suspended in a liquid. The particles do not dissolve but are small enough for thermal motions to keep them suspended. Colloids are common and important in our lives: paint, milk, and ink are just three examples. In addition, colloids are pervasive in biological systems; blood, for instance, is a colloid.
A colloid combines the ease of using a liquid with the special properties of the solid suspended in it. Although it is often desirable to have a large number of particles in the suspension, particles in highly concentrated suspensions have a tendency to agglomerate and settle out of the suspension.
The interactions between the particles are complex because the motions of the particles and the fluid are coupled. When the colloidal particles themselves are very small, quantum effects become crucial in determining their properties, as in quantum dots or nanocrystals.
New x-ray- and neutron-scattering facilities and techniques are enabling the study of the relation between structure and properties of very complex systems on very small scales. Sophisticated light-scattering techniques enable new insight into the dynamics of flow and fluctuations at micron distances. Improved computational capabilities enable not just theoretical modeling of unprecedented faithfulness, complexity, and size, but also the analysis of images made by video microscopy, yielding new insight into the motion and interactions of colloidal particles.
The combination of advanced experimental probes, powerful computational resources, and new theoretical concepts is leading to a new level of understanding of nonequilibrium materials. This progress will continue and is enabling the development and design of new materials with exotic and useful properties.
TURBULENCE IN FLUIDS, PLASMAS, AND GASES
The intricate swirls of the flow of a stream or the bubbling surface of a pan of boiling water are familiar examples of turbulence, a challenging problem in modern physics. Despite their familiarity, many aspects of these phenomena remain unexplained. Turbulence is not limited to such everyday examples; it is present in many terrestrial and extraterrestrial systems, from the disks of matter spiraling into black holes to flow around turbine components and heart valves. Turbulence can occur in all kinds of matter, for example, molten rock, liquids, plasmas, and gases.
The understanding and control of turbulence can also have significant economic impact. For instance, the elimination or reduction of turbulence in flow around cars reduces fuel consumption, and the elimination of turbulence from fusion plasmas reduces the cost of future reactors (see sidebar "Suppressing Turbulence to Improve Fusion"). Perhaps the most dramatic phenomena involving turbulence are those with catastrophic dynamics, such as tornadoes in the atmosphere, substorms in the magnetosphere, and solar flares in the solar corona.
Although turbulence is everywhere in nature, it is hard to quantify and predict. There are four important reasons for this. First, turbulence is a complex interplay between order and chaos. For example, coherent eddies in a turbulent stream often persist for surprisingly long times. Second, turbulence often takes place over a vast range of distances. Third, the traditional mathematical techniques of physics are inappropriate for the highly nonlinear equations of turbulence. And fourth, turbulence is hard to measure experimentally.

SUPPRESSING TURBULENCE TO IMPROVE FUSION

To produce a practical fusion energy source on Earth, plasmas with temperatures hotter than the core of the Sun must be confined efficiently in a magnetic trap. In the 1990s, new methods of plugging the turbulent holes in these magnetic cages were discovered. The figure below shows two computer simulations. The leftmost simulation shows turbulent pressure fluctuations in a cross section of a tokamak, a popular type of magnetic trap. These turbulent eddies are like small whirlpools, carrying high-temperature plasma from the center toward the lower-temperature edge. The rightmost simulation shows that adding sheared rotational flows can suppress the turbulence by stretching and shearing apart the eddies, analogous to keeping honey on a fork by twirling the fork.

Several experimental fusion devices have demonstrated turbulence suppression by sheared rotational flows. This graph of data from the Tokamak Fusion Test Reactor at the Princeton Plasma Physics Laboratory, which achieved 10 MW of fusion power in 1994, shows how suppressing the turbulence can significantly boost the central plasma pressure and thus the fusion reaction rate.
Despite these formidable difficulties, research over the past two decades has made much progress on all four fronts: For example, ordered structures have been identified within chaotic turbulent systems. Such structures are often very specific to a given system. Turbulence in the solar corona, for example, appears to form extremely thin sheets of electric current, which may heat the solar corona. In fusion devices, plasma turbulence has been found theoretically and experimentally to generate large-scale sheared flows, which suppress turbulence and decrease thermal losses from the plasma. The patchy, intermittent nature of turbulence so evident in flowing streams is being understood in terms of fractal models that quantify patchiness. The universe itself is a turbulent medium; much of modern cosmological simulation seeks to understand how the observed structure arises from fluctuations in the early universe.
The recognition that turbulent activity takes place over a range of vastly different length scales is ancient. Leonardo da Vinci's drawings of turbulence in a stream show small eddies within big ones. In theories of fluid turbulence, the passing of energy from large to small scales explains how small scales arise when turbulence is excited by large-scale stirring, like a teaspoon stirring coffee in a cup. Small-scale activity can also arise from the formation of isolated singular structures: shock waves, vortex sheets, or current sheets.
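Kolmogorov's 1941 cascade theory makes this range of scales quantitative: energy injected by stirring at a large scale L with velocity U dissipates at the much smaller Kolmogorov scale eta = (nu^3/epsilon)^(1/4), where epsilon ~ U^3/L estimates the energy flux through the cascade. A sketch with illustrative values for water stirred in a cup (assumed numbers, not from the text):

```python
def kolmogorov_scale(nu, velocity, stirring_scale):
    """Dissipation (Kolmogorov) scale eta = (nu^3 / eps)^(1/4),
    with the energy injection rate estimated as eps ~ U^3 / L."""
    eps = velocity**3 / stirring_scale
    return (nu**3 / eps) ** 0.25

# Illustrative values: water (nu ~ 1e-6 m^2/s) stirred at ~0.1 m/s
# over ~5 cm, roughly a teaspoon stirring coffee in a cup.
nu, U, L = 1e-6, 0.1, 0.05
eta = kolmogorov_scale(nu, U, L)
print(f"eta ~ {eta * 1e6:.0f} micrometers; scale range L/eta ~ {L / eta:.0f}")
```

Even gentle stirring in a cup spans roughly three orders of magnitude between the largest eddies and the dissipation scale; geophysical and astrophysical flows span vastly more.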
The difficulty of measuring turbulence is due partly to the enormous range of length scales and the need to make measurements without disturbing the system. However, the structure and dynamics of many turbulent systems have been measured with greatly improved accuracy using new diagnostic tools. Light scattering is being used to probe turbulence in gases and liquids. Plasma turbulence is being measured by scattering microwaves off the turbulence and by measuring light emission from probing particles. Soft x-ray imaging of the Sun by satellites is revealing detailed structure in the corona. Measurements of the seismic oscillations of the Sun are probing its turbulent convection zone. Advances in computing hardware and software have made it possible to acquire, process, and visualize vast turbulence data sets (see sidebar "Earth's Dynamo").
EARTH'S DYNAMO

Motions of Earth's liquid core drive electrical currents that form the magnetic field. This field pokes through Earth's surface and orients compasses. The picture at the right shows the magnetic field lines in a computer simulation of this process. Where the line is blue it is pointing inward, and where it is gold it is pointing outward. The picture is two Earth diameters across.

Earth's magnetic field reverses direction on average every 200,000 years. The series of snapshots of a reversal shown below is taken from a computer simulation in which such reversals occur spontaneously. The snapshots are 300 years apart during one of two reversals in the 300,000 years simulated. In the top four pictures the red denotes outward field and the blue inward field on Earth's surface. The bottom four show the same thing beneath the crust, on top of the liquid core. Through computer simulation this complex phenomenon is being understood and quantified.

HIGH-ENERGY-DENSITY SYSTEMS

When stars explode, a burst of light suddenly appears in the sky. In a matter of minutes, the star has collapsed and released huge quantities of energy. These supernova explosions are examples of high-energy-density phenomena, an immense quantity of energy confined to a small space. Our experience with high energy densities has been limited; until recently, it was mainly the concern of astrophysicists and nuclear weapons experts. However, the advent of high-intensity lasers, pulsed power sources, and intense particle beams has opened up this area of physics to a larger community.
There are several major areas of research in high-energy-density physics. First, high-energy-density regimes relevant to astrophysics and cosmology are being reproduced and studied in the laboratory. Second, researchers are inventing new sources of x rays, gamma rays, and high-energy particles for applications in industry and medicine. Third, an understanding of high-energy-density physics may provide a route to economical fusion energy. And it is worth noting that the very highest energy densities in the universe since 1 µs after the Big Bang are being created in the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory.
Since 1980, the intensity of lasers has been increased by a factor of 10,000. The interaction of such intense lasers with matter produces highly relativistic electrons and a host of nonlinear plasma effects. For example, a laser beam can carve out a channel in a plasma, keeping itself focused over long distances. Other nonlinear effects are being harnessed to produce novel accelerators and radiation sources. These high-intensity lasers are also used to produce shocks and hydrodynamic instabilities similar to those expected in supernovae and other astrophysical situations.
High-energy-density physics often requires large facilities. The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory will study the compression of tiny pellets containing fusion fuel (deuterium and tritium) with a high-energy laser. These experiments are important for fusion energy and astrophysics. During compression, which lasts a few billionths of a second, the properties of highly compressed hydrogen are measured. The information gained is directly relevant to the behavior of the hydrogen in the center of stars. The NIF will also study the interaction of intense x rays and matter and the subsequent shock-wave generation. These topics are also being studied at the Sandia National Laboratories using x rays produced by dense plasma discharges. A commercial source of fusion energy by the compression of pellets will require developing an efficient driver to replace the NIF laser.
PHYSICS IN BIOLOGY
The intersection between physics and biology has blossomed in recent years. In many areas of biology, function depends so directly on physical
processes that these systems may be fruitfully approached from a physical viewpoint. For example, the brain sends electrical signals (i.e., nerve impulses), and these signaling systems are properly viewed as a standard physical system (that happens to be alive). In this section, the committee describes several important areas at the interface of physics and biology and discusses some future directions.
Modern structural biology, that branch of biophysics responsible for determining the structure of biological molecules down to the position of the individual constituent atoms, had its origins in the work on x-ray crystallography by von Laue (Nobel Prize in 1914) and W.H. Bragg and W.L. Bragg (Nobel Prize in 1915). This was followed by the discovery of nuclear magnetic resonance by Rabi and by Purcell and Bloch, resulting in Nobel Prizes in 1944 and 1952. With x-ray crystallography and modern computers, the structure of any molecule that forms crystals can be determined down to the position of all of the individual atoms. Through technological advances depending on microelectronics and computers, it is now possible to determine the detailed structure of many protein molecules in their natural state. The structures of hundreds of proteins are now known, and biophysicists are learning how these proteins do their jobs. For many enzymes, the structural rearrangements and specific chemical interactions that underlie the enzymatic activity are now known, and how molecular motors move is becoming understood.
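The physics behind x-ray crystallography fits in one line: Bragg's condition, n·lambda = 2d·sin(theta), relates the x-ray wavelength and the spacing of atomic planes in the crystal to the angles at which diffraction peaks appear. A quick sketch (the wavelength and spacing are illustrative textbook values):

```python
import math

def bragg_angle(wavelength, spacing, order=1):
    """Bragg condition n*lambda = 2*d*sin(theta); returns theta in degrees."""
    s = order * wavelength / (2 * spacing)
    if s > 1:
        raise ValueError("no diffraction: n*lambda exceeds 2d")
    return math.degrees(math.asin(s))

# Illustrative numbers: Cu K-alpha x rays (1.54 angstroms) diffracting
# from lattice planes spaced 3.0 angstroms apart.
theta = bragg_angle(1.54, 3.0)
print(f"first-order Bragg angle ~ {theta:.1f} degrees")
```

Measuring many such peak angles and intensities, and computing backward from them, is how atomic positions in a crystallized protein are reconstructed.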
The main challenges that remain are determining how information is transmitted from one region of a molecule to another (from a binding site on one surface of a protein to an enzymatic active site on the opposite surface, for example) and how the specific sequence of amino acids determines the final structure of the protein. Proteins fold consistently and rapidly, but how they search out their proper place in a vast space of possible structures is not understood. Understanding how proteins assume their correct shape remains a central problem in biophysics (see sidebar "Protein Folding").
Cells depend on molecular motors for a variety of functions, such as cell division, material transport, and motion of all or part of the organism (like the beating of the heart or the movement of a limb). At mid-century it was learned that muscles use specific proteins to generate force, but the multitude of motor types now known was only dimly suspected. Large families of motors have now been characterized, and the sequence of steps through which they exert force has been largely elucidated. In recent years researchers have measured the forces generated by a single motor cycle and the size of the elementary mechanical step. This research has required the invention of new kinds of equipment, such as optical tweezers, that permit precise manipulation of single bacterial cells as well as novel techniques for measuring movements that are too small to see with an ordinary microscope.

PROTEIN FOLDING

Proteins are giant molecules that play key roles as diverse as photosynthetic centers in green plants, light receptors in the eye, oxygen transporters in blood, motor molecules in muscle, ion valves in nerve conduction, and chemical catalysts (enzymes).

Their architecture is derived from information in the genes, but this information gives us the rather disappointing one-dimensional chain molecule seen at left in the figure. Only when the chain folds into its compact "native state" does its inherent structural motif appear, which enables guessing the protein's function or targeting it with a drug.

How can this folding process be understood? Experimentation gives only hints, so computation must play a key role. In the computer it is possible to calculate the trajectories of the constituent atoms of the protein molecule and thus simulate the folding process. This computation of the forces acting on an atom due to other protein atoms and to the atoms in the surrounding water must be repeated many times on the fast time scale of the twinkling atomic motions, which is 10^-15 seconds (1 fs). Since even the fastest-folding proteins take close to 100 µs (10^-4 s) to fold, all the forces must be computed as many as 10^11 times. The computational power necessary to do this is enormous. Even for a small protein, a single calculation of all the forces takes 10^10 computer operations, so to do this 10^11 times in, say, 10 days requires on the order of 10^15 computer operations per second (1 petaflop). This is 50 times the entire computational power of the world's population of supercomputers.

Even though the power of currently delivered supercomputers is little more than 1 teraflop (10^12 operations per second), computers with petaflop power for tasks such as protein folding are anticipated in the near future by exploiting the latest silicon technology together with new ideas in computer design. For example, IBM has a project named Blue Gene, which is dedicated to biological objectives such as protein folding and aims at building a petaflop computer within 5 years.
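The petaflop estimate in the protein-folding sidebar above can be checked with back-of-envelope arithmetic, using only the figures quoted there:

```python
# Back-of-envelope check of the folding arithmetic in the sidebar:
ops_per_force_eval = 1e10      # one evaluation of all interatomic forces
num_evals = 1e11               # femtosecond steps to cover ~100 microseconds
total_ops = ops_per_force_eval * num_evals          # 1e21 operations in total
seconds_in_10_days = 10 * 24 * 3600                 # 864,000 seconds
required_rate = total_ops / seconds_in_10_days
print(f"required: {required_rate:.1e} ops/s")       # ~1e15, i.e. ~1 petaflop
```

The quotient comes out slightly above 10^15 operations per second, confirming the sidebar's "on the order of 1 petaflop" figure.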
The challenge for the early part of the 21st century will be to develop a theory that relates force and motion generation with the detailed structure of the various types of motor proteins. The end result will be an understanding of how the heart pumps blood and how muscles contract.
Photosynthesis in plants and vision in animals are examples of photobiology, in which the energy associated with the absorption of light can be stored for later use (photosynthesis) or turned into a signal that can be sent to other places (photoreception). The parts of molecules responsible for absorbing light of particular colors, called chromophores, have long been known, as have the proteins responsible for the succeeding steps in the use of the absorbed energy. In recent years biophysicists have identified a sequence of well-defined stages these molecules pass through in using the light energy, some lasting only a billionth of a second. Biophysicists and chemists have learned much about the structure of these proteins, but not yet at the level of the positions of all atoms, except for one specialized bacterial protein, the bacterial photosynthetic reaction center (the work of Deisenhofer, Huber, and Michel was rewarded with the Nobel Prize in 1988).
In photobiology, the challenge is to determine the position of all atoms in the proteins that process energy from absorbed light and to learn how this energy is used by the protein. This will involve explaining, in structural terms, the identity and functional role of all of the intermediate stages that molecules pass through as they do their jobs.
The surface membranes that define the boundaries of all cells also prevent sodium, potassium, and calcium ions, essential for the life processes of a cell, from entering or leaving the cell. To allow these ions through in a controlled manner, a cell's membrane is provided with specialized proteins known as ion channels. These channels are essential to the function of all cell types but are particularly important for excitable tissues such as heart, muscle, and brain. All cells use metabolic energy (derived from the food we eat) to maintain sodium, potassium, and calcium ions at different concentrations inside and outside the cell. The special proteins that maintain these ionic concentration differences are called pumps. As a result of pump action, the potassium ion concentration is high inside cells and the sodium ion concentration is low. Various ion channels separately regulate the inflow and outflow of the different kinds of ions (letting sodium come in or permitting potassium to leave the cell). These channels working in concert produce electrical signals, such as the nerve impulses used for transmitting information rapidly from one place in the brain to another or from the brain to muscles.
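The voltage that such a concentration difference can generate is given by the Nernst equation, E = (RT/zF)·ln([out]/[in]). A sketch using typical textbook concentrations (illustrative assumptions, not values from the text):

```python
import math

R, F = 8.314, 96485.0   # gas constant (J/mol/K) and Faraday constant (C/mol)

def nernst(conc_out, conc_in, z=1, temp_k=310.0):
    """Nernst equilibrium potential (volts) for an ion of charge z,
    given its concentrations outside and inside the cell."""
    return (R * temp_k) / (z * F) * math.log(conc_out / conc_in)

# Typical textbook concentrations in mM (illustrative): potassium is
# concentrated inside the cell, sodium outside.
e_k = nernst(conc_out=5.0, conc_in=140.0)
e_na = nernst(conc_out=145.0, conc_in=12.0)
print(f"E_K  ~ {e_k * 1000:.0f} mV")    # roughly -89 mV
print(f"E_Na ~ {e_na * 1000:.0f} mV")   # roughly +67 mV
```

The opposite signs of the two potentials are what let a cell swing its membrane voltage between negative and positive values simply by opening potassium or sodium channels in turn.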
Ion channels exhibit two essential properties, gating and permeation. "Gating" refers to the process by which the channels control the flow of ions across the cell membrane. "Permeation" refers to the physical mechanisms involved in the movement of ions through a submicroscopic trough in the middle of the channel protein: A channel opens its gate to let ions move in and out of the cell. Permeation is a passive process (that is, ions move according to their concentration and voltage gradients) but a rather complicated one, in that channels exhibit great specificity for the ions permitted to permeate. Some channel types will allow the passage of sodium ions but exclude potassium and calcium ions, whereas others allow potassium or calcium ions to pass. The ability of an ion channel to determine the species of ion that can pass through is known as "selectivity."
At mid-century, it was realized that specific ion fluxes were responsible for the electrical activity of the nervous system. Soon, Hodgkin and Huxley provided a quantitative theory that described the ionic currents responsible for the nerve impulse (in 1963 they received the Nobel Prize for this work), but the nature of the hypothetical channels responsible remained mysterious, as did the physical basis for gating and selectivity.
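The Hodgkin-Huxley theory itself is compact enough to sketch: four coupled differential equations for the membrane voltage and three ion-channel gating variables. The version below uses the commonly quoted squid-axon parameter values, integrated crudely with forward Euler; the stimulus amplitude is an arbitrary illustrative choice:

```python
import math

def hh_simulate(i_ext=10.0, t_max=50.0, dt=0.01):
    """Minimal Hodgkin-Huxley squid-axon model integrated with forward
    Euler. Returns the membrane voltage trace in mV (time step dt in ms)."""
    c_m = 1.0                                   # membrane capacitance, uF/cm^2
    g_na, g_k, g_l = 120.0, 36.0, 0.3           # peak conductances, mS/cm^2
    e_na, e_k, e_l = 50.0, -77.0, -54.387       # reversal potentials, mV

    # Voltage-dependent opening/closing rates for the m, h, n gates:
    def a_m(v): return 0.1 * (v + 40) / (1 - math.exp(-(v + 40) / 10))
    def b_m(v): return 4.0 * math.exp(-(v + 65) / 18)
    def a_h(v): return 0.07 * math.exp(-(v + 65) / 20)
    def b_h(v): return 1.0 / (1 + math.exp(-(v + 35) / 10))
    def a_n(v): return 0.01 * (v + 55) / (1 - math.exp(-(v + 55) / 10))
    def b_n(v): return 0.125 * math.exp(-(v + 65) / 80)

    v, m, h, n = -65.0, 0.05, 0.6, 0.32         # approximate resting state
    trace = []
    for _ in range(int(t_max / dt)):
        i_na = g_na * m**3 * h * (v - e_na)     # sodium current
        i_k = g_k * n**4 * (v - e_k)            # potassium current
        i_l = g_l * (v - e_l)                   # leak current
        v += dt * (i_ext - i_na - i_k - i_l) / c_m
        m += dt * (a_m(v) * (1 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1 - n) - b_n(v) * n)
        trace.append(v)
    return trace

v = hh_simulate()
print(f"peak {max(v):.0f} mV, trough {min(v):.0f} mV")  # spikes overshoot 0 mV
```

With a sustained stimulus the model fires a train of action potentials: sodium channels drive the upstroke, then slower potassium channels and sodium inactivation restore the resting voltage, exactly the interplay of separately regulated inflow and outflow described above.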
In the past half-century, ion channels have progressed from being hypothetical entities to well-described families of proteins. The currents that flow through single copies of these proteins are routinely recorded (in 1991, Neher and Sakmann received the Nobel Prize for this achievement), and such studies of individual molecular properties, combined with modern molecular biological methods, have provided a wealth of information about
which parts of the protein are responsible for which properties. Very recently, the first crystal structure of the potassium channel was solved, with its precise structure having been determined down to the positions of its constituent atoms.
The physics underlying gating, permeation, and selectivity is well understood, and established theories are available for parts of these processes. The main challenge for channel biophysicists now is to connect the physical structure of ion channels to their function. A specific physical theory is needed that explains how, given the structure of the protein, selectivity, permeation, and gating arise. Because the channels are rather complicated, providing a complete explanation of how channels work will be difficult and must await more detailed information about the structure of more types of channels. This is the goal of channel biophysics in the early part of the 21st century. The end result will be an understanding, in atomic detail, of how neurons produce the nerve impulses that carry information in our own very complex computer, the brain.
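As one concrete example of how physical theory connects structure to function, voltage-dependent gating is often approximated by a two-state Boltzmann model, in which the fraction of open channels follows from the energy difference between open and closed conformations. The parameter values below are illustrative assumptions, not measurements of any particular channel.

```python
import math

def open_probability(v_mv, v_half=-20.0, slope_mv=8.0):
    """Fraction of channels open at membrane voltage v_mv (mV) in a
    two-state Boltzmann gating model. v_half is the half-activation
    voltage and slope_mv sets the steepness; both are illustrative."""
    return 1.0 / (1.0 + math.exp((v_half - v_mv) / slope_mv))

# Depolarizing the membrane opens the gate; hyperpolarizing closes it.
p_rest = open_probability(-80.0)  # near resting potential: almost all closed
p_peak = open_probability(40.0)   # strongly depolarized: almost all open
```

The steep, switch-like dependence on voltage is what lets a population of channels act as the fast "gates" of the nerve impulse.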
Theoretical Biology and Bioinformatics
Theoretical approaches developed in physics, for example those describing abrupt changes in states of matter (phase transitions), are helping to illuminate the workings of biological systems. Three areas of biology in which this physics-based approach is being applied fruitfully are computation by the brain, complex systems in which patterns arise from the cooperation of many small component parts, and bioinformatics. Although neurobiologists have made dramatic progress in understanding the organization of the brain and its function, further advances will likely require a more central role for theory. To date, most progress has been made by guessing, based on experiments, how parts of the brain work. But as the more complicated and abstract functions of the brain are studied, the chances of proceeding successfully without more theory diminish rapidly. Because physics offers examples of advanced and successful uses of theory, the methods used in physics may be most appropriate for understanding brain structure and function.
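A minimal example of the phase-transition physics mentioned above is the two-dimensional Ising model, a textbook statistical-physics system in which many small interacting parts (spins) cooperate to produce an abrupt change of state. The sketch below uses the standard Metropolis algorithm; the grid size, temperatures, and sweep count are arbitrary choices for illustration.

```python
import math
import random

def ising_magnetization(n=16, temp=1.0, sweeps=400, seed=1):
    """Metropolis simulation of an n-by-n 2D Ising model with periodic
    boundaries, started fully ordered. Returns the final magnetization
    per spin. The model orders below its critical temperature
    (about 2.27 in these units) and disorders above it."""
    rng = random.Random(seed)
    s = [[1] * n for _ in range(n)]
    for _ in range(sweeps * n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        nb = s[(i + 1) % n][j] + s[(i - 1) % n][j] \
           + s[i][(j + 1) % n] + s[i][(j - 1) % n]
        d_energy = 2 * s[i][j] * nb  # energy cost of flipping this spin
        if d_energy <= 0 or rng.random() < math.exp(-d_energy / temp):
            s[i][j] = -s[i][j]
    return abs(sum(map(sum, s))) / (n * n)
```

Running it at low versus high temperature shows the qualitative transition: the cooperative, ordered state survives at low temperature and melts away above the critical point, the same kind of collective behavior that these physics-inspired theories look for in biological networks.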
The life of all cells depends on interactions among the genes and enzymes that form vastly complicated genetic and biochemical networks. As biological knowledge increases, particularly as the entire set of genes that constitutes the blueprint for organisms, including humans, becomes known, the most important problems will involve treating these complex networks. Modern biology has provided means to measure experimentally
the action of many (in some cases all) of an organism's genes in response to environmental changes, and new theories will be required for interpreting such experiments.
One of biology's most fundamental problems is explaining how organisms develop: How do the roughly 10^5 genes that specify a human generate the correct organs with the right shape and properties? This question of pattern specification by human genes has strong similarities with many questions asked by physicists, and physics offers good models for approaching it and similar questions in biological systems.
The information explosion in biology produces vastly complex problems of data management and the extraction of useful information from giant databases. Methods from statistical physics have been helpful in approaching these problems and should become even more important as the volume of data increases.
EARTH AND ITS SURROUNDINGS
As the impact of humans on the environment increases, there is a greater need to understand Earth, its oceans and atmosphere, and the space around it. The economic and societal consequences of natural disasters and environmental change have been greatly magnified by the complexity of modern life. For this reason, the quest for predictive capability in environmental science has become a huge multidisciplinary scientific effort, in which physics plays an important role. In this section, the committee discusses three important areas where new ideas are leading to dramatic progress: earthquake dynamics, coastal upwellings, and magnetic substorms.
The 1994 Northridge earthquake (magnitude 6.8) in southern California was the single most expensive natural disaster in the history of the United States. Losses of $25 billion are estimated for damaged structures and their contents alone. Earthquakes are becoming more expensive each year, not because they are happening more often but because of the increased population density in our cities. An earthquake on the Wasatch fault in Utah would have had little effect on the agrarian society of 1901; in 2001, a bustling Salt Lake City sprawls along the same fault scarp that raised the mountains that overlook it.
Earthquakes happen when the stress on a fault becomes large enough to cause it to slip. Releasing the stress on one fault often increases the stress on
another fault, causing it to slip and creating an avalanche effect. Recent studies of computer models of coupled fault systems under stress have yielded important insights. A large class of these models produces a behavior called self-organized criticality, in which the model's fault system sits close to a point of only marginal stability. The stress is then released in avalanche events. A remarkably large number of models yield the same statistical behavior despite differences in microscopic dynamics. These models correctly predict that the frequency of earthquakes in a given energy band is proportional to the energy raised to a power, the scaling observed empirically as the Gutenberg-Richter law. This, in turn, allows the estimation of seismic hazard probabilities.
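Self-organized criticality is commonly illustrated with the Bak-Tang-Wiesenfeld sandpile, a toy model (not a model of any specific fault network) that drives itself to a marginally stable state and then releases stress in avalanches of widely varying sizes, much as described above.

```python
import random

def sandpile_avalanches(n=20, grains=3000, seed=0):
    """Bak-Tang-Wiesenfeld sandpile on an n-by-n grid: drop grains at
    random sites; any site holding 4 or more grains topples, sending
    one grain to each neighbor (grains fall off the open edges).
    Returns the avalanche size (number of topplings) triggered by
    each dropped grain."""
    rng = random.Random(seed)
    z = [[0] * n for _ in range(n)]
    sizes = []
    for _ in range(grains):
        i, j = rng.randrange(n), rng.randrange(n)
        z[i][j] += 1
        topples = 0
        unstable = [(i, j)] if z[i][j] >= 4 else []
        while unstable:
            x, y = unstable.pop()
            if z[x][y] < 4:
                continue  # may have already toppled via a duplicate entry
            z[x][y] -= 4
            topples += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                u, v = x + dx, y + dy
                if 0 <= u < n and 0 <= v < n:
                    z[u][v] += 1
                    if z[u][v] >= 4:
                        unstable.append((u, v))
        sizes.append(topples)
    return sizes
```

After an initial loading period, the same slow, uniform driving produces both tiny slips and system-spanning avalanches, and the distribution of avalanche sizes follows a power law, the hallmark of self-organized criticality.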
Studies of natural earthquakes provide the opportunity to observe active faults and to understand how they work. The Navy's NAVSTAR GPS has provided a relatively inexpensive and precise method for measuring the movement of the ground around active faults. An exciting new development in recent years has been the use of synthetic aperture radar (SAR) to produce images (SAR interferograms) of the displacements associated with earthquake rupture.
The largest earthquakes on Earth occur within the boundary zones that separate the global system of moving tectonic plates. On most continents, plate boundaries are broad zones where Earth is deformed. Several active faults of varying orientations typically absorb the motion of any given region, raising mountains and opening basins as they move. Modeling suggests that the lower continental crust flows, slowly deforming its shape, while sandwiched between a brittle, earthquake-prone upper crust and a stiff, yet still moving, mantle. This ductile flow of the lower crust is thought to diffuse stress from the mantle portion of the tectonic plate to the active faults of the upper crust. The physical factors that govern this stress diffusion are still poorly understood (see sidebar "Pinatubo and the Challenge of Eruption Prediction").
Environmental phenomena involve the dynamic interaction between biological, chemical, and physical systems. The coastal upwellings off the California coast provide a powerful example of this interaction. Upwellings are cold jets of nutrient-rich water that rise to the surface at western-facing coasts. These sites yield a disproportionately large amount of plankton and deposit large amounts of organic carbon into the sediments. Instant satellite snapshots of quantities such as the ocean temperature and the chlorophyll content (indicating biological species) provide important global data for
PINATUBO AND THE CHALLENGE OF ERUPTION PREDICTION
Volcanoes consist of a complex fluid (silicate melt, crystals, and expansive volatile phases), a fractured "pressure vessel" of crustal rock, and a conduit leading to a vent at Earth's surface. Historically, eruption forecasting relied heavily on empirical recognition of familiar precursors. Sometimes, precursory changes escalate exponentially as the pressure vessel fails and magma rises. But more often than not, progress toward an eruption is halting, over months or years, and so variable that forecast windows are wide and confidence is low.
In unpopulated areas, volcanologists can refine forecasts as often as they like without the need to be correct all the time. But the fertile land around volcanoes draws large populations, and most episodes of volcanic unrest challenge volcanologists to be specific and right the first time. Evacuations are costly and unpopular, and false alarms breed cynicism that may preclude a second evacuation. In 1991, the need to be right at the Pinatubo volcano in the Philippines was intense, yet the forecast was a product of hastily gathered, sparse data, various working hypotheses, and crossed fingers. Fortunately for evacuees and volcanologists alike, the volcano behaved as forecast.
As populations and expectations rise, uncertainties must be trimmed. Today's eruption forecasts use pattern recognition and logical, sometimes semiquantitative, analysis of the processes that underlie the unrest, and they are growing more precise and accurate. Tomorrow's sensors may reveal whole new parameters, time scales, and spatial scales of volcanic behavior, and tomorrow's tools for pattern recognition will mine a database of episodes of unrest that is now under development. And, increasingly, forecasts will use numerical modeling to narrow the explanations and projections of unrest. Volcanoes are one of nature's many complex systems, defying us to integrate physics, chemistry, and geology into ever-more-precise forecasts. Millions of lives and billions of dollars are at stake.
models that have to describe the physical (turbulent) mixing of the upwelling water with the coastal surface water, the evolution of the nutrients and other chemicals, and the population dynamics of the various biological species. Since the biology is too detailed to model entirely, models must abstract the important features. The rates of many of the processes are essentially determined by the mixing and spreading of the jet, which involves detailed fluid mechanics. Research groups have begun to understand the yearly variations and the sensitivities of these coastal ecosystems.
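The population-dynamics component of such ecosystem models can be sketched, in highly simplified form, with the classic Lotka-Volterra predator-prey equations. Real coastal models couple many species to turbulent mixing; the two-species system and rate constants below are illustrative assumptions, not fitted to any data.

```python
def predator_prey(steps=20000, dt=0.001):
    """Forward-Euler integration of the Lotka-Volterra equations,
    a toy stand-in for plankton population dynamics. Returns the
    trajectory of (prey, predator) densities in arbitrary units."""
    x, y = 1.0, 0.5                  # initial prey and predator densities
    a, b, c, d = 1.0, 1.0, 1.0, 1.0  # illustrative rate constants
    history = []
    for _ in range(steps):
        dx = (a - b * y) * x  # prey grow, and are eaten by predators
        dy = (c * x - d) * y  # predators grow by eating prey, then die off
        x, y = x + dx * dt, y + dy * dt
        history.append((x, y))
    return history
```

Even this two-variable caricature produces sustained boom-and-bust cycles, hinting at why the yearly variations of a real coastal ecosystem, with many species coupled to fluid mixing, are so sensitive and hard to predict.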
The solar wind is a stream of hot plasma originating in the fiery corona of the Sun and blowing out past the planets. This wind stretches Earth's magnetic field into a long tail pointing away from the Sun. Parts of the magnetic field in the tail change direction sporadically, connect to other parts, and spring back toward Earth. This energizes the electrons and produces spectacular auroras (the Northern Lights is one example). These magnetic substorms, as they are called, can severely damage communication satellites. Measurements of the fields and particles at many places in the magnetosphere have begun to reveal the intricate dynamics of a substorm. The process at the heart of a substorm, sporadic change in Earth's magnetic field, is a long-standing problem in plasma physics. Recent advances in understanding this phenomenon have come from laboratory experiments, in situ observation, computer simulation, and theory. It appears that turbulent motion of the electrons facilitates the rapid changes in the magnetic field&mdashbut just how this occurs is not yet fully understood.
Subaru-HiCIAO spots young stars surreptitiously devouring their birth clouds
Figure 1: Circumstellar structures revealed by Subaru-HiCIAO. The scale bars are shown in AU (astronomical units); one AU is the average distance from the Sun to Earth. The gas and dust surrounding baby stars (their food) are significantly more extended than our solar system. Here we show the first observations of such complex structures around active young stars. Credit: Science Advances, H. B. Liu.
An international team led by researchers at the Academia Sinica Institute of Astronomy and Astrophysics (ASIAA) has used a new infrared imaging technique to reveal dramatic moments in star and planet formation. These seem to occur when surrounding material falls toward very active baby stars, which then feed voraciously on it even as they remain hidden inside their birth clouds. The team used the HiCIAO (High Contrast Instrument for the Subaru Next-Generation Adaptive Optics) camera on the Subaru 8-meter Telescope in Hawaii to observe a set of newborn stars. The results of their work shed new light on our understanding of how stars and planets are born (Figure 1).
The Process of Star Birth
Stars are born when giant clouds of dust and gas collapse under the pull of their own gravity. Planets are believed to be born at nearly the same time as their stars in the same disk of material. However, there are still a number of mysteries about the detailed physical processes that occur as stars and planets form (Figure 2).
The giant collections of dust and gas where stars form are called "molecular clouds" because they are largely made up of molecules of hydrogen and other gases. Over time, gravity in the densest regions of these clouds gathers in the surrounding gas and dust, via a process called "accretion". It is often assumed that this process is smooth and continuous (Figure 2). However, this steady infall explains only a small fraction of the final mass of each star that is born in the cloud. Astronomers are still working to understand when and how the remaining material is gathered in during the process of star and planet birth.
A few stars are known to be associated with a sudden and violent "feeding" frenzy from inside their stellar nursery. As they gorge on the surrounding material, their visible light increases very suddenly and dramatically, by a factor of about a hundred. These sudden flareups in brightness are called "FU Orionis outbursts" because they were first discovered toward the star FU Orionis.
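On astronomers' logarithmic magnitude scale, that factor-of-a-hundred brightening corresponds to exactly 5 magnitudes, since the scale is defined so that a magnitude change is 2.5 times the base-10 logarithm of the flux ratio:

```python
import math

def delta_magnitude(flux_ratio):
    """Convert a brightness (flux) ratio into an astronomical
    magnitude change: dm = 2.5 * log10(ratio)."""
    return 2.5 * math.log10(flux_ratio)

# A factor-of-100 FU Ori brightening corresponds to 5 magnitudes.
dm = delta_magnitude(100.0)  # 5.0
```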
Not many stars are found to be associated with such outbursts—only a dozen out of thousands. However, astronomers speculate that all baby stars may experience such outbursts as part of their growth. The reason we only see FU Ori outbursts toward a few newborn stars is simply because they are relatively quiet most of the time.
One key question about this mysterious facet of starbirth is "What are the detailed physical mechanisms of these outbursts?" The answer lies in the region surrounding the star. Astronomers know the optical outbursts are associated with a disk of material close to the star, called the "accretion disk". It becomes significantly brighter when the disk is heated to temperatures similar to those of lava flows here on Earth (around 700 to 1200 C, or 1292 to 2192 F), like those from the Kilauea volcano on the island of Hawaii. Several processes have been proposed as triggers for such outbursts, and astronomers have been investigating them over the past few decades.
Finding a Mechanism for FU Ori Outbursts
An international team led by Drs. Hauyu Baobab Liu and Hiro Takami, two researchers at ASIAA, used a novel imaging technique available at the Subaru Telescope to tackle this issue. The technique, imaging polarimetry with coronagraphy, has tremendous advantages for imaging the environments in the disks. In particular, its high angular resolution and sensitivity allow astronomers to "see" the light from the disk more easily. How does this work?
Figure 3: Images made from computer simulations based on one theory for violent growth of a star. (Left) Simulations of the motion of circumstellar materials falling onto a baby star. (Middle and right) Models of how we would observe the structure in scattered light, seen from two different angles. Credit: Science Advances, H. B. Liu
Circumstellar material is a mixture of gas and dust. The amount of dust is significantly smaller than the amount of gas in the cloud, so the dust has little effect on the motion of the material. However, dust particles scatter (reflect) light from the central star, illuminating all the surrounding material. The HiCIAO camera mounted on the Subaru 8.2-meter telescope, one of the largest optical and near-infrared (NIR) telescopes in the world, is well suited to observing this dim circumstellar light. It successfully allowed the team to observe four stars experiencing FU Ori outbursts.
Details of Four FU Ori Outbursts
The team's target stars are located 1,500-3,500 light-years from our solar system. The images of these outbursting newborns were surprising and fascinating, unlike anything previously observed around young stars (Figure 1). Three have unusual tails. One shows an "arm", a feature created by the motion of material around the star. Another shows odd spiky features, which may result from an optical outburst blowing away circumstellar gas and dust. None of them match the picture of steady growth shown in Figure 2. Rather, they show a messy and chaotic environment, much like a human baby messily eating its food.
To understand the structures observed around these newborn stars, theorists on the team extensively studied one of several mechanisms proposed to explain FU Ori outbursts. It suggests that gravity in circumstellar gas and dust clouds creates complicated structures that look like cream stirred into coffee (Figure 3, left). These oddly shaped collections of material fall onto the star at irregular intervals. The team also conducted further computer simulations for scattered light from the outburst. Although more simulations are required to match the simulations to the observed images, these images show that this is a promising explanation for the nature of FU Ori outbursts (Figure 3).
Studying these structures may also reveal how some planetary systems are born. Astronomers know some exoplanets (planets around other stars) are found extremely far away from their central stars. Some orbit at more than a thousand times the distance between the Sun and Earth, far beyond the orbit of Neptune (which lies about 30 times that distance out). These distances are also much larger than the orbits explained by standard theories of planet formation. Simulations of complicated circumstellar structures like the ones seen in the HiCIAO views also predict that some dense clumps in the material may become gas giant planets. This would naturally explain the presence of exoplanets with such large orbits.
In spite of these exciting new results, there is still a great deal more work to do to understand the mechanisms of star and planet birth. More detailed comparisons between observation and theory are needed. Further observations, particularly with the Atacama Large Millimeter/submillimeter Array, will take our gaze more deeply into circumstellar gas and dust clouds. The array allows observations of the surrounding dust and gas with unprecedented angular resolution and sensitivity. Astronomers are also planning to construct telescopes significantly larger than Subaru in the coming decades, including the Thirty Meter Telescope (TMT) and the European Extremely Large Telescope. These should allow detailed studies of regions very close to newborn stars.
Continuing transmission of carbapenemase-producing Enterobacteriaceae (CPE) in Italy and a number of other European countries remains a cause for grave concern. This position paper provides background on the problem and a brief overview of current guidance documents, with particular attention to the role of active surveillance. While it is clear that active surveillance for CPE carriage remains an important part of the overall "bundle approach" to containment, evidence is scanty regarding the precise role of screening, the types of patients and clinical units to be screened, and the method of screening, including the role for REU-CMA. While we have attempted to provide some pragmatic advice on active surveillance in a high-burden country, we also wish to highlight the importance of further research into this topic.