Why is the null hypothesis of trait evolution Brownian motion?



Many models of continuous trait evolution assume that traits evolve according to Brownian motion. What is the biological or physical basis for this choice?

I realize there are models that do not assume Brownian motion, but what I am interested in asking is why the null model is so often chosen to be Brownian motion.


I think the simple answer to this question is that the present comparative methodology was largely established by Felsenstein (1985, American Naturalist). For mathematical convenience, he suggested Brownian motion as a null hypothesis, because "… the variance of the distribution of change of a branch is proportional to the length of time of the branch…", and then "… it is easy to see that the differences between pairs of tips… must be independent." Also: "… after one unit of time, the contrast [between a pair of tips] has expectation 0 and [easily defined] variance… "
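To make those quoted properties concrete, here is a minimal simulation sketch (the rate parameter and number of replicates below are arbitrary choices for illustration): under Brownian motion the change along a branch is drawn from a normal distribution whose variance is proportional to the branch length, so the contrast between two tips that diverged one unit of time ago has expectation 0 and an easily defined variance.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 0.5        # Brownian motion rate (variance per unit time); value chosen for illustration
n_reps = 100_000    # number of simulated tip pairs

def bm_change(branch_length):
    # Under Brownian motion, the trait change along a branch is
    # Normal(0, sigma2 * branch_length): variance proportional to time.
    return rng.normal(0.0, np.sqrt(sigma2 * branch_length), n_reps)

# Two sister tips that diverged from a common ancestor one time unit ago.
tip_a = bm_change(1.0)
tip_b = bm_change(1.0)
contrast = tip_a - tip_b

print(round(contrast.mean(), 3))              # ~0: the contrast has expectation zero
print(round(contrast.var(), 3), 2 * sigma2)   # its variance is sigma2 * (t_a + t_b)
```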

He explicitly discusses whether Brownian motion is a reasonable model in the section "What if we lack an acceptable statistical model of character change?"

I would suggest reading that paper if you are interested in the details of how Brownian motion is applied to phylogenies.

A recent historical perspective on this influential paper can be found here.

For a more extensive bibliography and further details, see here.


Just came across a chapter by Felsenstein on phylogenetic inference with quantitative characters. In it, he states this biological justification for using Brownian motion:

A quantitative trait that has genetic variation controlled by a single locus will change as the gene frequencies at the locus undergo genetic drift… Brownian motion is a reasonable approximation to change of a quantitative character by genetic drift, provided that… [additive genetic variance]… remains approximately constant.

Felsenstein also mentions that Cavalli-Sforza and Edwards (1967) consider that varying selection at a locus can be approximated by Brownian motion.

So I suppose that's another slant on why Brownian motion is used.
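As a rough illustration of the drift argument (all parameter values below are invented, and real quantitative traits involve many loci), a single additive locus drifting under a Wright-Fisher model yields a population mean phenotype whose among-replicate variance grows roughly linearly with time at rate V_A/N, which is the Brownian-motion behaviour Felsenstein describes, so long as the additive genetic variance stays approximately constant.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1000      # diploid effective population size (assumed)
a = 1.0       # additive effect of one allele (assumed)
p0 = 0.5      # starting allele frequency
gens = 200
reps = 5000   # independent replicate populations

p = np.full(reps, p0)
for _ in range(gens):
    # Wright-Fisher drift: resample 2N allele copies each generation.
    p = rng.binomial(2 * N, p) / (2 * N)

trait_mean = 2 * a * p                  # mean phenotype at a purely additive locus
va0 = 2 * p0 * (1 - p0) * a ** 2        # initial additive genetic variance

print(round(trait_mean.var(), 4))       # among-replicate variance after `gens` generations
print(round(va0 * gens / N, 4))         # Brownian-motion prediction: V_A / N per generation
```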




Statistical evidence

Although frequency data and a well-supported empirical theory can provide a basis for assigning prior probabilities, the principle of indifference cannot. (p. 27)

The first chapter is fairly well written and presents a reasonable picture of the different perspectives (Bayesian, likelihood, frequentist) used for hypothesis testing and model choice, although it misses references to the relevant literature (for instance, there is no reference to Berger and Wolpert [3] when the likelihood principle is discussed). Akaike's information criterion is promoted as the method of choice, but this is a well-established model choice tool that can be accepted at a general level. Paradoxically, I find the introduction to Bayesian principles to be overly long (as is often the case in cognitive sciences) since, as the author acknowledges from the start, 'Bayes' Theorem is a result in mathematics [that] is derivable from the axioms of probability theory' (p. 8). This is especially blatant when considering that Sober takes a very long while to introduce prior densities on parameter spaces, a reluctance that is consistent with the parameter-free preferences of the book. The (standard) criticisms he addresses to the choice of those priors (which should be 'empirically well-grounded' [pp. 26 and 27], as also pointed out in the previous quote) periodically resurface throughout the book, but are far from convincing, as they mistake the role of the prior distributions as reference measures [4] for expressions of truth. The extended criticism of the foundations of Neyman–Pearson testing procedures is thorough and could benefit genuine statistician readers, as well as philosophers and biologists.

The Akaike framework makes plausible a mixed philosophy: instrumentalism for models, realism for fitted models. (p. 98)

At a general level, I have two statistical difficulties with this chapter. First, while Sober introduces the Akaike information criterion as a natural penalty for comparing models of different levels of complexity, I fear that the notions of statistical parsimony and dimension penalty are mentioned much too late in the chapter. Using a likelihood ratio for embedded models is, for instance, meaningless unless a correction for the difference in dimensions is introduced. Second, the models used in this chapter and throughout the book are singularly missing variable parameters, which makes all tests appear as comparisons of point null hypotheses. The presence of nuisance or interest parameters should be better acknowledged.
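The point about dimension corrections can be illustrated with a small sketch (simulated data and nested ordinary-least-squares models of my own devising, not an example from the book): the maximised log-likelihood can only improve as parameters are added, so comparing nested models by raw likelihood alone always favours the larger one, whereas AIC's penalty of two per parameter can prefer the smaller model.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=(n, 5))
y = 1.0 + 2.0 * x[:, 0] + rng.normal(size=n)   # only the first predictor truly matters

def loglik_and_aic(X, y):
    # Maximum-likelihood fit of a Gaussian linear model by ordinary least squares.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    sigma2 = np.mean((y - X1 @ beta) ** 2)
    k = X1.shape[1] + 1                          # regression coefficients + error variance
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    return loglik, 2 * k - 2 * loglik            # AIC = 2k - 2 log L

for p in range(1, 6):
    ll, aic = loglik_and_aic(x[:, :p], y)
    print(p, round(ll, 2), round(aic, 2))
# The log-likelihood never decreases as predictors are added;
# AIC's dimension penalty typically picks the one-predictor model here.
```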

Bayesianism is a substantive epistemology, not a truism. (p. 107)

At a specific level, it is not possible to address all the minor points with which I disagree, but I think Sober is misrepresenting the Bayesian approach to model choice and that he is missing the central role played by the Bayes factor in this approach. The fact that the Bayes factor is an automated Ockham's razor with the proper penalty for differences in dimension [5] is altogether missed. In particular, Sober reproduces Templeton's error [6]. Indeed, he states that 'the simpler model cannot have the higher prior probability, a point that Popper (1959) emphasised' (p. 83). And Sober further insists that there is no reason for thinking that

is true (p. 84). (This commonsense constraint obviously does not make sense for continuous state spaces, since comparing models requires working with foreign dominating measures.) Even though the likelihood ratio is a central quantity in the chapters that follow, I am also reluctant to agree with introducing a specific category for likelihoodists (sic!), since, besides a Bayesian incorporation, a calibrated likelihood leads either to a frequentist Neyman–Pearson test or to a predictive tool, such as Akaike's (which is also frequentist, in that it is an unbiased estimator). In addition, the defence of the Akaike criterion is overdone, in particular the discussion about the unbiasedness of Akaike's information criterion (AIC), which confuses the fact that the averaged log-likelihood is an unbiased estimator of the Kullback–Leibler divergence with the issue that the AIC involves a plug-in estimator of the parameters, as shown on pages 85 and 101. The arguments for AIC versus Bayesian information criterion (BIC) are weak, from BIC being biased (correct but irrelevant) and Bayesian (incorrect), to the fact that it contradicts the above fallacious ordering of simple versus complex models. A discussion of the encompassing framework of George and Foster [7] would have been welcomed at this stage.

While the above points are due criticisms (from a statistician), the fact remains that this chapter is an exceptionally good and lucid discussion of the philosophy of testing and that it could well serve as the basis of a graduate reading seminar. I thus recommend it to all statistician readers and teachers.


The Evolution of the Future

Evolution, being an unguided process, would seem the last thing one could predict. That hasn’t stopped some evolutionists from speculating what an evolutionary future will bring to our planet and our species. Carl Zimmer, who blogs for Discover Magazine, is one such speculator. He also wrote the final essay in the Origins series celebrating the Darwin Bicentennial for Science magazine,1 which he entitled, Darwinesquely, “On the Origin of Tomorrow.” He made this article publicly available at CarlZimmer.com.

Darwin recognized that as long as the ingredients for the evolutionary process still exist, life has the potential to change. He didn’t believe it was possible to forecast evolution’s course, but he did expect humans would have a big effect. In his day, they had already demonstrated their power with the triumphs of domestication, such as breeding dogs from wolves. Darwin recognized that we humans can also wipe out entire species. He knew the dodo’s fate, and in 1874 he signed a petition to save the last surviving Aldabra giant tortoises on the Seychelles Islands in the Indian Ocean.

Three questions spring up immediately from this paragraph: (1) What possible forecasts could be made for an unforecastable course? (2) Is it “wrong” for humans to affect the course of evolution, if they are products of evolution themselves? – or, should we feel any pity for outcomes we may find distasteful, and what is pity anyway? (3) How does human evolutionary influence differ from human-designed influence? In other words, can the possible influences of humans on the future of nature and ourselves be just as validly discussed from a creationary perspective? What differentiates the Darwinian dialogue about the future from any given non-Darwinian speculation, and makes it better? If Zimmer preaches any advice to his fellow humans about how they should direct the directionless, what is the moral foundation for it? Let’s see if Zimmer addresses these questions.
Zimmer first recognizes that evolution is unpredictable. The article quotes Yogi Berra, “Prediction is very difficult. Especially about the future.” No qualms so far. Just pack up and go home, then? Not yet –

Yet evolutionary biologists also feel a new sense of urgency about understanding what lies ahead. Since Darwin’s day, humans have gained an unprecedented influence over our own evolution. At the same time, our actions, be it causing climate change, modifying the genomes of other organisms, or introducing invasive species, are creating new sources of natural selection on the flora and fauna around us. “The decisions we and our children make are going to have much more influence over the shape of evolution in the foreseeable future than physical events,” says Andrew Knoll, a paleontologist at Harvard University.

So far, this is just an observation: humans influence change in plants and animals. He has not made any value judgments. He did say that evolutionary biologists feel a new sense of urgency to understand what lies ahead. But then, can one understand something that is unpredictable? It would seem foolish to rush to understand what has one characteristic: unpredictability. That would be like rushing off in all directions. And is it possible to influence the “shape of evolution” when evolution by nature is shapeless? If humans were to shape it with intelligent design, would it still be evolution? We seem lost in conundrums so far.
“Evolution is unstoppable.” That’s Lawrence Moran (U of Toronto) speaking. Zimmer explains, “All it means is that the human genome will continue to change from generation to generation.” This is equivalent to “Stuff happens.” It would seem even Zulus and Twitterers know that. Has evolutionary biology improved on this obvious truism? Even creationist John Sanford believes the human genome is changing – he thinks it’s degenerating (see Uncommon Descent).
Zimmer next delves into mutations, natural selection, antibiotic resistance and other cards from the Darwin deck. “Natural selection has not stopped,” he announced. Most young-earth creationists would say, “Amen.” Anyone looking at the palette of skin colors and faces in humans would say, “What else is new?” We’re all still interfertile people. Can Zimmer put some science behind his predictions?

Stearns and his colleagues now know which traits are selected in the women of Framingham, but they have yet to determine exactly what advantage each trait confers—a situation that evolutionary biologists often face when documenting natural selection. Nevertheless, based on the strength of the natural selection they have measured, the scientists predict that after 10 generations, the women of Framingham will give birth, on average, a few months younger than today, have 3.6% lower cholesterol, and will be 1.3% shorter.

Impressive as that may sound, there are already populations of humans with varying ages of puberty, height, and cholesterol. Yet they are still all interfertile people. Unless these variations are yielding some new species, Homo novo, it seems Darwin has little to celebrate. Zimmer hedged his prediction: “Of course, even this prediction is subject to change,” he admitted. A prediction subject to change is no prediction at all. If he predicts that “Stuff will happen,” and some other stuff happens, it’s all just stuff.
Next, Zimmer discussed intelligently designed evolution – genetic engineering: “eventually scientists will be able to alter the genes of future generations.” Unfortunately, he said, it has little to do with Darwin: “But even if a child was born with engineered genes in our lifetime, that milestone wouldn’t mean much for the evolution of our species.” “Those engineered genes would be swamped by the billions of mutations that emerge naturally in the babies born every year.” The reader is still wondering why this is an essay on Darwin. That is, unless Darwin can improve on the Stuff Happens Law (SHL), which can be taken as a null hypothesis, this is an essay with no direction, no natural law, no predictability, no understanding, and no counsel; it’s a point, not a vector – a particle wobbling under Brownian motion.
Ah, but humans are altering the genetics of crops and microbes, he points out. And we might even create organisms from scratch, like Craig Venter is attempting to do. Sadly, that doesn’t give Darwin anything to crow about, either. “If Venter succeeds, his artificial [sic] would be a triumph of human ingenuity, but it would probably be a minor blip on the biosphere’s radar.” (Note: It can be safely assumed Zimmer was not intending to write an essay on intelligent design.)
With his prediction score still at zero (i.e., indistinguishable from “stuff will happen”), one wonders where Zimmer will turn next. He points out other changes attributable to humans – species altered by fishing, hunting, smokestacks and chainsaws (influences creationists would acknowledge). He points out the impact of invasive species (nothing distinctively Darwinian about that, either; Malthusian, maybe, but not Darwinian – the origin of species).
Maybe Stephen Palumbi (Stanford) can help: “In the last century, we were having a big impact, but it wasn’t everywhere,” Palumbi said. “But global climate change is an ‘everywhere’ impact, and that’s different.” Yet even if global climate change is accepted as a human impact, what happens is the SHL, not a law of science with any predictive power or moral imperative.
Zimmer discusses how species are shifting due to global warming. Palumbi steps in again: “We know that things can evolve quickly, but can they evolve fast enough?” The perceptive reader asks, “fast enough for what? For this stuff to happen instead of that stuff?” Zimmer adds, “Unless we can ease up on the biosphere, they warn that the biggest feature of evolution in the near future will be extinctions.” Notice that he did not say humans should ease up on the biosphere, but it seems implied. This hints at some angst in his soul: some desire for species to be preserved – even if they have to evolve into some other species – so that life can continue a little while longer before the sun bloats and fries our planet, and before the universe ultimately chills it out of existence:

A drop in biodiversity may bring with it a collapse of many ecosystems. Coupled with a rapid increase in global temperatures, ocean acidification, and other changes, we may be pushing the environment into a state we’ve never experienced as a civilization. Such a stress could put our species under intense natural selection as well.

Stuff happens, for sure. Stated dispassionately as an observation, this paragraph doesn’t make any value judgments. Indeed, for Zimmer to be consistent, he must stand behind the one-way mirror, clipboard in hand, taking notes. And so he does, continuing onward to the inevitable:

One way or another, life will survive this current crisis. But where is life headed in the very distant future? To find out, planetary scientist King-Fai Li of the California Institute of Technology in Pasadena and his colleagues built a model of Earth and the sun and watched it evolve for billions of years. In their simulation, the sun gets brighter, as it has since it first formed. The extra energy speeds up the rate at which carbon dioxide is drawn out of Earth’s atmosphere, cooling it off. But after about 2 billion years, this cooling mechanism breaks down, and Earth heats up, ending up like its lifeless neighbor, Venus.
But Li’s model does not include a clever species like our own, which can use its brain to influence the planet. Would it be possible to extend the life span of Earth’s biosphere? “I am not going to rule out any talented civilizations that will be able to do that,” says Li.

Surprise! It was an essay on intelligent design after all!

1. Carl Zimmer, “On the Origin of Tomorrow,” Science, 4 December 2009, Vol. 326, No. 5958, pp. 1334-1336; DOI: 10.1126/science.326.5958.1334.


Acknowledgements

C.B. and G.J. were funded by the German Research Foundation (DFG FOR 2237 project ‘Words, Bones, Genes, Tools: Tracking Linguistic, Cultural, and Biological Trajectories of the Human Past’) and the ERC Advanced Grant 324246 EVOLAEMP. D.D. was funded by The Netherlands Organisation for Scientific Research VIDI grant 276-70-022 and the European Institutes for Advanced Study Fellowship Program. The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript.




Introduction

Understanding the origin and adaptive significance of fruit colour has been a lively source of debate for over a century1,2,3. While less varied than flower colour globally, fruit colour diversity is nonetheless extensive, spanning and surpassing the human capacity to detect it4. Fruit colour diversity has been attributed to phylogenetic constraints, environmental constraints, and protection from antagonists1,4,5,6. Yet the oldest, best documented, and most contentious hypothesis for why fruit colour is so diverse centres on its role in attracting seed dispersing mutualists7. The disperser hypothesis posits that the colour of fleshy fruits evolved to maximise visual detection by specific animal mutualists to facilitate seed dispersal8,9.

Dispersers differ markedly in their visual capacities: birds possess tetrachromatic colour vision10. Most mammals are dichromatic, and primates – major seed dispersers in tropical systems – are either dichromats, trichromats or polymorphic (i.e. individuals are either di- or trichromats)11. Moreover, frugivores differ in their activity patterns12 and in their tendency to rely on non-visual fruit signals and cues13,14,15,16. Thus, the disperser hypothesis also predicts that fruits of plant species that rely on dispersal by different frugivores are subject to selective pressures that differ in both their magnitude and direction17,18, and would result in different fruit colours.

Despite the breadth of research regarding fruit colour as an adaptation to attracting mutualists, the theory remains highly contentious, and evidence for it is mixed17,19,20,21. The ongoing disagreement regarding the adaptive significance of fruit colour diversity may partly stem from the fact that many studies of fruit colour have relied on subjective, human assessments of fruit colour, which means that species are assigned to categories like red or yellow3. Efforts to assess forces and constraints shaping fruit colour variation that rely on subjective human categories thus de facto underestimate the diversity of fruit colour and, further, are likely to miscategorise fruit colours. For example, a fruit categorised as “black” may in fact be reflecting strongly in the ultraviolet (UV) – a range of reflectance that is visually salient to many birds, but invisible to humans22.

In addition to the methodological limitations of many studies that aim to understand fruit colour diversity, the number of potential variables affecting fruit colour further impedes efforts to understand its origin and significance. Fruit colour is likely driven by multiple variables including environmental, physiological, and phylogenetic constraints, in addition to the potential selection for maximizing detectability to dispersers23. More specifically, it has been proposed that various factors that affect leaf colour, such as latitude, temperature, and soil properties, may also affect fruit colour6. Thus, even when fruit colour is quantified, the potential importance of multiple predictive variables requires an approach that includes the effects of each variable in light of the effects of all relevant variables, including phylogeny, and the potential role of abiotic factors.

Here, we quantify fruit and leaf colour using spectrometric measurements, and apply a comparative approach to examine which factors drive fruit colour variation. We test three hypotheses regarding the source of fruit colour variation, using fruit colour spectra from three tropical systems: (1) Fruit colour is driven by phylogeny. (2) Fruit colour is in fact “plant colour” and is driven by constraints or adaptive responses to abiotic factors; if fruit colour is primarily a response to such factors, fruit reflectance should resemble leaf reflectance. (3) Fruit colour differs between dispersal syndromes: if fruit colour is under selection to maximise detectability to seed dispersers, plants that rely on frugivores with different visual phenotypes (mammals, birds) or tendency to rely on visual cues will produce fruits which are, on average, differently coloured. Using reflectance samples from ripe fruits and leaves of 97 plant species (Fig. 1), we fit Phylogenetic Generalised Least Squares (PGLS) models to test the effects of phylogeny, dispersal mode (mammal, bird, and mixed), and leaf colour on fruit colour, summarised in four variables corresponding to relative reflectance in four colour bands: UV (300–400 nm), blue (400–500 nm), green (500–600 nm), and red (600–700 nm). Crucially, since these three hypotheses are not mutually exclusive, our models include all three to control for their effects and thus identify the independent effect of each factor alone.

Figure 1. Mean fruit and leaf colour reflectance between 300 and 700 nm for (a) Kibale, (b) Ankarafantsika, and (c) Ranomafana National Parks. Reflectance values were summarised into 2 nm bins and the sum of all values per species is 1.
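As a rough illustration of the modelling approach described above (not the authors' code or data; the tiny tree, disperser coding, and trait values below are invented), a PGLS fit amounts to generalised least squares in which the error covariance between species is proportional to their shared branch length under Brownian motion:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical ultrametric tree of depth 1 for four species: the expected
# Brownian-motion covariance between two species is proportional to the
# time from the root to their most recent common ancestor.
shared_time = np.array([
    [1.0, 0.6, 0.2, 0.2],
    [0.6, 1.0, 0.2, 0.2],
    [0.2, 0.2, 1.0, 0.7],
    [0.2, 0.2, 0.7, 1.0],
])

rng = np.random.default_rng(4)
disperser = np.array([0.0, 0.0, 1.0, 1.0])   # toy coding, e.g. 0 = bird, 1 = mammal
# Simulated relative UV reflectance with a small disperser effect plus phylogenetic noise.
fruit_uv = rng.multivariate_normal(0.10 + 0.05 * disperser, 0.02 * shared_time)

X = sm.add_constant(disperser)
pgls = sm.GLS(fruit_uv, X, sigma=shared_time).fit()   # GLS with Brownian covariance = PGLS
print(pgls.params)    # intercept and dispersal-mode effect, controlling for phylogeny
```

In the study itself the same idea is applied to 97 species with an empirical phylogeny and additional predictors such as leaf colour.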



A STATISTICAL TEST OF UNBIASED EVOLUTION OF BODY SIZE IN BIRDS

1 Department of Biology, University of Oulu, P.O. Box 3000, FIN-90014 Oulu, Finland


Of the approximately 9500 bird species, the vast majority is small-bodied. That is a general feature of evolutionary lineages, also observed for instance in mammals and plants. The avian interspecific body size distribution is right-skewed even on a logarithmic scale. That has previously been interpreted as evidence that body size evolution has been biased. However, a procedure to test for unbiased evolution from the shape of body size distributions was lacking. In the present paper unbiased body size evolution is defined precisely, and a statistical test is developed based on Monte Carlo simulation of unbiased evolution. Application of the test to birds suggests that it is highly unlikely that avian body size evolution has been unbiased as defined. Several possible explanations for this result are discussed. A plausible explanation is that the general model of unbiased evolution assumes that population size and generation time do not affect the evolutionary variability of body size; that is, that micro- and macroevolution are decoupled, which theory suggests is not likely to be the case.
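The abstract does not spell out the test, but the general Monte Carlo logic can be sketched as follows (a deliberately simplified stand-in: independent lineages taking symmetric steps on log body size, ignoring the shared phylogeny and speciation dynamics that the actual simulations would need; the "observed" skew is a placeholder value):

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(5)

def simulate_unbiased_clade(n_species=9500, sigma=0.1, n_steps=100):
    # Unbiased evolution: every lineage takes symmetric random steps on log
    # body size, so no size is intrinsically favoured.
    return rng.normal(0.0, sigma, size=(n_steps, n_species)).sum(axis=0)

null_skews = np.array([skew(simulate_unbiased_clade()) for _ in range(100)])

observed_skew = 0.5   # placeholder for the skew of the real log body-size distribution
p_value = np.mean(np.abs(null_skews) >= abs(observed_skew))
print(p_value)        # how often unbiased evolution produces skew this extreme
```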

Folmer Bokma, "A STATISTICAL TEST OF UNBIASED EVOLUTION OF BODY SIZE IN BIRDS," Evolution 56(12), 2499-2504 (1 December 2002). https://doi.org/10.1554/0014-3820(2002)056[2499:ASTOUE]2.0.CO;2

Received: 24 May 2002 Accepted: 30 August 2002 Published: 1 December 2002



On Behalf Of Naïveté

The phenomena we see reflect ourselves. When we report what we see, our reports tell much about ourselves - they may even tell more about us than about the phenomena we claim to be observing (Starbuck and Milliken, 1988).

In research, the phenomena we see reflect our analytic procedures. When we report research findings, our reports tell much about our analytic procedures. Our reports may tell more about our analytic procedures than about the phenomena we claim to be analyzing (Starbuck, 1981; Webster and Starbuck, 1988).

This chapter advocates changes in our theorizing and our testing of theories. These changes would help us to formulate more meaningful theories and to evaluate theories more rigorously. Although the issues and prescriptions apply generally, the discussion emphasizes time series because these are central in studies of evolutionary dynamics. A time series is a sequence of observations collected over time - for example, annual counts of steel mills over 30 years.

This first section explains why time series so readily support multiple interpretations, including spurious or deceptive inferences. This ambiguity implies that we should use tough criteria to test theories about series. This section also points out that sustaining a null hypothesis is often more useful than rejecting one, but journals favor studies that do the opposite, and they encourage scientists to lie. The subsequent section reviews six reasons organizational scientists should pay serious attention to null or naïve hypotheses that describe behaviors as having large random components. We are trying too hard to invent and show the superiority of causal theories, often complex ones, and we too quickly reject simple hypotheses that are more parsimonious. The third section points out how often social scientists test their theories against null hypotheses that they can always reject. Statistical tests would have more import if scientists would test their theories against null models or against naïve hypotheses.

Ambiguous Reflections of Time

"While the past entertains, ennobles, and expands quite readily, it enlightens only with delicate coaxing." (Fischhoff, 1982: 335)

Those who analyze time series are especially likely to see what their methods dictate. Most time series have high autocorrelations, and autocorrelations foster spurious relations between series.

Ames and Reiter (1961) studied autocorrelations in actual socioeconomic series. They plucked one hundred series at random from Historical Statistics for the United States. Each series spanned the 25 years from 1929 to 1953.

Five sixths of the series had autocorrelations above .8 for a one-year lag, and the mean autocorrelation was .837 for a one-year lag. Even after Ames and Reiter removed linear trends from the series, the mean autocorrelation was .675 for a one-year lag.

Social scientists often calculate correlations between series - for example, the correlation between the number of operating steel mills and Gross National Product. But, high autocorrelations produce high correlations between series that have nothing to do with causal links between those series.

This explains why social scientists find it easy to discover high correlations between series even when series have no direct causal relations (Lovell, 1983; Peach and Webb, 1983). Ames and Reiter correlated randomly selected series. They found a mean (absolute value) correlation of .571 between two series. For 46 percent of the pairs, there existed a time lag of zero to five years that made the two series correlate at least .7.

Ames and Reiter simulated the widespread practice of searching for highly correlated pairs of series. They picked a target series at random, and then compared this series with other series that they also picked randomly. On average, they needed only three trials to draw a second series that appeared to "explain" at least half the variance in a target series. Even after they corrected series for linear trends, they needed only five trials on average to draw a series that seemed to "explain" at least half the variance in a target series.
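A small simulation in the spirit of Ames and Reiter's exercise (using simple random walks as stand-ins for their highly autocorrelated socioeconomic series, with 25 "annual" observations each) shows how easily causally unrelated series appear related:

```python
import numpy as np

rng = np.random.default_rng(6)
n, reps = 25, 5000

corrs = []
for _ in range(reps):
    # Two causally unrelated series, each highly autocorrelated (a random walk).
    a = np.cumsum(rng.normal(size=n))
    b = np.cumsum(rng.normal(size=n))
    corrs.append(abs(np.corrcoef(a, b)[0, 1]))

corrs = np.array(corrs)
print(round(corrs.mean(), 2))          # mean absolute correlation despite no causal link
print(round((corrs > 0.7).mean(), 2))  # share of pairs that look strongly related
```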

Ecologists often decompose series conceptually into repeated trials and then try to draw inferences about the processes that generate these series. However, series provide very treacherous grounds for such inferences.

A series that depends causally on its own past values amplifies and perpetuates random disturbances. The process that generates a self-dependent series does not forget random disturbances instantly. Instead, it reacts to random disturbances when generating future outcomes. Replications of such a process produce very different series that diverge erratically from their expected values. An implication is that observed series provide poor evidence about the processes that generated them. A single series or a few replications are very likely to suggest incorrect inferences (Pant and Starbuck, 1990). Gould (1989, 1991) has repeatedly argued that biologists have drawn erroneous inferences about evolution on the basis of improbable, non-recurring, non-replicatable events.

Wold (1965) used computer simulations to show how far self-dependent series diverge from their expected values. He assumed three very simple models, generated series, and then tried to infer which model had generated the series. When he looked at a single instance of a series, inference was hopeless. He could, however, make fairly accurate estimates of central tendencies when he made 200 replications with 100 events in each series - 20,000 observations.

Wold used very simple one-variable equations to generate series. Such simple processes are uncommon in socioeconomic analyses. To simulate the kinds of statistical inferences that social scientists usually try to draw from longitudinal data, I have extended Wold's work by generating autocorrelated series with the properties noted by Ames and Reiter: Each series included a linear time trend and was second-order autocorrelated with an autocorrelation coefficient above .6. Each series included 25 events, a length typical of published studies. Each analysis involved three series: Series Y depended causally on series X but series W was causally independent of both X and Y.

Using accepted procedures for time-series, I then estimated the coefficients of an equation that erroneously hypothesized that Y depended upon both X and W.

The coefficient estimates nearly always showed a statistically significant correlation between Y and W - a reminder of the often ignored injunction that correlation does not equal causation.

The modal coefficient estimates were reasonably accurate; most of the errors fell between 10 percent and 50 percent. But estimates of absolutely small coefficients had errors as high as 4,000,000 percent.

Because Wold had shown that replications led to better estimates of central tendencies, I expected replications to allow better estimates of the coefficients. I wanted to find out how many replications one might need to distinguish causal dependence from independence with a misspecified model. To my surprise, replications proved harmful almost as often as helpful: Nearly forty percent of the time, the very first series analyzed produced more-accurate-than-average coefficient estimates, and so replications made the average errors worse. Thus, replication often fostered confidence in the coefficient estimates without increasing the estimates' average accuracy.
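The flavour of that exercise can be reproduced with a much cruder sketch (random walks standing in for the trending, second-order autocorrelated series described above, and arbitrary coefficient values): when Y is regressed on both the relevant X and the irrelevant W, the conventional t-test on W rejects far more often than its nominal five percent level.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n, reps = 25, 2000

def self_dependent_series(n):
    # A highly autocorrelated series: each value is the last value plus a random increment.
    return np.cumsum(rng.normal(size=n))

false_positives = 0
for _ in range(reps):
    x = self_dependent_series(n)
    w = self_dependent_series(n)              # causally independent of X and of Y
    y = 2.0 * x + self_dependent_series(n)    # Y depends only on X, plus persistent noise
    fit = sm.OLS(y, sm.add_constant(np.column_stack([x, w]))).fit()
    false_positives += fit.pvalues[2] < 0.05  # "significant" coefficient on the irrelevant W

print(false_positives / reps)   # well above the nominal 5 percent rate
```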

These challenges imply that analysts of series should consider alternative theories and draw conservative inferences. Carroll and Delacroix set an excellent example in this respect when they analyzed newspapers' mortality. They considered several alternative explanations for the observed death rates - ecological, economic, political, and idiosyncratic. Then they (1982: 191) warned readers: "On the one hand, our analysis clearly demonstrates that organizational mortality rates vary across a wide range of environmental dimensions, including industry age, economic development, and political turbulence. On the other hand, as in many historical explorations, data often did not allow us to choose among alternative explanations of these findings."

Nearly all socioeconomic series look like artificial series that have three properties (Pant and Starbuck, 1990): First, each new value of a series is a weighted average of the series' previous value and an increment. Second, some series give past values little weight, but most series give past values much weight. Third, the increments are utterly random. Because autocorrelation makes it easy to discover high correlations between such series, it is especially important to use tough criteria for concluding that relations exist.

Then, should not every analysis take as a benchmark the hypothesis that observed events arise primarily from inertia and random perturbation? Should not scientists discard hypotheses that fit the data no better than this naïve one?

In this regard, Administrative Science Quarterly deserves praise for publishing an article in which Levinthal (1991) showed that a type of random walk can generate data in which new organizations have higher failure rates than old ones. Such a random walk does not explain all aspects of organizational survival, but it makes a parsimonious benchmark. When espousing more complex theories, organizational scientists should prove them superior to a naïve model such as Levinthal's.
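Here is a toy version of that kind of benchmark (my own sketch, not Levinthal's model): each organization's "capital" starts at an arbitrary level, follows a driftless random walk, and the organization fails when capital hits zero. Declining failure rates with age emerge with no learning or adaptation at all.

```python
import numpy as np

rng = np.random.default_rng(8)
n_orgs, horizon, start = 20000, 100, 5.0

capital = np.full(n_orgs, start)
alive = np.ones(n_orgs, dtype=bool)
age_at_death = np.full(n_orgs, horizon + 1)    # horizon + 1 means "still alive at the end"

for t in range(1, horizon + 1):
    capital[alive] += rng.normal(0.0, 1.0, alive.sum())   # pure random walk, no adaptation
    died = alive & (capital <= 0.0)
    age_at_death[died] = t
    alive &= ~died

def failure_rate(lo, hi):
    at_risk = (age_at_death > lo).sum()
    deaths = ((age_at_death > lo) & (age_at_death <= hi)).sum()
    return round(deaths / at_risk, 3)

print(failure_rate(0, 10), failure_rate(40, 50), failure_rate(90, 100))
# Failure rates decline with age even though no organization learns or adapts.
```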

Warped Reflections in Print

Francis Bacon, Platt (1964), and Popper (1959) have argued persuasively that science makes progress mainly by showing some hypotheses to be incorrect, not by showing that other hypotheses might be correct. Observations may be consistent with many, many hypotheses, only a few of which have been stated. Showing a specific hypothesis to be consistent with the observations only indicates that it is one of the many plausible hypotheses. This should do little to reduce ambiguity, but it is likely to create a premature belief that the tested hypothesis is the best hypothesis.

More dependable progress comes from eliminating poor hypotheses than from sustaining plausible ones. For scientists, it is better to rule out what is not true than to find support for what might be true.

Translated to the domain of conventional statistical tests, this reasoning implies that rejecting a null hypothesis is a weak contribution. Indeed, rejecting a null hypothesis is often trivial, especially in studies with many observations (Webster and Starbuck, 1988). Further, in the social and economic sciences, theories are often so vague and open to so many interpretations that it may be impossible to identify implications of a rejected null hypothesis. A stronger contribution comes from failing to reject a null hypothesis insofar as this rules out some hypotheses.

Of course, in the social and economic sciences, journals show bias in the opposite direction. Journals regularly refuse to print studies that fail to reject null hypotheses, and there is reason to believe many published articles reject null hypotheses that are true (Blaug, 1980; Greenwald, 1975). The only effective way to expose such errors is by failing to replicate prior findings, but journals also decline to publish replications.

Even worse, editors and reviewers regularly urge authors to misrepresent their actual research processes by inventing "hypotheses" after-the-fact, and to portray these "hypotheses" falsely as having been invented beforehand. There is, of course, nothing wrong with inventing hypotheses a posteriori. There would be no point in conducting research if every scientist could formulate all possible true hypotheses a priori. What is wrong is the falsehood. For others to evaluate their work properly, scientists must speak honestly.

It is as if journals were striving to impede scientific progress.

I know a man who has made two studies that failed to reject null hypotheses. In both studies, he devoted much effort to formulating a priori theories about the phenomena. In the second case, this man revised his two a priori theories to accommodate critiques by many colleagues, so the stated hypotheses had rather general endorsement. In both studies, he felt strong commitments to his a priori theories, he tried very hard to confirm them, and he ended up rejecting them only after months of reanalysis.

In the first case, he also made reanalyses to "test" a posteriori hypotheses that journal reviewers proposed: The reviewers advised him to portray these hypotheses falsely as having been formulated a priori and they told him to portray the null hypothesis falsely as an alternative a priori hypothesis.

Because his first study had met such resistance, he did not even attempt to describe the second study forthrightly as a test of two alternative theories against a null hypothesis: Instead, convinced that journals do not want honest reports, he wrote his report as if he had entertained three alternative theories from the outset.

The man has so far not succeeded in publishing either study, although one prominent journal asked for three sets of revisions before finally rejecting the manuscript. Reviewers have complained that the studies failed to reject the null hypotheses, not because the alternative hypotheses are wrong, but because the basic data are too noisy, because the researcher used poor measures, or because the stated hypotheses poorly represent valid general theories.

These complaints are not credible, however. In the first case, the researcher reprocessed the raw data several times, both to improve the accuracy of measures and to meet the objections of reviewers. He also tested, but did not confirm, hypotheses that the reviewers themselves had proposed. In the second case, before gathering data, the researcher had sought extensive input from colleagues so as to make his hypotheses as defensible as possible. Thus, the reviewers seem to be giving methodological reasons for rejecting manuscripts that contradict their substantive beliefs (Mahoney, 1977).

In both studies, after-the-fact reflection suggests that the null hypotheses make very significant statements about the phenomena. That is, after one accepts (albeit reluctantly) the idea that the null hypotheses describe the phenomena well, one sees the phenomena quite differently than past research has done, and one sees opportunities for innovative research in the future. Thus, the reviewers have rejected innovative works having profound implications.

There are many reasons to expect organizations' behaviors to appear somewhat random. Hannan and Freeman (1984: 150) remarked that organizational changes may be "random with respect to future value." Changes may also look random with respect to their nature. This section surveys reasons for this apparent randomness, and thus, reasons why null or naïve hypotheses often fit data well.

The Red Queen's Hypothesis

"Ultimately, evolutionary success for each competitor comes from acquiring tricks, skills, and strategies to evolve faster and more effectively than the competition." (Campbell, 1985:139)

An organization has advantages only in comparison to other organizations. Communication and imitation destroy advantages. When an innovative property spreads throughout a population of organizations, none of the individual organizations has gained an advantage over the others, and no organization has a higher probability of survival. In fact, making organizations more alike would likely lower their survival probabilities by intensifying competition. Similarly, competitors' responses to innovation destroy advantages. When one type of organization adopts an innovative property, competing types adapt to this property so as to neutralize its advantages.

Van Valen (1973) labeled this aspect of biological evolution the Red Queen's Hypothesis. In Lewis Carroll's Alice Through the Looking Glass, the Red Queen tells Alice that she should not expect to have gone anywhere even though she was running just as fast as she could. In Looking Glass Land, said the Red Queen, "you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!"

The analogy of Looking Glass Land applies even more aptly to organizations than to biology: The more visible a property's advantages, the more organizations that can perceive these advantages and the stronger their motivations to adopt similar properties. For instance, Mansfield (1963) found that more profitable innovations get adopted more quickly by more firms. Therefore, the clearer the advantages of an innovative property, the more rapidly it will attract imitators and the more rapidly it will cease offering advantages. The properties most likely to confer persistent advantages are those having highly debatable advantages. When choosing properties to imitate, organizations face tradeoffs of risk versus expected return that resemble the tradeoffs with financial investments.

Organizations may react to the Red Queen's Hypothesis either by imitating other organizations or by trying to innovate. Both reactions make the behavior and performance differences among organizations look more random.

As Carroll (1984: 72) observed: "Recent ecological theory . . . emphasizes the multilineal probabilistic nature of evolution. Thinking has shifted so much in this direction that, as with bioecology, evolution is no longer equated with progress, but simply with change over time." However, ecologists have been trying to achieve this reorientation by treating survival-neutral and survival-degrading changes as random errors (Carroll, 1984: 72-73). Since survival-neutral and survival-degrading changes may themselves be systematic, ecologists' current approach confounds these changes with survival-enhancing ones.

It may prove helpful to classify changes as either systematic, temporary, or random. Besides adjusting to systematic, long-term opportunities and threats, organizations adapt to temporary fads, transient jolts, and accidental disturbances. For instance, Delacroix and Swaminathan (1991) concluded that nearly all organizational changes in the California wine industry are cosmetic, pointless, or speculative.

Organizations' members and environmental evaluators may be unable to distinguish fads and fashions from significant ideas and innovations: These too compete for adherents, and they originate and spread through much the same processes. Just as clothing buyers choose different colors and styles from year to year, so may organizations try new ideas or alter properties for the sake of change, and so may the evaluators of organizations pursue fleeting opportunities or espouse new myths about organizational effectiveness (Abrahamson, 1990).

Pursuing Illusory Opportunities

From the viewpoint of a single organization, the Red Queen's Hypothesis turns opportunities into transient illusions. The organization perceives an opportunity and moves to exploit it. If only one organization were to act alone, the opportunity might be real. However, communication and imitation convert the opportunity into an illusion, and the organizations that try to exploit the opportunity end up no better off - perhaps worse off.

Two theories of firm growth have emphasized illusory opportunities. Andrews (1949) pointed out that firms may expand to obtain short-run cost savings that never become real. From a short-run perspective, managers see some costs as "fixed", meaning that they will not increase if the amount of output goes up incrementally. These fixed costs seem to create opportunities to produce somewhat more output without incurring proportional costs, so managers expect the average cost per unit to decrease as output rises. Yet over the long run, all costs do vary with output. The long-run cost per unit might stay constant or increase as output goes up. Thus, managers might endlessly expand output because they expect growth to decrease average cost, while growth is actually yielding the opposite result.

Penrose (1959: 2) similarly contrasted short-run and long-run perspectives, but she argued, "There may be advantages in moving from one position to another quite apart from the advantages of being in a different position." She (1959: 103) wrote: "The growth of firms may be consistent with the most efficient use of society's resources; the result of past growth - the size attained at any time - may have no corresponding advantages. Each successive step in its growth may be profitable to the firm and, if otherwise under-utilized resources are used, advantageous to society. But once any expansion is completed, the original justification for the expansion may fade into insignificance as new opportunities for growth develop and are acted upon."

Because organizations cannot foretell the distant future, a change that promises benefits today often proves damaging the day after tomorrow. Thus, a change today will likely stimulate still other changes to correct its unexpected results, but these may in turn produce more unexpected results (Pant and Starbuck, 1990).

Solving Unsolvable Problems

One interesting and prevalent type of change is an effort to solve an unsolvable problem. Unsolvable problems exist because societies espouse values that are mutually inconsistent. Since organizations embody societal values, they are trying perpetually to satisfy inconsistent demands. Organizational properties that uphold some values conflict with properties that uphold contrary values.

Hierarchical dominance affords an example. Western societies advocate democracy and equality, but they also advocate hierarchical control, unity, and efficiency. People in these societies expect organizations to adopt hierarchical structures, and to use these structures to coordinate their actions and to eliminate waste. But of course, hierarchical control is undemocratic and unequal. Everyone understands why subordinates do not do as their superiors dictate, and everyone also understands why organizations have to eliminate this inefficient disunity.

So, organizations try to solve the "problem" of resistance to hierarchical control - by making hierarchical control less visible or by aligning subordinates' goals with superiors' goals. In the late 1940s, the solution was for managers to manage "democratically." But after a while, most subordinates inferred that their superiors' democracy was insincere, and this solution failed. In the early 1950s, the solution was for managers to exhibit "consideration" while nevertheless controlling task activities. But after a while, most subordinates inferred that their superiors' consideration was insincere, and this solution failed. In the late 1950s, the solution was Management-By-Objectives, in which superiors and subordinates were to meet periodically and to formulate mutually agreed goals for the subordinates. But after a while, most subordinates inferred that their superiors were using these meetings to dictate goals, and this solution failed. In the 1960s, the solution was "participative management," in which workers' representatives were to participate in managerial boards that made diverse decisions about methods, investments, staffing, strategies, and so on. But after a while, most workers inferred (a) that managers were exerting great influence upon these boards' decisions and (b) that the workers' representatives were benefiting personally from their memberships in these boards, and this solution failed. In the early 1980s, the solution was "organizational culture," by which organizations were to produce unity of goals and methods. But after a while, most managers learned that general solidarity did not translate into operational goals and methods, and employees resisted homogenization, and this solution failed. In the late 1980s, the solution became "quality circles," which broadened into "total quality management." But after a while, . . .

Thus, one fad has followed another. From a short-run perspective, many organizations adopted very similar "solutions"; and from a long-run perspective, many organizations have adopted loosely similar "solutions". Although the various solutions have affected superior-subordinate relations, these effects have been negative as often as positive, and the fundamental "problem" persists. Long-run changes in the fundamental problem and in the various solutions seem to have arisen from economics, education, social structure, and technologies rather than from intraorganizational practices. So the fads' effects look weak and largely random. From a very-long-run perspective, organizations seem to have tried a series of unsuccessful practices.

Unsolvable problems also exist because organizations' overall goals encompass inconsistent subgoals: To achieve more general goals, organizations must pursue subgoals that call for conflicting actions.

An example is profit maximization. Firms try both to obtain as much revenue as possible and to keep costs as low as possible. To maximize revenues, the marketing personnel want their firms to produce customized products that are just what the customers want and to have these available whenever the customers want them. To minimize costs, the production personnel want to minimize inventories and machine downtime, so they want to deliver the same product to every customer, or at least to produce different products in optimal quantities on efficient schedules. One outcome is that the marketing personnel and the production personnel often disagree about what to do and when to do it. Conflicts are unpleasant, however, so firms attempt "conflict resolution". Conflict resolution can at best improve short-run symptoms because these conflicts are intrinsic to profit maximization.

Such intraorganizational conflicts tend to generate solutions over time that vary around long-run equilibria. Deviations from equilibrium occur as first one side and then the other scores a point. Should one side score repeatedly, pushing the joint solution well off-center, higher management has to intervene and rebalance power. Although there is a need for long-run balance, the short-run moves around that balance point may look erratic and make no sense from a long-run perspective. Thus, a random walk might describe the short-run moves well. A random walk would be even more likely to describe the moves well if these were aggregated across several firms: Even if all the firms were dealing with the same technologies and selling in the same market, the intraorganizational conflicts in different firms would generate different solutions at each time.

A random walk may also accurately and efficiently describe moves produced by the interactions of multiple forces that act independently. Such moves resemble Brownian motion - the erratic moves of a dust particle in the air, which one can see in a sunbeam. Molecules of air collide with a dust particle constantly and from all sides: During an instant, unequal numbers of molecules hit the particle from different directions, and the dust particle responds by moving in one direction or another. The dust particle's moves are not literally random. One might, in principle, explain them with a complex model that accounts for the multitude of air molecules. It is more practical, however, to describe the moves as a random walk about a slowly drifting equilibrium. Randomness serves as a concise summary of complexity.

To generate moves that generally resemble a random walk, there need not be a multitude of forces. Fewer forces have the effect of many forces if each force varies in intensity from time to time. Moves in one dimension - say, higher or lower - might look random if just a few forces act in each direction. Causal processes of this sort pervade the behavioral and social sciences.
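A tiny sketch of that idea (all numbers arbitrary): let many independent "collisions" push a particle one unit left or right each instant. The net displacement per instant is well described as random noise, and the cumulative path is well described as a random walk, even though each individual push is simple.

```python
import numpy as np

rng = np.random.default_rng(9)
n_forces, n_steps = 500, 1000

# Each instant, many independent forces push the particle one unit left or right;
# the particle moves by their net sum (rescaled so the step size stays comparable).
kicks = rng.choice([-1.0, 1.0], size=(n_steps, n_forces))
steps = kicks.sum(axis=1) / np.sqrt(n_forces)
position = np.cumsum(steps)

print(round(steps.mean(), 3), round(steps.std(), 3))   # increments behave like i.i.d. noise
print(round(position[-1], 2))                          # the path wanders like a random walk
```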

Self-dependent, inertial causal processes propagate random perturbations, and may even amplify them. A random event does not merely affect a single period; it becomes part of the foundation for future periods. The effects of random perturbations may accumulate over time until they dominate the behavior of a causal process. Thus, one random fleeting perturbation may instigate a persistent series of consequences that fabricate the appearance of a systematic pattern over a fairly long period. The more inertial a process, the longer each random perturbation can affect its actions, and Ames and Reiter found that socioeconomic series have a mean autocorrelation of .837.

Iterative processes can also produce the appearance of randomness even though no random events affect them. One property that can yield this result is nonlinear feedback. For example, computers use completely deterministic, simple calculations to generate pseudo-random numbers. These numbers are not actually random; each repetition of a calculation produces precisely the same pseudo-random numbers. Yet these impostors are what we use to create the appearance of randomness. Indeed, in nature, nonlinear deterministic processes may generate many of the phenomena that appear to be stochastic (Mandelbrot, 1983).
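A standard example of such nonlinear feedback (the particular map and starting value are arbitrary choices, not drawn from the sources cited above) is the logistic map x -> 4x(1 - x): a completely deterministic one-line rule whose output is statistically hard to tell from noise.

```python
import numpy as np

# Deterministic nonlinear feedback: iterate the logistic map x -> 4x(1 - x).
x = 0.2024
values = []
for _ in range(10000):
    x = 4.0 * x * (1.0 - x)
    values.append(x)

values = np.array(values)
lag1 = np.corrcoef(values[:-1], values[1:])[0, 1]
print(round(values.mean(), 3), round(values.std(), 3), round(lag1, 3))
# No random input anywhere, yet the sequence looks noisy: its lag-one
# autocorrelation is near zero and its values spread across the whole interval.
```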

Many organizational actions do not reflect current, identified needs or goals. Although some actions arise from problem solving, other actions arise from action generating. Indeed, action generating is probably much more prevalent than problem solving. In their action generating mode, organizations follow programs thoughtlessly. The programs may be triggered by calendars, clocks, job assignments, or unimportant information (Starbuck, 1983).

For instance, strategic planning departments gather and distribute information, make forecasts, hold meetings, set goals, and discuss alternatives whether or not they face specific strategic problems, whether or not their current strategies seem to be succeeding or failing, whether or not strategic planning is likely to prove useful to them. They probably do these things according to an annual calendar that is independent of the timing or appearance of strategic issues. Although one might expect to observe relations between the efforts devoted to strategic planning and the contexts in which it occurs, such relations in the short-run might be quite independent of such relations over the long run. That is, in the long run, firms might discard planning practices that appear useless or harmful, so practices might reflect properties of industries, markets, or technologies. Yet, such selection would be quite noisy because of the long delays between strategic actions and their results, and because of the loose connections between planning practices and strategic actions. In the short run, firms might try ideas experimentally, so their practices would be influenced by interpersonal networks, consultants, or the press. Thus, short-run changes in planning practices ought to have insignificant long-run effects and to be independent of their long-run value.

Organizations regularly make changes that look partially random. In some cases, it is short-run changes that look random; in other instances, systematic short-run patterns seem to produce random long-run changes. In some cases, performance outcomes seem to be random; in other instances, it is behaviors that look random.

Because organizations' environments react in ways that counteract any gains one organization makes at the expense of others, organizations' behaviors ought to appear random when interpreted in frameworks that emphasize competitive advantages. Because organizations cannot forecast accurately far into the future, their behaviors ought to appear random when interpreted in frameworks that emphasize long-run goals. Because problem analyses assume illusory gains and because some problems have no solutions, organizations repeatedly take actions that have no long-run results. Because actions reflect multiple independent forces, short-run actions may look erratic and inexplicable. Because causal processes feed back into themselves, random events have long-lasting effects and nonrandom processes may produce the appearance of randomness. Because some actions arise from action generators, they have no bases in immediate problems or goals. Because organizations attend to different issues, see different environments, and act at different times, aggregating across several organizations makes behaviors look more random, at least when interpreted in frameworks that emphasize short-run causes or effects.

Thus, organizational scientists should take seriously null or naïve hypotheses that describe behaviors as having large random components. Such hypotheses do not imply that organizations see their actions as random. Hypotheses that emphasize randomness may be parsimonious even when complete, deterministic descriptions would be possible. If simple hypotheses can describe behaviors rather accurately, one must then decide whether the gains from complex, causal descriptions are worthwhile.

Scientists should also take seriously the possibility that complex and seemingly causal relations occur by accident. Because so few empirical "findings" are replicated successfully, scientists need to remain aware of the high probability that observed patterns result from idiosyncratic data. Null or naïve hypotheses that describe behaviors as having large random components provide insurance against idiosyncratic data because they make weak assumptions about statistical properties such as symmetry, independence, and uncorrelated residuals.

WHY PLAY CROQUET WHEN YOU CAN'T LOSE?

Very often, social scientists "test" their theories against ritualistic null hypotheses (a) that the scientists can always reject by collecting sufficient data and (b) that the scientists would not believe no matter what data they collect and analyze. As proofs of knowledge, such statistical significance tests look ridiculous. Such tests are not merely silly, however; they are harmful, because they turn research into a noise generator that fills journals, textbooks, and courses with substantively meaningless "findings."

Scientists can be certain of rejecting any point null hypothesis that defines an infinitesimal point on a continuum. The hypothesis that two population means are exactly equal is a point hypothesis. Other examples include null hypotheses such as:

regression coefficient = 0

All two-tailed hypotheses about continuous variables are tested against point null hypotheses.

The key property of a point hypothesis is that the probability of rejecting it goes to 1 as observations accumulate. If a point hypothesis has not already been rejected, the scientist has only to make more observations (or to make more simulation runs). Thus, passing a "hypothesis test" against such a null hypothesis tells little about the alternative hypothesis, but much about either the scientist's ability to formulate meaningful statements or the scientist's perseverance and effort.
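The point can be verified directly by simulation. In the sketch below (assumed, invented numbers), the true mean departs from zero by a substantively trivial amount, yet the test statistic against the point null "mean = 0" grows without bound as observations accumulate.

```python
# Illustrative sketch: rejecting a point null is only a matter of sample size.
import math
import random

random.seed(3)
true_mean = 0.01        # a substantively negligible departure from the null

for n in [100, 10_000, 1_000_000]:
    xs = [random.gauss(true_mean, 1.0) for _ in range(n)]
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    z = mean / (sd / math.sqrt(n))
    print(f"n = {n:>9,}   z = {z:6.2f}")

# The z statistic grows roughly with the square root of n, so perseverance
# alone eventually "rejects" the null, whatever the data mean substantively.
```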

Also, point null hypotheses usually look implausible if one treats them as genuine descriptions of phenomena (Gilpin and Diamond, 1984). For instance, some contingency theorists have assumed that randomly different organizational structures are the only alternative to structures that vary with environmental uncertainty. Do these contingency theorists really believe that no other factors - such as technology - influence organizational structures nonrandomly?

Ritualistic hypothesis tests resemble the croquet game in Wonderland: The Queen of Hearts, said Alice, "is so extremely likely to win that it is hardly worthwhile finishing the game." If a theory can only win a ritualistic competition, it would be better to leave the theory unpublished.

A Proposal from Bioecology

Bioecologists have been debating whether to replace null hypotheses. Connor and Simberloff (1983, 1986) argued that interactions within ecological communities make statistical tests based upon simple null hypotheses too unrealistic. They proposed that bioecologists replace null hypotheses with "null models." Connor and Simberloff (1986: 160) offered this definition: "A null model is an attempt to generate the distribution of values for the variable of interest in the absence of a putative causal process." That is, one uses a "null model" to generate a statistical distribution, and then one asks whether the observed data have high or low probabilities according to this distribution.

For example, different islands in the Galapagos hold different numbers of species of land birds: These numbers might reflect competition between species, physical differences among the islands, or vegetation differences. Using a "null model" that ignored competition between species, Connor and Simberloff (1983) estimated the statistical distributions of the numbers of species pairs on islands in the Galapagos: Their estimates assumed that each island held the number of species observed on it and that each species inhabited as many islands as observed, but that species were otherwise distributed randomly and independently. All the observed numbers of species pairs fell within two standard deviations of the means in the distributions implied by this null model, and most observations were close to the expected values. So, Connor and Simberloff inferred that competition between species had little effect on the numbers of species of Galapagos land birds.
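The logic of such a null model is easy to sketch in code. The fragment below is not Connor and Simberloff's procedure - their randomization held both island richness and species incidence fixed, whereas this simplified version fixes only how many islands each species inhabits, and the occupancy data are invented - but it shows the general recipe: randomize the data under the null model many times, and ask where the observed statistic falls in the resulting distribution.

```python
# Illustrative sketch of a Monte Carlo "null model" (simplified constraints,
# invented data).
import itertools
import random

random.seed(4)
n_islands = 6
# For each species, the set of islands it inhabits (invented observations).
observed = {"A": {0, 1, 2}, "B": {1, 2, 3}, "C": {2, 4}, "D": {0, 5}}

def cooccurring_pairs(occupancy):
    """Count species pairs that share at least one island."""
    return sum(1 for s, t in itertools.combinations(occupancy, 2)
               if occupancy[s] & occupancy[t])

def randomized(occupancy):
    """Keep each species' island count, but place it on random islands."""
    return {s: set(random.sample(range(n_islands), len(islands)))
            for s, islands in occupancy.items()}

obs = cooccurring_pairs(observed)
null = [cooccurring_pairs(randomized(observed)) for _ in range(5000)]
mean = sum(null) / len(null)
p_high = sum(1 for v in null if v >= obs) / len(null)
print(f"observed = {obs}, null mean = {mean:.2f}, "
      f"P(null >= observed) = {p_high:.3f}")
```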

Not surprisingly, some bioecologists have voiced strong reservations about "null models" (Harvey et al., 1983). Among several points of contention, Gilpin and Diamond (1984) argued (a) that "null models" are not truly null because they make implicit assumptions and (b) that they are difficult to reject because fitting coefficients to data removes randomness. Gilpin and Diamond do have a point, in that describing such models as "null" might create false expectations. Connor and Simberloff's "null model", for instance, took as premises the observed numbers of species on each island and of islands inhabited by each species. These numbers allow for some physical and vegetation differences among the islands, and Gilpin and Diamond noted that these numbers might reflect competition between species as well. On the other hand, Connor and Simberloff (1986: 161) pointed out that scientists can choose null models that virtually guarantee their own rejection: "For this null model, and for null models in general, if one is unwilling to make assumptions to account for structure in the data that can reasonably be attributed to causal processes not under investigation, then posing and rejecting null hypotheses will be trivially easy and uninteresting."

Computers play a key role in this debate. The distributions computed by Connor and Simberloff would have required superhuman effort before 1950. One of the original reasons for using point null hypotheses was algebraic feasibility. Because statisticians had to manipulate statistical distributions algebraically, they built analytic rationales around algebraically amenable distributions. Computers, however, give scientists means to generate statistical distributions that represent more complicated assumptions. It is no longer necessary to use the distributions published in textbooks.

Rather than compare the data with two-standard-deviation confidence limits, as Connor and Simberloff did, however, it is more sensible to compute a likelihood ratio of the kind described below. Although Connor and Simberloff's approach has the advantage of looking like a traditional hypothesis test, it entails the parallel disadvantage of treating truth as a binary variable. A model is either true or false. Likelihood ratios allow one to treat truth as a continuous variable - one model may be more true than another, and yet both of the compared models may be unlikely.

An alternative and related proposal derives from forecasting. Because they regularly confront autocorrelated time series, forecasting researchers usually disdain null hypotheses and instead compare their forecasts with "naïve forecasts".

For example, a naïve person might advance either of two hypotheses about a series. One naïve hypothesis - the no-change hypothesis - says that the next value will be the same as the current value. This hypothesis makes no specific assertions about the causal processes that generate the series. It merely expresses the idea that most causal processes are inertial: What happens tomorrow will resemble what is happening today. The second naïve hypothesis - the linear-trend hypothesis - says the trend observed since yesterday will continue until tomorrow: The next value will differ from the current value by the same amount that the current value differs from the previous value. This hypothesis expresses the idea that most causal processes are inertial in trend as well as in state.

Neither of these naïve hypotheses says anything profound. Either could come from a young child who has no understanding of the causal processes that generate a series. So, one should expect more accurate predictions from a complicated forecasting technique - or from a scientific theory that supposedly does say something profound.
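Both naïve hypotheses are trivial to compute, which is part of their appeal. The sketch below (invented series) shows them applied one period ahead; a substantive theory earns credibility only if its forecast errors are clearly smaller.

```python
# Illustrative sketch: the no-change and linear-trend naive forecasts.
series = [100, 103, 101, 106, 110, 108, 115]   # invented data

def no_change(series, t):
    """Naive forecast 1: the next value equals the current value."""
    return series[t]

def linear_trend(series, t):
    """Naive forecast 2: the current trend continues for one more period."""
    return series[t] + (series[t] - series[t - 1])

for t in range(1, len(series) - 1):
    actual = series[t + 1]
    print(f"t={t}  no-change error={actual - no_change(series, t):+d}  "
          f"linear-trend error={actual - linear_trend(series, t):+d}")
```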

Comparing focal hypotheses with naïve hypotheses instead of null hypotheses gives the comparisons more credibility and more substantive significance. In this volume, Ginsberg and Baum (1993) compare their theory to the foregoing naïve hypotheses. Had they stopped after merely testing the null hypotheses that acquisition rates do not vary with diverse variables, we would know only that their theory is better than nothing. However, they also show that their theory fits the data distinctly better than naïve statements about inertia. I find this impressive.

Note that Ginsberg and Baum might have made an even more useful contribution if it had turned out that their theory was no more accurate than the naïve models. Such an outcome would have shown that a simple explanation - inertia - works very well. As it happens, of course, this is not the case. Inertia is not a powerful explanation for these data. But that is the main inference we ought to draw from the superiority of Ginsberg and Baum's theory. We should not infer that their theory is correct. Not only may their theory not be correct in detail, it may not even take account of the most important causal factors. There may be several other explanations that would be even more effective.

Also note that some naïve hypotheses, including the two above, are point hypotheses. If one uses them like null hypotheses, conventional statistical tests will inevitably disconfirm them too. So instead, one calculates the likelihood ratio (Jefferys and Berger, 1992):

Probability (Data if the focal hypotheses are true) / Probability (Data if the naïve hypothesis is true)

If the focal hypotheses work better than the naïve hypothesis, the ratio will be substantially greater than one. One must then ask whether the ratio is large enough to justify the greater complexity associated with the focal hypotheses.

Such comparisons usually have more meaning if one states the likelihood ratios on a per-trial or per-period basis. For example, one can use the nth root of

Probability (Data if the focal hypotheses are true) / Probability (Data if the naïve hypothesis is true)

where n denotes the number of time periods.
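As an illustration of both ratios (assumed Gaussian error models and invented forecasts - nothing here comes from Ginsberg and Baum's data), one might compute:

```python
# Illustrative sketch: a likelihood ratio of focal versus naive forecasts,
# and its per-period (nth-root) version.
import math

actual = [103, 101, 106, 110, 108, 115]
focal  = [102, 102, 105, 109, 109, 114]   # forecasts from some focal theory
naive  = [100, 103, 101, 106, 110, 108]   # no-change forecasts (previous value)
sigma  = 2.0                              # assumed forecast-error std. deviation

def log_likelihood(forecasts):
    """Log probability of the data if forecast errors are Gaussian."""
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (a - f) ** 2 / (2 * sigma ** 2)
               for a, f in zip(actual, forecasts))

n = len(actual)
ratio = math.exp(log_likelihood(focal) - log_likelihood(naive))
per_period = ratio ** (1.0 / n)           # the nth root described above
print(f"likelihood ratio = {ratio:.1f}, per-period ratio = {per_period:.2f}")
```

A ratio far above one favors the focal hypotheses; the per-period version makes series of different lengths roughly comparable.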

Simple Competitors May Win in the Future

Ginsberg and Baum's comparisons with naïve hypotheses make their theory look impressive because improving on naïve hypotheses is difficult. Substantive theories often turn out to be no better than naïve hypotheses.

Ginsberg and Baum do not, however, test their theory with predictions. Naïve hypotheses generally look much better when used to make genuine predictions about the future than when used to rationalize the past (Pant and Starbuck, 1990; Starbuck, 1983).

Since the 1950s, macroeconomists have invested enormous resources in trying to create complex, mathematical, statistically estimated theories that predict short-run phenomena well. The teams that developed these models included some of the world's most respected economists, and they spent hundreds of man-years. They used elegant statistical methods. They did not lack financial or computational resources, for the U.S. government has spent many millions of dollars on data gathering and research grants. Major industrial firms pay large sums for the predictions generated by these models. So, these models represent the very best in economic or social forecasting.

Elliott (1973) tested the predictive accuracies of four of the best-known macroeconomic models. Of course, all these models had been published originally with evidence of their predictive accuracies, but these demonstrations had involved postdicting the very data from which the models' coefficients had been estimated, and each model had been fitted against data from different periods. Elliott fitted all four models to data from the same period, then measured their accuracies in predicting subsequent events. Three models turned out to be no more accurate than the no-change hypothesis. The simplest of the models, which was the most accurate, was only as accurate as the linear-trend hypothesis.

The findings of Makridakis and colleagues (1979, 1982) resemble those of Elliott. They compared 24 statistical forecasting methods by forecasting 1001 series. They found that no-change hypotheses beat others 38 to 64 percent of the time. Also, no-change hypotheses were less likely to make large errors than any other method. Yet, the most accurate forecasts came from exponential smoothing, which beat every other method at least 50 percent of the time. Exponential smoothing is a version of the linear-trend hypothesis; it assumes data include random noise and it filters this noise by averaging. The averaging usually gives more weight to newer data.
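For concreteness, here is a sketch of simple exponential smoothing (one common variant; the series and the smoothing weight are invented) compared with the no-change forecast:

```python
# Illustrative sketch: simple exponential smoothing versus no-change forecasts.
series = [100, 103, 101, 106, 110, 108, 115, 113, 118]   # invented data

def smoothed_forecasts(series, alpha=0.5):
    """One-step-ahead forecasts; the level averages past data, weighting
    newer observations more heavily."""
    level, forecasts = series[0], []
    for x in series[:-1]:
        level = alpha * x + (1 - alpha) * level
        forecasts.append(level)
    return forecasts

smooth = smoothed_forecasts(series)
naive = series[:-1]                        # no-change forecasts
actual = series[1:]

mae_smooth = sum(abs(a - f) for a, f in zip(actual, smooth)) / len(actual)
mae_naive = sum(abs(a - f) for a, f in zip(actual, naive)) / len(actual)
print(f"mean absolute error: smoothing = {mae_smooth:.2f}, "
      f"no-change = {mae_naive:.2f}")
```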

Research findings may tell more about analytic procedures than about the phenomena being studied. We need to state theories more meaningfully and to evaluate theories more rigorously.

Series are central in studies of evolutionary dynamics, and we should use especially tough criteria to test theories about them. Observed series provide poor evidence about the processes that generated them, and they offer many opportunities for spurious or deceptive inferences. A self-dependent series does not forget random disturbances instantly; it reacts to random disturbances when generating future outcomes. Replications produce very different series that diverge erratically from their expected values.

Sustaining a null hypothesis is more useful than rejecting a null hypothesis. Rejecting a null hypothesis does little to reduce ambiguity, and it is often a trivial achievement. It does not prove the value of the alternative hypothesis. A stronger contribution comes from failing to reject a null hypothesis insofar as this rules out at least one ineffective alternative hypothesis. Yet, journals work in the opposite direction. They reject studies that do not reject null hypotheses, and they do not publish replications. Editors and reviewers urge authors to lie by portraying after-the-fact "hypotheses" as having been invented beforehand.

Null or naïve hypotheses often fit organizational data well because organizations make changes having insignificant long-run effects. Organizations' behaviors should appear random with respect to competitive advantages because organizations' environments try to counteract the gains one organization makes at the expense of others. Organizations' behaviors should appear random with respect to long-run goals because organizations cannot forecast far into the future. Short-run actions may look erratic and inexplicable because actions reflect multiple independent forces. Random events may have long-lasting effects and nonrandom processes may produce apparent randomness because causal processes feed back into themselves. Organizations repeatedly take actions having no long-run results because problem analyses seek illusory gains and because some problems have no solutions. Many actions have no bases in immediate problems or goals because they come from action generators. Aggregating across organizations makes behaviors look more random because organizations attend to different issues, see different environments, and act at different times.

Much too often, social scientists "test" their theories against null hypotheses that the scientists can always reject by collecting enough data. Such tests turn research into a generator of substantively meaningless "findings."

Scientists who are willing to gather sufficient data can reject any null hypothesis that specifies an infinitesimal point on a continuum. The probability of rejecting a point hypothesis goes to 1 as the observations grow numerous. Also, most point null hypotheses look implausible if one treats them as genuine descriptions of phenomena.

One response to this situation is to formulate "null models" that incorporate some simple assumptions about the data. One uses the null models to generate statistical distributions and compares the data with these distributions.

An alternative response is to compare focal hypotheses with naïve hypotheses. A naïve hypothesis represents one or two basic ideas of the sort that a naïve person might advance. Theories often turn out to be no better than naïve hypotheses - especially when both are used to predict future events. However, some naïve hypotheses are also point hypotheses. In such cases, rather than formulate the analysis as a significance test, scientists should use likelihood ratios to compare the alternative hypotheses.

Living in the Best of All Possible Worlds

We are trying too hard to show the superiority of complex causal theories, while too quickly rejecting simple null or naïve hypotheses that say behavior has large random components. We are, indeed, so eager to discern causality that we embrace a hollow statistical methodology, we spurn replication, and we refuse to publish articles that interpret events simply. Although social construction of reality is a pervasive phenomenon, it may not be a useful foundation for scientific research.

"Master Pangloss taught the metaphysico-theologico-cosmolo-nigology. He could prove to admiration that there is no effect without a cause and that, in this best of all possible worlds, the Baron's castle was the most magnificent of all castles, and My Lady the best of all possible baronesses.

"It is demonstrable, said he, that things cannot be otherwise than they are; for as all things have been created for some end, they must necessarily be created for the best end. Observe, for instance, the nose is formed for spectacles, therefore we wear spectacles. The legs are visibly designed for stockings, accordingly we wear stockings. Stones were made to be hewn, and to construct castles, therefore My Lord has a magnificent castle; for the greatest baron in the province ought to be the best lodged. Swine were intended to be eaten, therefore we eat pork all the year round; and they, who assert that everything is right, do not express themselves correctly; they should say that everything is best.

"Candide listened attentively, and believed implicitly. . . ." (Voltaire, Chapter I, Part I of Candide, or the Optimist, 1759)

This manuscript has benefited from the useful suggestions of Eric Abrahamson, Joel Baum, Jacques Delacroix, Charles Fombrun, Theresa Lant, Jim March, and John Mezias.

Abrahamson, E. (1990). Fads and Fashions in Administrative Technologies . Doctoral dissertation, New York University.

Ames, E., & Reiter, S. (1961). Distributions of correlation coefficients in economic time series. Journal of the American Statistical Association , 56: 637-656.

Andrews, P. W. S. (1949). A reconsideration of the theory of the individual business. Oxford Economic Papers, 1: 54-89.

Blaug, M. (1980). The Methodology of Economics: Or How Economists Explain . Cambridge: Cambridge University Press.

Campbell, J. H. (1985). An organizational interpretation of evolution. In D. J. Depew and B. H. Weber (Eds.), Evolution at a Crossroads: The New Biology and the New Philosophy of Science : 133-168. Cambridge, MA: MIT Press.

Carroll, G. R. (1984). Organizational ecology. Annual Review of Sociology , 10: 71-93.

Carroll, G. R., & Delacroix, J. (1982). Organizational mortality in the newspaper industries of Argentina and Ireland: An ecological approach. Administrative Science Quarterly , 27: 169-198.

Connor, E. F., & Simberloff, D. (1983). Interspecific competition and species co-occurrence patterns on islands: Null models and the evaluation of evidence. Oikos , 41: 455-465.

Connor, E. F., & Simberloff, D. (1986). Competition, scientific method, and null models in ecology. American Scientist , 74, 155-162.

Delacroix, J., & Swaminathan, A. (1991) Cosmetic, speculative, and adaptive organizational change in the wine industry: A longitudinal study. Administrative Science Quarterly , 36: 631-661.

Elliott, J. W. (1973). A direct comparison of short-run GNP forecasting models. Journal of Business , 46, 33-60.

Fischhoff, B. (1980). For those condemned to study the past: Heuristics and biases in hindsight. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment Under Uncertainty: Heuristics and Biases : 335-351. Cambridge: Cambridge University Press.

Gilpin, M. E., & Diamond, J. M. (1984). Are species co-occurrences on islands non-random, and are null hypotheses useful in community ecology? In D. R. Strong et al. (Eds.), Ecological Communities: Conceptual Issues and the Evidence: 297-315. Princeton: Princeton University Press.

Ginsberg, A., and Baum, J. A. C. (1993?). Evolutionary processes and patterns of core business change. In Evolutionary Dynamics of Organizations , J. A. C. Baum and J. V. Singh (Eds.). New York: Oxford University Press. (This volume)

Gould, S. J. (1989). Wonderful Life: The Burgess Shale and the Nature of History . New York: W. W. Norton.

Gould, S. J. (1991). Bully for Brontosaurus: Reflections on Natural History . New York: W. W. Norton.

Greenwald, A. G. (1975). Consequences of prejudice against the null hypothesis. Psychological Bulletin , 82: 1-20.

Hannan, M. T., & Freeman, J. H. (1984). Structural inertia and organizational change. American Sociological Review, 49: 149-164.

Harvey, P. H., Colwell, R. K., Silvertown, J. W., & May, R. M. (1983). Null models in ecology. Annual Review of Ecology and Systematics , 14: 189-211.

Jefferys, W. H., & Berger, J. O. (1992). Ockham's Razor and Bayesian analysis. American Scientist, 80: 64-72.

Levinthal, D. (1991). Random walks and organizational mortality. Administrative Science Quarterly , 36: 397-420.

Lovell, M. C. (1983). Data mining. Review of Economics and Statistics , 65, 1-12.

Mahoney, M. J. (1977). Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cognitive Therapy and Research , 1: 161-175.

Makridakis, S., & Hibon, M. (1979). Accuracy of forecasting: An empirical investigation. Journal of the Royal Statistical Society, Series A , 142, 97-145. (Reprinted in S. Makridakis, A. Andersen, R. Carbone, R. Fildes, M. Hibon, R. Lewandowski, J. Newton, E. Parzen, & R. Winkler, The Forecasting Accuracy of Major Time Series Methods : 35-101. Chichester: Wiley, 1984.)

Makridakis, S., Andersen, A., Carbone, R., Fildes, R., Hibon, M., Lewandowski, R., Newton, J., Parzen, E., & Winkler, R. L. (1982). The accuracy of extrapolation (time series) methods: Results of a forecasting competition. Journal of Forecasting , 1, 111-153. (Reprinted in S. Makridakis, A. Andersen, R. Carbone, R. Fildes, M. Hibon, R. Lewandowski, J. Newton, E. Parzen, & R. Winkler, The Forecasting Accuracy of Major Time Series Methods : 103-165. Chichester: Wiley, 1984.)

Mandelbrot, B. B. (1983). The Fractal Geometry of Nature. New York: W. H. Freeman.

Mansfield, E. (1963). The speed of response of firms to new techniques. Quarterly Journal of Economics , 77: 290-311.

Pant, P. N., & Starbuck, W. H. (1990). Innocents in the forest: Forecasting and research methods. Journal of Management , 16: 433-460.

Peach, J. T., & Webb, J. L. (1983). Randomly specified macroeconomic models: Some implications for model selection. Journal of Economic Issues , 17, 697-720.

Penrose, E. T. (1959). The Theory of the Growth of the Firm. New York: Wiley.

Platt, J. R. (1964). Strong inference. Science , 146: 347-353.

Popper, K. R. (1959). The Logic of Scientific Discovery . New York: Basic Books.

Starbuck, W. H. (1981). A trip to view the elephants and rattlesnakes in the garden of Aston. In Perspectives on Organization Design and Behavior , A. H. Van de Ven and W. F. Joyce (Eds.): 167-198. New York: Wiley-Interscience.

Starbuck, W. H. (1983). Organizations as action generators. American Sociological Review , 48: 91-102.

Starbuck, W. H., & Milliken, F. J. (1988). Executives' perceptual filters: What they notice and how they make sense. In D. C. Hambrick (Ed.) The Executive Effect: Concepts and Methods for Studying Top Managers : 35-65. Greenwich, CT: JAI Press.

Van Valen, L. (1973). A new evolutionary law. Evolutionary Theory , 1: 1-30.

Webster, J., & Starbuck, W. H. (1988). Theory building in industrial and organizational psychology. In C. L. Cooper, & I. Robertson (Eds.) International Review of Industrial and Organizational Psychology 1988 : 93-138. Chichester: Wiley.



Rethinking Genetic Determinism
With only 30,000 genes, what is it that makes humans human?
By Paul H. Silverman

For more than 50 years scientists have operated under a set of seemingly
incontrovertible assumptions about genes, gene expression, and the
consequences thereof. Their mantra: One gene yields one protein genes beget
messenger RNA, which in turn begets protein and most critically, the gene
is deterministic in gene expression and can therefore predict disease
propensities.

Yet during the last five years, data have revealed inadequacies in this
theory. Unsettling results from the Human Genome Project (HGP) in particular
have thrown the deficiencies into sharp relief. Some genes encode more than
one protein; others don't encode proteins at all. These findings help refine
evolutionary theory by explaining an explosion of diversity from relatively
little starting material. We therefore need to rethink our long-held
beliefs: A reevaluation of the genetic determinism doctrine, coupled with a
new systems biology mentality, could help consolidate and clarify
genome-scale data, enabling us finally to reap the rewards of the genome
sequencing projects.

UNEXPECTED RESULTS In the mid- and late 1980s, our testimony before the
congressional committees controlling HGP purse strings relied upon our old
assumptions.1 In describing the genome's potential medical value, we
elevated the status of the gene in human development and by extension, human
health. At the same time, the deterministic nature of the gene entered the
social consciousness with talk of "designer" babies and DNA police that
could detect future criminals.

Armed with DNA determinism, scientific entrepreneurs convinced venture
capitalists and the lay public to invest in multi-billion-dollar enterprises
whose aim was to identify the anticipated 100,000-plus genes in the human
genome, patent the nucleotide sequences, and then lease or sell that
information to pharmaceutical companies for use in drug discovery. Prominent
among these were two Rockville, Md.-based companies, Celera, under the
leadership of J. Craig Venter, and Human Genome Sciences, led by William
Haseltine.

But when the first draft of the human genome sequence was published in the
spring of 2001, the unexpectedly low gene count (less than 30,000) elicited
a hasty reevaluation of this business model. On a genetic level, humans, it
seems, are not all that different from flies and worms.

Or maybe they are, if we can assume that genes are not strictly
deterministic. As Venter et al. reported in their genome manuscript: "A
single gene may give rise to multiple transcripts, and thus multiple
distinct proteins with multiple functions by means of alternative splicing
and alternative transcription initiation and termination sites."2

The industry shakeup was predictable. Celera, Human Genome Sciences, and
most of the other genomic sequencing firms refocused their business plans
and downsized. Venter resigned as president of Celera, and Haseltine has
indicated his intention to do the same.

Read the rest at The Scientist
http://www.the-scientist.com/yr2004/may/research3_040524.html

Posted by
Robert Karl Stonjek.

The Scientist is not a credible magazine.

As Larry Moran has also indicated, this is complete nonsense. Venter's
resignation and the shakeout at Celera had nothing to do with the number
of genes found. As I understand it (admittedly from a distance) it had a lot
to do with the failure of Celera to find a way to make money from its
investment in sequencing of the human genome. This would be an issue whether
there turned out to be 10,000 genes in the human genome or 100,000.

If the rest of the article makes as much sense as this part, one can
safely ignore its grand pronouncements.

JF:-
As Larry Moran has also indicated, this is complete nonsense. Venter's
resignation and the shakeout at Celera had nothing to do with the number
of genes found. As I understand it (admittedly from a distance) it had a
lot
to do with the failure of Celera to find a way to make money from its
investment in sequencing of the human genome. This would be an issue
whether
there turned out to be 10,000 genes in the human genome or 100,000.
If the rest of the article makes as much sense as this part, one can
safely ignore its grand pronouncements.

JE:-
Prof. Felsenstein does know of Haldane's dilemma.
He must know that the dilemma was the direct result
of the human genome being required to be much larger
that it was and the dilemma was only "solved" when
the genome was found to be small. However nobody has
revised basic population genetics assumptions to be
able to explain how such a tiny genome could
code for all the heritable human phenotypes.
Prof. Felsenstein cannot ignore the simple logic that
without a heritable system of genetic epistasis
30-40,000 genes cannot code for all the heritable
phenotypes unless a 2nd layer of genetic code
that is not DNA/RNA based exists above the DNA/RNA
level.

John Edser
Independent Researcher
(posting from Bonn Germany)

P Box 266
Church Pt
NSW 2105
Australia

LM:-
This is total nonsense. I've been teaching about genes that don't
make mRNA or proteins for 25 years and I learned about them long before
that.

RJS:-
One gene can only yield one polypeptide,
but these polypeptides can be woven into
more than just the one protein. Many proteins
commonly constitute one selectable phenotype, e.g.
haemoglobin the red pigment found in the blood which
carries both oxygen and carbon dioxide. Thus
most protein phenotypes are coded for by more than
one gene. Since selection is only possible at
the _phenotypic_ level, mostly, groups of genes are
_dependently_ selected (selected using the same
phenotype). Thus, for the overwhelming majority
of genomic genes being "selfish" cannot pay unless
that gene's selfishness is mutual to at least all the
other genes that code for the one phenotype
being selected.

LM:-
I've also been teaching students about genes that encode more than one
protein for 25 years.

JE:-
Incorrect. One gene codes for one polypeptide.
As Dr Moran well knows, a protein is not the
same biological entity as a polypeptide.
More than one polypeptide normally constitutes
a protein and mostly, more than one protein
constitutes one selectable phenotype. Gene
centric Neo Darwinists prefer to oversimplify
this situation in their attempt to delete
epistasis which is defined as _non_ additive.
This just means all the genes that _must_
be selected _together_ because they code for the one
selectable phenotype. Apparently, these should not
be confused with separate somatic phenotypes which
are regarded as selectively independent, i.e."additive",
e.g. polygenes for human height. Thus the genes that
code for the proteins that constitute the
protein haemoglobin are _dependently_
selected at that phenotype but the proteins that form
the walls of the artery are supposed to constitute an
_independent_ somatic phenotype. Clearly this is the
case re: body phenotypes but is not the case re: gene
fitness because haemoglobin and the proteins that
form an artery wall are FITNESS DEPENDENT on each other.
A good haemoglobin protein in a blocked artery
is selected against because that artery may reduce
the _total_ number of _fertile_ forms reproduced into
_one_ population by the unfortunate Darwinian selectee
who has the faulty artery. A clean artery is
selected against by sickle cell anemia for a similar
reason. In fact ALL then proteins that form one Darwinian
selectee (one fertile form) are FITNESS DEPENDENT.
This is clearly seen when two phenotypes incorporate
the same gene that coded for just the one polypeptide.
Here, just that one gene would be independently
selected at TWO separate phenotypes as well
as being dependently selected at two phenotypes if
more than one gene was required to code for each
independent phenotype.

Since the human genome only has 30,000 or so genes
but has millions of heritable and thus selectable
phenotypes, the same genes must be dependently selected
at more than just one phenotype, i.e. genetic epistasis
must dominate entirely, human genetics. Yet, non epistasis
is still regarded as "inherited" but "not heritable" and
thus, "non selectable".

Like it or not, genomic genes form _dependent_ selective
_webs_ and not just independent selective chains. The
Neo Darwinistic model of independently selectable
genomic genes is not real. Not a single independently
selectable genomic gene has ever been documented within
nature. The Neo Darwinian model of independently
selectable genes gave birth to the model that
just random events that change gene freq. of such
genes can now be regarded as "evolution", e.g. genetic
drift. They cannot. Such changes can only be validly
regarded as temporal variation. To do so is to _grossly_
misuse an oversimplified model. A model is not a theory,
it is always, a simplification/over simplification of an
existing theory. Such a model can never be validly used
to contest and win against the theory it was simplified
from. To do so is to turn science into a Mad Hatter's Tea
Party.

JE:-
The argument is that genetic epistasis
must provide the basis for genetic fitness.
The gene centric Neo Darwinistic view that
fitness epistasis is "inherited" but "not heritable"
must be discarded. Redefining fitness epistasis as
additive simply deletes all fitness epistasis and
is yet another Mad Hatter solution because
not a single independent in fitness genomic
gene has ever been documented within nature.

LM:-
This sounds a lot like new-age doublespeak. Who is this guy?

JE:-
It is Dr Moran who employs "new-age doublespeak",
i.e. who employs post modern epistemology because
he regards just a random process as being testably
causative to something when it remains impossible
to test for such causation. At all times, random
patterns can validly be supposed to be caused by
_either_ random or non random processes. Thus
the observation of just a random pattern such as
a random genetic drift is NEVER definitive
to just a random process.

LM:-
Most experts thought there would be fewer than 50,000 genes. They turned
out to be right.

JE:-
Starting from Haldane's Dilemma and moving on,
most _required_ 100,000's of genes because genetic
epistasis was not allowed as heritable and thus
selectable, information.

If we restrict all known heritable human phenotypes
to just 1 million then each gene codes for, on average,
33.3 phenotypes. How is it possible to delete genetic
epistasis from such a situation? Clearly nature is
not just using one polypeptide within one selectable
phenotype; she is being much more efficient. If such
events remain regarded as "inherited" but "not heritable"
and thus "non selectable", how can the system work with
just 30,000 or so genes?

LM:-
No experts were surprised at this result.

JE:-
Dr Moran's comment is simply not credible.
Haldane's dilemma was only finally solved,
after the human genome project demonstrated
that gene centric calculations based on
Fisher and Haldane's assumptions: only
additive gene associations are heritable
and thus selectable (requiring an enormous
genome to be able to code for all the
heritable differences between man and chimp),
was discovered to be redundant. A tiny genome
more efficiently used, i.e. that allowed more than
one gene that codes for one polypeptide to be
used within more than just one selected phenotype is
the only possible solution if DNA/RNA is
alone considered to provide genetic code. Of course
it has been known for over 40 years that more
than just DNA/RNA systems provides a genetic code.
Sonneborn demonstrated that the cell cortex
can provide heritable information that was not
based on DNA/RNA. He rearranged
a patch of cilia in single-celled forms to be
cut out and reversed so the cell swam in circles.
This was passed on over countless generations.
Prions form a "prima facie" case for proteins coding
for protein folding information. If a 2nd layer
of heritable material exists above the DNA/RNA
level then the two systems acting _together_ would
provide a _levered_ system of inheritance, i.e. a
system where tiny changes provide many different
codes only requiring a tiny DNA/RNA base.

The blinkered view of gene centric Neo
Darwinists that bean bag population genetics
can provide all the answers is not credible. Such
models are useful but have been painfully misused.

LM:-
I think Venter and Haseltine were probably kidnapped by aliens or maybe
they've been murdered by the Masons. It's certain that there has to be
some kind of world-wide conspiracy in order to explain all these strange
happenings.

JE:-
No, just gene centric Neo Darwinistic intransigence.
Dr Moran refuses to take a non biased view and allows
non-testable theory as science to maintain his position.

LM:-
The Scientist is not a credible magazine.

JE:-
Yes, only the non testable dictates of
Dr Moran et al are allowed as credible.

John Edser
Independent Researcher

PO Box 266
Church Pt
NSW 2105
Australia

RJS:-
Rethinking Genetic Determinism
With only 30,000 genes, what is it that makes humans human? By Paul H.
Silverman
For more than 50 years scientists have operated under a set of seemingly
incontrovertible assumptions about genes, gene expression, and the
consequences thereof. Their mantra: One gene yields one protein; genes
beget messenger RNA, which in turn begets protein; and, most critically,
the gene is deterministic in gene expression and can therefore predict
disease propensities.

LM:-
This is total nonsense. I've been teaching about genes that don't make
mRNA or proteins for 25 years and I learned about them long before that.

RJS:-
One gene can only yield one polypeptide, but these polypeptides can be
woven into more than just the one protein. Many proteins commonly
constitute one selectable phenotype, e.g. haemoglobin the red pigment
found in the blood which carries both oxygen and carbon dioxide. Thus
most protein phenotypes are coded for by more than one gene. Since
selection is only possible at the _phenotypic_ level, mostly, groups of
genes are _dependently_ selected (selected using the same phenotype).
Thus, for the overwhelming majority of genomic genes being "selfish"
cannot pay unless that gene's selfishness is mutual to at least all the
other genes that code for the one phenotype being selected.

RJK:-
Yet during the last five years, data have revealed inadequacies in this
theory. Unsettling results from the Human Genome Project (HGP) in
particular have thrown the deficiencies into sharp relief. Some genes
encode more than one protein; others don't encode proteins at all.

LM:-
I've also been teaching students about genes that encode more than one
protein for 25 years.
JE:-
Incorrect. One gene codes for one polypeptide. As Dr Moran well knows, a
protein is not the same biological entity as a polypeptide. More than
one polypeptide normally constitutes a protein and mostly, more than one
protein constitutes one selectable phenotype.

RAGLAND:
The article states, "One gene can only yield one polypeptide, but these
polypeptides can be woven into more than just the one protein." Is this
true? Is the question one gene can only yield one polypeptide framed
incorrectly in light of genetic epistasis? If genes are not the
causative factor in selecting phenotype then how can we possibly be able
to unravel the secrets of our DNA? Isn't it likely at least with some
genetic diseases the "gene-centric" model will prove itself? Perhaps
some traits are more epistatic than others. For example, traits such as
aggression and intelligence are arguably epistatic based. Certainly, it
can be argued science will come up with genetic engineering which will
eliminate certain genetic diseases before we can genetically engineer
certain aspects of intelligence and aggression.

JE:
Gene centric Neo Darwinists prefer to oversimplify this situation in
their attempt to delete epistasis which is defined as _non_ additive.
This just means all the genes that _must_ be selected _together_ because
they code for the one selectable phenotype. Apparently, these should not
be confused with separate somatic phenotypes which are regarded as
selectively independent, i.e."additive", e.g. polygenes for human
height. Thus the genes that code for the proteins that constitute the
protein haemoglobin are _dependently_
selected at that phenotype but the proteins that form the walls of the
artery are supposed to constitute an _independent_ somatic phenotype.
Clearly this is the case re: body phenotypes but is not the case re:
gene fitness because haemoglobin and the proteins that form an artery
wall are FITNESS DEPENDENT on each other. A good haemoglobin protein in
a blocked artery is selected against because that artery may reduce the
_total_ number of _fertile_ forms reproduced into _one_ population by
the unfortunate Darwinian selectee who has the faulty artery. A clean
artery is selected against by sickle cell anemia for a similar reason.
In fact ALL the proteins that form one Darwinian selectee (one fertile
form) are FITNESS DEPENDENT. This is clearly seen when two phenotypes
incorporate the same gene that coded for just the one polypeptide. Here,
just that one gene would be independently selected at TWO separate
phenotypes as well as being dependently selected at two phenotypes if
more than one gene was required to code for each independent phenotype.
Since the human genome only has 30,000 or so genes but has millions of
heritable and thus selectable phenotypes, the same genes must be
dependently selected at more than just one phenotype, i.e. genetic
epistasis must dominate human genetics entirely. Yet, non epistasis is
still regarded as "inherited" but "not heritable" and thus, "non
selectable".

RAGLAND:
You write, "Yet non-epistasis is still regarded as "inherited" but not
"heritable" and thus "non-selectable". Don't you mean epistasis is still
regarded as inherited but not heritable and thus non-selectable? What is
the difference between inherited and heritable? Isn't this splitting
hairs?

JE:
Like it or not, genomic genes form _dependent_ selective _webs_ and not
just independent selective chains. The Neo Darwinistic model of
independently selectable genomic genes is not real. Not a single
independently selectable genomic gene has ever been documented within
nature.

RAGLAND:
What about the gene for sickle cell anemia?

JE:
The Neo Darwinian model of independently selectable genes gave birth to
the model that just random events that change gene freq. of such genes
can now be regarded as "evolution", e.g. genetic drift. They cannot.
Such changes can only be validly regarded as temporal variation.

RAGLAND:
What is the difference between random genetic drift and temporal
variation?

JE:
To do so is to _grossly_ misuse an oversimplified model. A model is not
a theory, it is always, a simplification/over simplification of an
existing theory. Such a model can never be validly used to contest and
win against the theory it was simplified from. To do so is to turn
science into a Mad Hatter's Tea Party.

RKS:-
These findings help refine evolutionary theory by explaining an
explosion of diversity from relatively little starting material.

JE:-
The argument is that genetic epistasis
must provide the basis for genetic fitness. The gene centric Neo
Darwinistic view that fitness epistasis is "inherited" but "not
heritable" must be discarded. Redefining fitness epistasis as additive
simply deletes all fitness epistasis and is yet another Mad Hatter
solution because not a single independent in fitness genomic gene has
ever been documented within nature.

RAGLAND:
I think sickle cell anemia is an exception. It conferred protection
against malaria. I tend to agree with what you're saying, but the reply is
"So what"? Science is currently not advanced enough to understand
genetic epistasis and to be able to influence it. It's much like my
writing about aggression. We know hardly anything about its biological
underpinnings. Perhaps you are writing to attempt to create a shift in
attitude away from the neo-Darwinian gene-centric view, but I would argue
that will only come when enough scientific discoveries have been made to
discredit it.

RKS:-
We therefore need to rethink our long-held beliefs: A reevaluation of
the genetic determinism doctrine, coupled with a new systems biology
mentality, could help consolidate and clarify genome-scale data,
enabling us finally to reap the rewards of the genome sequencing
projects.

LM:-
This sounds a lot like new-age doublespeak. Who is this guy?

JE:-
It is Dr Moran who employs "new-age doublespeak", i.e. who employs post
modern epistemology because he regards just a random process as being
testably causative to something when it remains impossible to test for
such causation.

RAGLAND:
What are Dr. Moran's views on genetic epistasis versus neo-Darwinian
gene-centrism? Does he take sides, or does he accommodate both views? Can
both views be reconciled with each other?

JE:
At all times, random patterns can validly be supposed to be caused by
_either_ random or non random processes. Thus the observation of just a
random pattern such as a random genetic drift is NEVER definitive to
just a random process.

RAGLAND:
I think Dr. Moran understands natural selection and random drift are not
necessarily mutually exclusive processes.

snip<
RJK:-
Armed with DNA determinism, scientific entrepreneurs convinced venture
capitalists and the lay public to invest in multi-billion-dollar
enterprises whose aim was to identify the anticipated 100,000-plus genes
in the human genome, patent the nucleotide sequences, and then lease or
sell that information to pharmaceutical companies for use in drug
discovery.

LM:-
Most experts thought there would be fewer than 50,000 genes. They turned
out to be right.

JE:-
Starting from Haldane's Dilemma and moving on, most _required_ 100,000's
of genes because genetic epistasis was not allowed as heritable and thus
selectable, information.
If we restrict all known heritable human phenotypes to just 1 million
then each gene codes for, on average, 33.3 phenotypes. How is it
possible to delete genetic epistasis from such a situation? Clearly
nature is not just using one polypeptide within one selectable phenotype;
she is being much more efficient. If such events remain regarded as
"inherited" but "not heritable" and thus "non selectable", how can the
system work with just 30,000 or so genes?

RAGLAND:
Mr. Edser, have you asked Moran what his views on genetic epistasis are?
I get the impression Dr. Moran doesn't believe in genetic epistasis from
reading your response. I have not read what Dr. Moran's views are on
this, so it would be helpful for him to respond to your tar and
feathering.

RJK:-
Prominent among these were two Rockville, Md.-based companies, Celera,
under the leadership of J. Craig Venter, and Human Genome Sciences, led
by William Haseltine.
But when the first draft of the human genome sequence was published in
the spring of 2001, the unexpectedly low gene count (less than 30,000)
elicited a hasty reevaluation of this business model. On a genetic
level, humans, it seems, are not all that different from flies and
worms.

LM:-
No experts were surprised at this result.

JE:-
Dr Moran's comment is simply not credible. Haldane's dilemma was only
finally solved, after the human genome project demonstrated that gene
centric calculations based on
Fisher and Haldane's assumptions: only
additive gene associations are heritable and thus selectable (requiring
an enormous genome to be able to code for all the
heritable differences between man and chimp), was discovered to be
redundant. A tiny genome more efficiently used, i.e. that allowed more
than one gene that codes for one polypeptide to be used within more than
just one selected phenotype is the only possible solution if DNA/RNA is
alone considered to provide genetic code. Of course it has been known
for over 40 years that more than just DNA/RNA systems provides a genetic
code. Sonneborn demonstrated that the cell cortex can provide heritable
information that was not based on DNA/RNA. He rearranged
a patch of cilia in single-celled forms to be cut out and reversed so the
cell swam in circles. This was passed on over countless generations.
Prions form a "prima facie" case for proteins coding for protein folding
information. If a 2nd layer of heritable material exists above the
DNA/RNA level then the two systems acting _together_ would provide a
_levered_ system of inheritance, i.e. a system where tiny changes
provide many different codes only requiring a tiny DNA/RNA base.
The blinkered view of gene centric Neo
Darwinists that bean bag population genetics can provide all the answers
is not credible. Such models are useful but have been painfully misused.

RJS:-
Or maybe they are, if we can assume that genes are not strictly
deterministic. As Venter et al. reported in their genome manuscript: "A
single gene may give rise to multiple transcripts, and thus multiple
distinct proteins with multiple functions by means of alternative
splicing and alternative transcription initiation and termination
sites."2 The industry shakeup was predictable. Celera, Human Genome
Sciences, and most of the other genomic sequencing firms refocused their
business plans and downsized. Venter resigned as president of Celera,
and Haseltine has indicated his intention to do the same.

LM:-
I think Venter and Haseltine were probably kidnapped by aliens or maybe
they've been murdered by the Masons. It's certain that there has to be
some kind of world-wide conspiracy in order to explain all these strange
happenings.

JE:-
No, just gene centric Neo Darwinistic intransigence. Dr Moran refuses to
take a non biased view and allows non testable theory as science to
maintain his position.

RAGLAND:
Perhaps the human genome project hasn't delivered as some thought it
would. These things take time. I think there is much that we don't know.
Undoubtedly, some of what we think is correct today will have to be
revised in the future. The important thing is not to be like the flat
earth believers and when science does make new discoveries attempt to
suppress it because it doesn't fit our preconceived notions.

RJS:-
Read the rest at The Scientist
http://www.the-scientist.com/yr2004/may/research3_040524.html

LM:-
The Scientist is not a credible magazine.

JE:-
Yes, only the non testable dictates of
Dr Moran et al are allowed as credible.

RAGLAND:
Dr. Moran can come across as arrogant and belittling, but he should, if
he cares to, respond to your accusations against him.

Even in the clear-cut case this may be partly dreaming.

For one recessive gene and one dominant, my Punnett square gave 25% of
offspring with traits determined; for the other 75% it was pot-luck
(amongst all the combined possibilities).

That's interesting though because how would that be known at meiosis.

Of course it may look like that in a population, but that's like saying
because a gas looks continuous there can't be atoms (Brownian motion?).

I feel here that "must" is a little too strong an assertion. Overall, genes
would tend to group together, but that is why it is selection, not
determination.

Mendel's peas were bred pure first which is almost never the case.

Co-dominance and incomplete dominance could occur even for independently
selectable genes (and could be important unexpectedly).

So even independent selection would not produce definite results in all
cases.

Temporal variation would probably occur naturally and keep a population
in a non-equilibrium state (not random, not homogeneous, but ORDERED)
that may allow it to adapt more quickly.

Random genetic drift (if it is allowed at all, perhaps in the presence
of strong mutagens) would confer a poorer ability to adapt unless the
population was lucky a) in the degree of drift b) in the particular
effects of the drift.

Yes, to the extent that it is impossible to produce anything remotely
approaching truly random genetic drift even experimentally, never mind
theoretically.

However a model (even very simplified) that includes "crossover" etc.
can be shown to behave in quantifiably different ways to
mutate-and-select alone.

Such a model can hint (only), however, that a theory may be correct and
sometimes models are the best we can do until we have understood more
(which the model can help).

Why not put it to the test, then rethink?

Race a deterministic genome against a less deterministic one.

(it may involve making a model)

But random drift would certainly NOT map one-to-one onto selected
traits, since we don't see random characteristics.

So the question really is whether random drift is a good NULL hypothesis
even if we don't believe it. And it may well be as good as we can get.
However it is difficult to do science with.

The thrust of your response seems to be against random genetic drift as
an evolutionary mechanism. I'm not a scientist and I will leave it to
possibly others on s.b.e. to defend the evidence for random genetic drift. I
will say you appear to be out of the mainstream in that most biologists
accept not only natural selection but random genetic drift as processes
of evolution, and indeed some think random genetic drift occurs more so
than natural selection. The two processes are not necessarily exclusive,
but I am simplifying to make a point. It's not my impression random
genetic drift is a "null hypothesis". The term "null hypothesis" is
something of an oxymoron. Furthermore, your statements "random genetic
drift (if it is allowed at all, perhaps in the presence of strong
mutagens) would confer a poorer ability to adapt unless the population
was lucky a) in the degree of drift b) in the particular effects of the
drift" and "to the extent that it is impossible to produce anything
remotely approaching truly random genetic drift even experimentally,
never mind theoretically" are black and white and arrogant. Who ever
stated Darwinian evolution was perfect or even necessarily adaptable? Do
you think natural selection is the only mode of evolution?

My own unscientific view is that natural selection, although it has been
weakened in modern times, is still the predominant force in evolution.
But I do believe in the evidence for random genetic drift, and this is a
very difficult concept to teach beginners. I know I don't totally
understand it.

It's my belief that natural selection has lagged behind: it has not kept
up with the scientific and technological advances which have been made.
For example, natural selection was certainly behind selecting the trait
of aggression. In caveman days this trait was necessary for survival.
Today, we no longer face the animal predators we did in our prehistoric
past. Yet aggression is still imprinted on our DNA, but instead of
killing large game animals and other predators we prey on each other.
Furthermore, following Malthus, we overreproduce and this leads to
competition. Darwinian evolution is incompatible with the continuance of
civilization and the human race. That is why genetic engineering, as
embryonic as it is, is the only solution. Regrettably, it won't happen
in my lifetime, and I can see things steadily getting worse.

Here below is an article on random genetic drift. What specifically do
you think makes random genetic drift null?

Random Genetic Drift
Copyright © 1993-1997 by Laurence Moran
[Last Update: January 22, 1993]

The two most important mechanisms of evolution are natural selection and
genetic drift. Most people have a reasonable understanding of natural
selection, but they don't realize that drift is also important. The
anti-evolutionists, in particular, concentrate their attack on natural
selection, not realizing that there is much more to evolution. Darwin
didn't know about genetic drift; this is one of the reasons why modern
evolutionary biologists are no longer "Darwinists". (When
anti-evolutionists equate evolution with Darwinism you know that they
have not done their homework!)

Random genetic drift is a stochastic process (by definition). One aspect
of genetic drift is the random nature of transmitting alleles from one
generation to the next, given that only a fraction of all possible
zygotes become mature adults. The easiest case to visualize is the one
which involves binomial sampling error. If a pair of diploid sexually
reproducing parents (such as humans) have only a small number of
offspring, then not all of the parents' alleles will be passed on to
their progeny due to chance assortment of chromosomes at meiosis. In a
large population this will not have much effect in each generation
because the random nature of the process will tend to average out. But
in a small population the effect could be rapid and significant.
Suzuki et al. explain it as well as anyone I've seen:

"If a population is finite in size (as all populations are) and if a
given pair of parents have only a small number of offspring, then even
in the absence of all selective forces, the frequency of a gene will not
be exactly reproduced in the next generation because of sampling error.
If in a population of 1000 individuals the frequency of "a" is 0.5 in
one generation, then it may by chance be 0.493 or 0.505 in the next
generation because of the chance production of a few more or less
progeny of each genotype. In the second generation, there is another
sampling error based on the new gene frequency, so the frequency of "a"
may go from 0.505 to 0.501 or back to 0.498. This process of random
fluctuation continues generation after generation, with no force pushing
the frequency back to its initial state because the population has no
"genetic memory" of its state many generations ago. Each generation is
an independent event. The final result of this random change in allele
frequency is that the population eventually drifts to p=1 or p=0. After
this point, no further change is possible; the population has become
homozygous. A different population, isolated from the first, also
undergoes this random genetic drift, but it may become homozygous for
allele "A", whereas the first population has become homozygous for
allele "a". As time goes on, isolated populations diverge from each
other, each losing heterozygosity. The variation originally present
within populations now appears as variation between populations."
(Suzuki, D.T., Griffiths, A.J.F., Miller, J.H. and Lewontin, R.C. in An
Introduction to Genetic Analysis 4th ed. W.H. Freeman 1989 p.704)
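
As a concrete illustration of the binomial sampling described above, here is a
minimal sketch (mine, not Moran's or Suzuki's) that resamples 2N gene copies
each generation from the current allele frequency and reports how long it
takes the population to drift to fixation or loss. The population sizes and
starting frequency are arbitrary choices for illustration.

    import random

    def drift_to_fixation(pop_size, p0=0.5, max_gens=100_000):
        # Each generation, draw 2N gene copies from a pool whose allele
        # frequency is that of the previous generation (binomial sampling).
        p = p0
        for gen in range(max_gens):
            if p == 0.0 or p == 1.0:
                return gen, p          # allele lost (p=0) or fixed (p=1)
            copies = sum(random.random() < p for _ in range(2 * pop_size))
            p = copies / (2 * pop_size)
        return max_gens, p

    for n in (10, 100, 1000):
        gens, p = drift_to_fixation(n)
        print(f"N = {n:4d}: drifted to p = {p:.0f} after {gens} generations")

Smaller populations reach p=1 or p=0 in far fewer generations, which is the
"rapid and significant" small-population effect mentioned above, and also why
drift matters most in small demes.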
Of course random genetic drift is not limited to species that have few
offspring, such as humans. In the case of flowering plants, for example,
the stochastic element is the probability of a given seed falling on
fertile ground, while in the case of some fish and frogs it is the result
of chance events which determine whether a newly hatched individual will
survive. Drift is also not confined to diploid genetics; it can explain
why we all have mitochondria that are descended from those of a single
woman who lived hundreds of thousands of years ago.

"This does not mean that there was a single female from whom we are all
descended, but rather that out of a population numbering perhaps several
thousand, by chance, only one set of mitochondrial genes was passed on.
(This finding, perhaps the most surprising to us, is the least disputed
by population geneticists and others familiar with genetic drift and
other manifestations of the laws of probability.)" (Curtis, H. and
Barnes, N.S. in Biology 5th ed. Worth Publishers 1989 p. 1050.)

But random genetic drift is even more than this. It also refers to
accidental random events that influence allele frequency. For example:
"Chance events can cause the frequencies of alleles in a small
population to drift randomly from generation to generation. For example,
consider what would happen if [a] wildflower population... consisted
of only 25 plants.

Assume that 16 of the plants have the genotype AA for flower color, 8
are Aa, and only 1 is aa. Now imagine that three of the plants are
accidentally destroyed by a rock slide before they have a chance to
reproduce. By chance, all three plants lost from the population could be
AA individuals. The event would alter the relative frequency of the two
alleles for flower color in subsequent generations. This is a case of
microevolution caused by genetic drift.
"Disasters such as earthquakes, floods, or fires may reduce the size of
a population drastically, killing victims unselectively. The result is
that the small surviving population is unlikely to be representative of
the original population in its genetic makeup - a situation known as the
bottleneck effect. Genetic drift caused by bottlenecking may have
been important in the early evolution of human populations when
calamities decimated tribes. The gene pool of each surviving population
may have been, just by chance, quite different from that of the larger
population that predated the catastrophe." (Campbell, N.A. in Biology
2nd ed. Benjamin/Cummings 1990 p.443)
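
To put numbers on the quoted wildflower example, here is a short worked
calculation (my arithmetic, not Campbell's): with 16 AA, 8 Aa and 1 aa plants,
the frequency of A is (2*16 + 8)/50 = 0.80; lose three AA plants to the rock
slide and it drops to (2*13 + 8)/44, roughly 0.77, with "a" rising from 0.20
to about 0.23.

    def allele_freqs(n_AA, n_Aa, n_aa):
        # Allele frequencies of A and a from genotype counts in a diploid
        # population: each individual carries two gene copies.
        total_copies = 2 * (n_AA + n_Aa + n_aa)
        p_A = (2 * n_AA + n_Aa) / total_copies
        return p_A, 1 - p_A

    print("before rock slide:", allele_freqs(16, 8, 1))  # (0.80, 0.20)
    print("after rock slide: ", allele_freqs(13, 8, 1))  # (~0.77, ~0.23)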

Several examples of bottlenecks have been inferred from genetic data.
For example, there is very little genetic variation in the cheetah
population. This is consistent with a reduction in the size of the
population to only a few individuals - an event that probably occurred
several thousand years ago. An observed example is the northern elephant
seal, which was hunted almost to extinction. By 1890 there were fewer
than 20 animals, but the population now numbers more than 30,000. As
predicted, there is very little genetic variation in the elephant seal
population, and it is likely that the twenty animals that survived the
slaughter were more "lucky" than "fit".

Another example of genetic drift is known as the founder effect. In this
case a small group breaks off from a larger population and forms a new
population. This effect is well known in human populations:

"The founder effect is probably responsible for the virtually complete
lack of blood group B in American Indians, whose ancestors arrived in
very small numbers across the Bering Strait during the end of the last
Ice Age, about 10,000 years ago. More recent examples are seen in
religious isolates like the Dunkers and Old Order Amish of North
America. These sects were founded by small numbers of migrants from
their much larger congregations in central Europe. They have since
remained nearly completely closed to immigration from the surrounding
American population. As a result, their blood group gene frequencies are
quite different from those in the surrounding populations, both in
Europe and in North America.

"The process of genetic drift should sound familiar. It is, in fact,
another way of looking at the inbreeding effect in small populations...
Whether regarded as inbreeding or as random sampling of genes, the
effect is the same. Populations do not exactly reproduce their genetic
constitutions; there is a random component of gene-frequency change."
(Suzuki et al. op. cit.)

There are many well-studied examples of the founder effect. All of the
cattle on Iceland, for example, are descended from a small group that
was brought to the island more than one thousand years ago. The genetic
make-up of the Icelandic cattle is now different from that of their
cousins in Norway, but the differences agree well with those predicted
by genetic drift. Similarly, there are many Pacific islands that have
been colonized by small numbers of fruit flies (perhaps one female), and
the genetics of these populations is consistent with drift models.

Thus, it is wrong to consider natural selection as the ONLY mechanism of
evolution, and it is also wrong to claim that natural selection is the
predominant mechanism. This point is made in many genetics and evolution
textbooks, for example:

"In any population, some proportion of loci are fixed at a selectively
unfavorable allele because the intensity of selection is insufficient to
overcome the random drift to fixation. Very great skepticism should be
maintained toward naive theories about evolution that assume that
populations always or nearly always reach an optimal constitution under
selection. The existence of multiple adaptive peaks and the random
fixation of less fit alleles are integral features of the evolutionary
process. Natural selection cannot be relied on to produce the best of
all possible worlds." (Suzuki, D.T., Griffiths, A.J.F., Miller, J.H. and
Lewontin, R.C. in An Introduction to Genetic Analysis 4th ed., W.H.
Freeman, New York 1989)

And:
"One of the most important and controversial issues in population
genetics is concerned with the relative importance of genetic drift and
natural selection in determining evolutionary change. The key question
at stake is whether the immense genetic variety which is observable in
populations of all species is inconsequential to survival and
reproduction (ie. is neutral), in which case drift will be the main
determinant, or whether most gene substitutions do affect fitness, in
which case natural selection is the main driving force. The arguments
over this issue have been intense during the past half-century and are
little nearer resolution, though some would say that the drift case has
become progressively stronger. Drift by its very nature cannot be
positively demonstrated. To do this it would be necessary to show that
selection has definitely NOT operated, which is impossible. Much
indirect evidence has been obtained, however, which purports to favour
the drift position. Firstly, and in many ways most persuasively, is the
molecular and biochemical evidence..." (Harrison, G.A., Tanner, J.M.,
Pilbeam, D.R. and Baker, P.T. in Human Biology 3rd ed. Oxford University
Press 1988 pp 214-215)
The book by Harrison et al. is quite interesting because it goes on for
several pages discussing the controversy. The authors point out that it
is very difficult to find clear evidence of selection in humans (the
sickle cell allele is a notable exception). In fact, it is difficult to
find good evidence for selection in most organisms - most of the
arguments are after the fact (but probably correct)!

The relative importance of drift and selection depends, in part, on
estimated population sizes. Drift is much more important in small
populations. It is important to remember that most species consist of
numerous smaller inbreeding populations called "demes". It is these
demes that evolve.

Studies of evolution at the molecular level have provided strong support
for drift as a major mechanism of evolution. Observed mutations at the
level of the gene are mostly neutral and not subject to selection. One of
the major controversies in evolutionary biology is the
neutralist-selectionist debate over the importance of neutral mutations.
Since the only way for neutral mutations to become fixed in a population
is through genetic drift this controversy is actually over the relative
importance of drift and natural selection.



Evolution and Selection of Quantitative Traits

Quantitative traits, be they morphological or physiological characters, aspects of behavior, or genome-level features such as the amount of RNA or protein expression for a specific gene, usually show considerable variation within and among populations. Quantitative genetics, also referred to as the genetics of complex traits, is the study of such characters and is based on mathematical models of evolution in which many genes influence the trait and in which non-genetic factors may also be important.

Evolution and Selection of Quantitative Traits presents a holistic treatment of the subject, showing the interplay between theory and data, with extensive discussions of statistical issues relating to the estimation of the biologically relevant parameters for these models. Quantitative genetics is viewed as the bridge between complex mathematical models of trait evolution and real-world data, and the authors have clearly framed their treatment as such. This is the second volume in a planned trilogy that summarizes the modern field of quantitative genetics, informed by empirical observations from wide-ranging fields (agriculture, evolution, ecology, and human biology) as well as population genetics, statistical theory, mathematical modeling, genetics, and genomics. Whilst volume 1 (1998) dealt with the genetics of such traits, the main focus of volume 2 is on their evolution, with a special emphasis on detecting selection (ranging from the use of genomic and historical data through to ecological field data) and examining its consequences.


