12: Beyond Birth-Death models - Biology


In this chapter I discussed models that go beyond constant rate birth-death models. In some cases, very different models predict the same pattern in phylogenetic trees, warranting some caution until direct fossil data can be incorporated. I also described a model of protracted speciation, where speciation takes some time to complete. This latter model is potentially better connected to microevolutionary models of speciation, and could point towards fruitful directions for the field. We know that simple birth-death models do not capture the richness of speciation and extinction across the tree of life, so these models that range beyond birth and death are critical to the growth of comparative methods.

  • 12.1: Capturing Variable Evolution
    Simple, constant-rate birth-death models are not adequate to capture the complexity and dynamics of speciation and extinction across the tree of life. Speciation and extinction rates vary through time, across clades, and among geographic regions. We can sometimes predict this variation based on what we know about the mechanisms that lead to speciation and/or extinction. This chapter explores some extensions to birth-death models that allow us to explore diversification in more detail.
  • 12.2: Variation in Diversification Rates across Clades
    We know from analyses of tree balance that the tree of life is more imbalanced than birth-death models predict. We can explore this variation in diversification rates by allowing birth and death models to vary along branches in phylogenetic trees. The simplest scenario is when one has a particular prediction about diversification rates to test. For example, we might wonder if diversification rates in one clade are higher than in the rest of the phylogenetic tree.
  • 12.3: Variation in Diversification Rates through Time
    In addition to considering rate variation across clades, we might also wonder whether birth and/or death rates have changed through time. For example, perhaps we think our clade is an adaptive radiation that experienced rapid diversification upon arrival to an island archipelago and slowed as this new adaptive zone got filled. This hypothesis is an example of density-dependent diversification, where diversification rate depends on the number of lineages that are present.
  • 12.4: Diversity-Dependent Models
    Time-dependent models in the previous section are often used as a proxy to capture processes like key innovations or adaptive radiations. Many of these theories suggest that diversification rates should depend on the number of species alive at a certain time or place, rather than on time itself. Therefore, we might want to define the speciation rate in a truly diversity-dependent manner rather than using time as a proxy.
  • 12.5: Protracted Speciation
    In all of the diversification models that we have considered so far, speciation happens instantly; one moment we have a single species, and then immediately two. But this is not biologically plausible. Speciation takes time, as evidenced by the increasing numbers of partially distinct populations that biologists have identified in the natural world. Furthermore, the fact that speciation takes time can have a profound impact on the shapes of phylogenetic trees.
  • 12.S: Beyond Birth-Death Models (Summary)
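
The diversity-dependent idea in 12.4 can be made concrete with a small stochastic simulation. The sketch below is illustrative only (all parameter values are hypothetical, not drawn from the chapter): the per-lineage speciation rate declines linearly to zero as the lineage count approaches a ceiling k, so diversity saturates rather than growing without bound.

```python
import random

def simulate_diversity_dependent(lam0=1.0, mu=0.1, k=50, t_max=100.0, seed=1):
    """Diversity-dependent birth-death process: the per-lineage speciation
    rate falls linearly to zero as the lineage count n approaches k, while
    the extinction rate mu stays constant. Returns the final lineage count."""
    rng = random.Random(seed)
    n, t = 2, 0.0
    while 0 < n and t < t_max:
        lam = max(lam0 * (1.0 - n / k), 0.0)  # per-lineage speciation rate
        total = n * (lam + mu)                # total event rate
        t += rng.expovariate(total)           # waiting time to next event
        if rng.random() * total < n * lam:
            n += 1   # speciation
        else:
            n -= 1   # extinction
    return n
```

Because speciation shuts off entirely at n = k, the simulated clade fluctuates below the ceiling instead of diversifying indefinitely, which is the qualitative signature these models aim to capture.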

Classical mathematical models for description and prediction of experimental tumor growth

Despite internal complexity, tumor growth kinetics follow relatively simple laws that can be expressed as mathematical models. To explore this further, a quantitative analysis of the most classical of these models was performed. The models were assessed against data from two in vivo experimental systems: an ectopic syngeneic tumor (Lewis lung carcinoma) and an orthotopically xenografted human breast carcinoma. The goals were threefold: 1) to determine a statistical model for description of the measurement error, 2) to establish the descriptive power of each model, using several goodness-of-fit metrics and a study of parametric identifiability, and 3) to assess the models' ability to forecast future tumor growth. The models included in the study comprised the exponential, exponential-linear, power law, Gompertz, logistic, generalized logistic, von Bertalanffy and a model with dynamic carrying capacity. For the breast data, the dynamics were best captured by the Gompertz and exponential-linear models. The latter also exhibited the highest predictive power, with excellent prediction scores (≥80%) extending out as far as 12 days in the future. For the lung data, the Gompertz and power law models provided the most parsimonious and parametrically identifiable description. However, not one of the models was able to achieve a substantial prediction rate (≥70%) beyond the next day data point. In this context, adjunction of a priori information on the parameter distribution led to considerable improvement. For instance, forecast success rates went from 14.9% to 62.7% when using the power law model to predict the full future tumor growth curves, using just three data points. These results not only have important implications for biological theories of tumor growth and the use of mathematical modeling in preclinical anti-cancer drug investigations, but also may assist in defining how mathematical models could serve as potential prognostic tools in the clinic.
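
As a concrete illustration of two of the models named in the abstract, the closed-form Gompertz and exponential growth curves can be compared directly. This is a minimal sketch with arbitrary parameter values, not the fitted values from the study:

```python
import math

def gompertz(t, v0=1.0, alpha=0.5, beta=0.1):
    """Gompertz growth: dV/dt = alpha * exp(-beta*t) * V, with closed
    form V(t) = v0 * exp((alpha/beta) * (1 - exp(-beta*t)))."""
    return v0 * math.exp((alpha / beta) * (1.0 - math.exp(-beta * t)))

def exponential(t, v0=1.0, alpha=0.5):
    """Unbounded exponential growth: V(t) = v0 * exp(alpha*t)."""
    return v0 * math.exp(alpha * t)

# The Gompertz curve saturates at v0 * exp(alpha/beta) as its growth rate
# decays exponentially in time; the exponential model never saturates.
```

The key qualitative difference the study exploits is visible here: for any fixed parameters the exponential curve eventually exceeds the Gompertz curve, which levels off at a finite plateau.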

Conflict of interest statement

The authors have declared that no competing interests exist.


Figure 1. Volume measurement error.

A. First measured volume y1 plotted against the second measurement y2, together with the regression line (correlation coefficient R = 0.98, regression slope = 0.96). B. Error plotted against the volume approximated by the average of the two measurements; a χ² test rejected a Gaussian error distribution of constant variance. C. Histogram of the normalized error under the proposed error model with α = 0.84 and Vm = 83 mm³; it is consistent with a Gaussian distribution (χ² test not rejected, p = 0.196).

Figure 2. Descriptive power of the models for lung and breast tumor data.

Figure 3. Examples of predictive power.

Representative examples of the forecast performances of the models…

Figure 4. Prediction depth and number of data points.

Predictive power of some representative models depending on the number of data points used for estimation of the parameters (n) and the prediction depth in the future (d). Top: at position (n, d), the color represents the percentage of successfully predicted animals when using n data points to forecast the time point, as quantified by the score (multiplied by 100) defined in (17). This proportion includes only animals with measurements at both time points, so values at different depths d within the same column n (or vice versa) may represent predictions in different animals. White squares correspond to situations where this number was too low (<5); the success score was then considered not significant and was not reported. Bottom: distribution of the relative prediction error (20), with all animals and (n, d) pairs pooled. Models are ranked in ascending order of the overall mean success score reported in Tables 5 and 6. A. Lung tumor data. B. Breast tumor data.

Figure 5. A priori information and improvement of prediction success rates.

Predictions were considered when randomly dividing the animals into two equal groups, one used for learning the parameter distributions and the other for prediction, using n = 3 data points. Success rates are reported as mean ± standard deviation over 100 random partitions into two groups. A. Prediction of the global future curve, quantified by the score (see Materials and Methods, Models predictions methods, for its definition). B. Benefit of the method for prediction of the next day, using three data points. C. Prediction improvement at various prediction depths, using the power law model (lung data) or the exponential-linear model (breast data). Due to a lack of animals to be predicted for some of the random assignments, results at depths 2, 4, 6 and 9 for the breast data were not considered significant and were not reported (see Materials and Methods). * = p < 0.05, ** = p < 0.001, Student's t-test.

Birth/birth-death processes and their computable transition probabilities with biological applications

Birth-death processes track the size of a univariate population, but many biological systems involve interaction between populations, necessitating models for two or more populations simultaneously. A lack of efficient methods for evaluating finite-time transition probabilities of bivariate processes, however, has restricted statistical inference in these models. Researchers rely on computationally expensive methods such as matrix exponentiation or Monte Carlo approximation, restricting likelihood-based inference to small systems, or indirect methods such as approximate Bayesian computation. In this paper, we introduce the birth/birth-death process, a tractable bivariate extension of the birth-death process, where rates are allowed to be nonlinear. We develop an efficient algorithm to calculate its transition probabilities using a continued fraction representation of their Laplace transforms. Next, we identify several exemplary models arising in molecular epidemiology, macro-parasite evolution, and infectious disease modeling that fall within this class, and demonstrate advantages of our proposed method over existing approaches to inference in these models. Notably, the ubiquitous stochastic susceptible-infectious-removed (SIR) model falls within this class, and we emphasize that computable transition probabilities newly enable direct inference of parameters in the SIR model. We also propose a very fast method for approximating the transition probabilities under the SIR model via a novel branching process simplification, and compare it to the continued fraction representation method with application to the 17th century plague in Eyam. Although the two methods produce similar maximum a posteriori estimates, the branching process approximation fails to capture the correlation structure in the joint posterior distribution.
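
For readers who want a feel for the stochastic SIR dynamics mentioned above, direct simulation is the simple (if computationally expensive) baseline the paper contrasts with. The sketch below is a standard Gillespie-style simulation with mass-action rates and arbitrary parameters; it is not the continued-fraction method proposed in the paper:

```python
import random

def gillespie_sir(s, i, beta, gamma, seed=0):
    """Stochastic SIR via the Gillespie algorithm with mass-action rates.
    Infection: S + I -> 2I at rate beta*S*I; recovery: I -> R at rate
    gamma*I. Returns (final_susceptibles, total_removed)."""
    rng = random.Random(seed)
    removed = 0
    while i > 0:
        rate_inf = beta * s * i
        rate_rec = gamma * i
        total = rate_inf + rate_rec
        # Waiting time to the next event (exponential); not needed for the
        # final sizes but part of the Gillespie scheme.
        _dt = rng.expovariate(total)
        if rng.random() * total < rate_inf:
            s -= 1
            i += 1   # infection event
        else:
            i -= 1
            removed += 1   # recovery/removal event
    return s, removed
```

Likelihood-based inference built on repeated runs of such simulations is exactly what becomes prohibitive for large populations, which motivates the computable transition probabilities developed in the paper.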


Models in Biology Research

Scientists primarily use models in two ways. First and foremost, models are used to increase our understanding about the world through evidence-based testing. To evaluate the merits and limitations of a model, it must be challenged with empirical data. Models that are inconsistent with empirical evidence must be either revised or discarded. In this way, modeling is a metacognitive tool used in the hypothesis-testing approach of the scientific method (Platt, 1964). Second, scientists use models to communicate and explain their findings to others. This allows the broader scientific community to further challenge and revise the model. Furthermore, this dynamic quality of scientific models allows researchers to test, retest, and ultimately gain new understanding and insight.

Biologists use models in nearly every facet of scientific inquiry, research, and communication. Models are helpful tools for representing ideas and explanations and are used widely by scientists to help describe, understand, and predict processes occurring in the natural world. All models highlight certain salient features of a system while minimizing the roles of others (Starfield et al., 1990; Hoskinson et al., 2014). By nature of their utility, models can take many forms based on how they are created, used, or communicated. After reflecting on the types of models we use in our daily work as biological researchers, we have identified three main categories of models used regularly in scientific practice: concrete, conceptual, and mathematical (Figure 1).

Scientific models may be concrete (physical representations in 2D or 3D), mathematical (expressed symbolically or graphically), or conceptual (communicated verbally, symbolically, or visually). Concrete models can be simplified representations of a system (a) or working-scale prototypes (b). Mathematical models can be descriptive or predictive, and empirical or mechanistic. A descriptive model, such as a regression line, depicts a pattern of association that is derived from empirical data (c), whereas a predictive model uses equations to represent a mechanistic understanding of a process (d); each can be expressed both symbolically and visually. Conceptual models focus on an understanding of how a process works and can be expressed as visual (e) or symbolic (f) representations as well as through verbal descriptions or analogies (g).


Development of scientific models of one type can prompt and inform models of other types. For example, Watson and Crick developed a physical model of DNA to help determine how different nucleotide bases can pair to produce a double-helix structure (Figure 1b), which in turn suggested a conceptual model for DNA replication (Watson & Crick, 1953). Jacques Monod's observation of a “double growth curve” of bacteria that deviated from the expected exponential growth model led to the development of a new, more accurate model of cellular regulation of gene expression (Figure 1e; Jacob & Monod, 1961). Ecologists James Estes and John Palmisano developed conceptual models of population growth and decline among marine predator–prey species (Figure 1g) on the way to creating mathematical models of sea otter, sea urchin, and kelp dynamics along the Alaskan coast (Estes & Palmisano, 1974).


Sequencing of numerous genomes from all walks of life, including multiple representatives of diverse lineages of bacteria, archaea and eukaryotes, creates unprecedented opportunities for comparative-genomic studies [1-3]. One of the mainstream approaches of genomics is comparative analysis of the protein or domain composition of predicted proteomes [2,4,5]. These studies often concentrate on domains rather than entire proteins because many proteins have variable multidomain architectures, particularly in complex eukaryotes (throughout this work, we use the term domain to designate a distinct evolutionary unit of proteins, which can occur either in stand-alone form or as part of multidomain architectures; often, but not necessarily, such a unit corresponds to a structural domain). As soon as genome sequences of bacteria became available, it was shown that a substantial fraction of the genome of each species, from approximately one third in bacteria with very small genomes to a significant majority in species with larger genomes, consists of families of paralogs, genes that evolved via gene duplication at different stages of evolution [6-9]. Again, a comprehensive analysis of paralogous relationships between genes is probably best performed at the level of individual protein domains: first, because many proteins share only a subset of common domains, and second, because domains can be conveniently and reasonably accurately detected using available collections of domain-specific sequence profiles [10-12]. Comparisons of domain repertoires revealed both substantial similarities between different species, particularly with respect to the relative abundance of house-keeping domains, and major differences [4,5]. The most notable manifestation of such differences is lineage-specific expansion of protein/domain families, which probably points to unique adaptations [13,14]. Furthermore, it has been demonstrated that more complex organisms, e.g. vertebrates, have a greater variety of domains and, in general, more complex domain architectures of proteins than simpler life forms [1,2].

Lineage-specific expansions and gene loss events detected as the result of comparative analysis of the domain compositions of different proteomes have been examined mostly at a qualitative level, in terms of the underlying biological phenomena, such as adaptation associated with expansion or coordinated loss of functionally linked sets of genes [15]. A complementary approach involves quantitative comparative analysis of the frequency distributions of proteins or domains in different proteomes. Several studies pointed out that these distributions appeared to fit the power law P(i) ≈ ci^(−γ), where P(i) is the frequency of domain families including exactly i members, c is a normalization constant and γ is a parameter that typically assumes values between 1 and 3 [16-19]. Obviously, in double-logarithmic coordinates, the plot of P as a function of i is a straight line with a negative slope. Power laws appear in numerous biological, physical and other contexts, which seem to be fundamentally different, e.g. distribution of the number of links between documents in the Internet, the population of towns or the number of species that become extinct within a year. The famous Pareto law in economics describing the distribution of people by their income and the Zipf law in linguistics describing the frequency distribution of words in texts belong in the same category [20-29]. Recent studies suggested that power laws apply to the distributions of a remarkably wide range of genome-associated quantities, including the number of transcripts per gene, the number of interactions per protein, the number of genes or pseudogenes in paralogous families and others [30].

Power law distributions are scale-free, i.e. the shape of the distribution remains the same regardless of scaling of the analyzed variable. In particular, scale-free behavior has been described for networks of diverse nature, e.g. the metabolic pathways of an organism or infectious contacts during an epidemic spread [20,25-27]. The principal pattern of network evolution that ensures the emergence of power distributions (and, accordingly, scale-free properties) is preferential attachment, whereby the probability of a node acquiring a new connection increases with the number of connections this node already has.

However, a recent thorough study suggested that many biological quantities claimed to follow power laws are, in fact, better described by the so-called generalized Pareto function P(i) = c(i + a)^(−γ), where a is an additional parameter [31]. Obviously, although for i ≫ a a generalized Pareto distribution becomes indistinguishable from a power law, at small i it deviates significantly, with the magnitude of the deviation depending on the value of a. Furthermore, unlike power law distributions, generalized Pareto distributions do not show scale-free properties.
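
The difference between the two functional forms is easy to see numerically. In this sketch (with arbitrary c, a and γ, chosen for illustration), the ratio of the two densities is large at small i but approaches 1 for i ≫ a:

```python
def power_law(i, c=1.0, gamma=2.0):
    """Power law: P(i) = c * i**(-gamma)."""
    return c * i ** (-gamma)

def generalized_pareto(i, c=1.0, a=5.0, gamma=2.0):
    """Generalized Pareto: P(i) = c * (i + a)**(-gamma)."""
    return c * (i + a) ** (-gamma)

# At i = 1 the two differ by a factor of (1 + a)**gamma = 36;
# far out in the tail the ratio approaches 1.
ratio_small = power_law(1) / generalized_pareto(1)
ratio_large = power_law(10000) / generalized_pareto(10000)
```

This is why fits restricted to the tail cannot distinguish the two models: the discriminating behavior, and the loss of scale-free structure, lives entirely at small family sizes.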

The importance of the analysis of frequency distributions of domains or proteins lies in the fact that distinct forms of such distributions can be linked to specific models of evolution. Therefore, by exploring the distributions, inferences potentially can be made on the mode and parameters of genome evolution. For this purpose, the connections between domain frequency distributions and evolutionary models need to be explored theoretically within a maximally general class of models. In this work, we undertake such a mathematical analysis using simple models of evolution, which include duplication (birth), elimination (death) and de novo emergence (innovation) of domains as elementary processes (hereinafter BDIM, birth- death- innovation models). All asymptotics of equilibrium frequencies of domain families of different size possible for BDIM are identified and their dependence on the parameters of the model is investigated. In particular, analytical conditions on birth and death rates that produce power asymptotics are determined. We prove that the power law asymptotics appears if, and only if, the model is balanced, i.e. domain duplication and deletion rates are asymptotically equal up to the second order, and that any power asymptotic with the degree not equal to -1 can appear only if the assumption of independence of the duplication/deletion rates on the size of a domain family is rejected. We apply the developed formalism to the analysis of the frequency distributions of domains in individual prokaryotic and eukaryotic genomes and show a good fit of these data to a particular version of the model, the second-order balanced linear BDIM.

Theoretical ecologists have long sought to understand how the persistence of populations depends on biotic and abiotic factors. Classical work showed that demographic stochasticity causes the mean time to extinction to increase exponentially with population size, whereas variation in environmental conditions can lead to a power-law scaling. Recent work has focused especially on the influence of the autocorrelation structure (‘color’) of environmental noise. In theoretical physics, there is a burst of research activity in analyzing large fluctuations in stochastic population dynamics. This research provides powerful tools for determining extinction times and characterizing the pathway to extinction. It yields, therefore, sharp insights into extinction processes and has great potential for further applications in theoretical biology.


Human mini-brain models

Engineered human mini-brains, made possible by knowledge from the convergence of precision microengineering and cell biology, permit systematic studies of complex neurological processes and of pathogenesis beyond what can be done with animal models. By culturing human brain cells with physiological microenvironmental cues, human mini-brain models reconstitute the arrangement of structural tissues and some of the complex biological functions of the human brain. In this Review, we highlight the most significant developments that have led to microphysiological human mini-brain models. We introduce the history of mini-brain development, review methods for creating mini-brain models in static conditions, and discuss relevant state-of-the-art dynamic cell-culture systems. We also review human mini-brain models that reconstruct aspects of major neurological disorders under static or dynamic conditions. Engineered human mini-brains will contribute to advancing the study of the physiology and aetiology of neurological disorders, and to the development of personalized medicines for them.


A mathematical model is a logical machine for converting assumptions into conclusions. If the model is correct and we believe its assumptions then we must, as a matter of logic, believe its conclusions. This logical guarantee allows a modeler, in principle, to navigate with confidence far from the assumptions, perhaps much further than intuition, no matter how insightful, might allow, and reach surprising conclusions. But, and this is the essential point, the certainty is always relative to the assumptions. Do we believe our assumptions? We believe fundamental physics on which biology rests. We can deduce many things from physics but not, alas, the existence of physicists. This leaves us, at least in the molecular realm, in the hands of phenomenology and informed guesswork. There is nothing wrong with that but we should not fool ourselves that our models are objective and predictive, in the sense of fundamental physics. They are, in James Black’s resonant phrase, ‘accurate descriptions of our pathetic thinking’.

Mathematical models are a tool, which some biologists have used to great effect. My distinguished Harvard colleague, Edward Wilson, has tried to reassure the mathematically phobic that they can still do good science without mathematics [65]. Absolutely, but why not use it when you can? Biology is complicated enough that we surely need every tool at our disposal. For those so minded, the perspective developed here suggests the following guidelines:

Ask a question. Building models for the sake of doing so might keep mathematicians happy but it is a poor way to do biology. Asking a question guides the choice of assumptions and the flavor of model and provides a criterion by which success can be judged.

Keep it simple. Including all the biochemical details may reassure biologists but it is a poor way to model. Keep the complexity of the assumptions in line with the experimental context and try to find the right abstractions.

If the model cannot be falsified, it is not telling you anything. Fitting is the bane of modeling. It deludes us into believing that we have predicted what we have fitted, when all we have done is to select the model so that it fits. So, do not fit what you want to explain; stick the model’s neck out after it is fitted and try to falsify it.

In later life, Charles Darwin looked back on his early repugnance for mathematics, the fault of a teacher who was ‘a very dull man’, and said, ‘I have deeply regretted that I did not proceed far enough at least to understand something of the great leading principles of mathematics, for men thus endowed seem to have an extra sense’ [66]. One of those people with an extra sense was an Augustinian friar, toiling in the provincial obscurity of Austro-Hungarian Brünn, teaching physics in the local school while laying the foundations for rescuing Darwin’s theory from oblivion [67], a task later accomplished, in the hands of J. B. S. Haldane, R. A. Fisher and Sewall Wright, largely by mathematics. Darwin and Mendel represent the qualitative and quantitative traditions in biology. It is a historical tragedy that they never came together in their lifetimes. If we are going to make sense of systems biology, we shall have to do a lot better.

19.2 Population Growth and Regulation

Population ecologists make use of a variety of methods to model population dynamics. An accurate model should be able to describe the changes occurring in a population and predict future changes.

Population Growth

The two simplest models of population growth use deterministic equations (equations that do not account for random events) to describe the rate of change in the size of a population over time. The first of these models, exponential growth, describes theoretical populations that increase in numbers without any limits to their growth. The second model, logistic growth, introduces limits to reproductive growth that become more intense as the population size increases. Neither model adequately describes natural populations, but they provide points of comparison.

Exponential Growth

Charles Darwin, in his theory of natural selection, was greatly influenced by the English clergyman Thomas Malthus. Malthus published a book in 1798 stating that populations with unlimited natural resources grow very rapidly, which represents exponential growth, and that population growth then decreases as resources become depleted, indicating logistic growth.

The best example of exponential growth in organisms is seen in bacteria. Bacteria are prokaryotes that reproduce largely by binary fission. This division takes about an hour for many bacterial species. If 1000 bacteria are placed in a large flask with an abundant supply of nutrients (so the nutrients will not become quickly depleted), the number of bacteria will have doubled from 1000 to 2000 after just an hour. In another hour, each of the 2000 bacteria will divide, producing 4000 bacteria. After the third hour, there should be 8000 bacteria in the flask. The important concept of exponential growth is that the growth rate—the number of organisms added in each reproductive generation—is itself increasing; that is, the population size is increasing at a greater and greater rate. After 24 of these cycles, the population would have increased from 1000 to more than 16 billion bacteria. When the population size, N, is plotted over time, a J-shaped growth curve is produced (Figure 19.5a).

The bacteria-in-a-flask example is not truly representative of the real world, where resources are usually limited. However, when a species is introduced into a new habitat that it finds suitable, it may show exponential growth for a while. In the case of the bacteria in the flask, some bacteria will die during the experiment and thus not reproduce; therefore, the growth rate is lowered from a maximal rate at which there is no mortality. The growth rate of a population is largely determined by subtracting the death rate, D (the number of organisms that die during an interval), from the birth rate, B (the number of organisms that are born during an interval). The growth rate can be expressed in a simple equation that combines the birth and death rates into a single factor, r. This is shown in the following formula:

dN/dt = B − D = rN

The value of r can be positive, meaning the population is increasing in size (the rate of change is positive); negative, meaning the population is decreasing in size; or zero, in which case the population size is unchanging, a condition known as zero population growth.
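
The bacterial arithmetic above, and the continuous exponential model it illustrates, can be checked in a few lines. This is a minimal sketch; the per-hour rate r = ln 2 corresponds to doubling every hour:

```python
import math

def exponential_size(n0, r, t):
    """Exponential growth dN/dt = rN, so N(t) = N0 * exp(r*t)."""
    return n0 * math.exp(r * t)

# Doubling every hour for 24 hours: 1000 * 2**24 cells,
# i.e. 16,777,216,000 -- more than 16 billion.
final = 1000 * 2 ** 24
# The same result from the continuous model with r = ln(2) per hour:
continuous = exponential_size(1000, math.log(2.0), 24.0)
```

A positive, negative, or zero r in `exponential_size` yields growth, decline, or a constant population, matching the three cases described above.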

Logistic Growth

Extended exponential growth is possible only when infinite natural resources are available; this is not the case in the real world. Charles Darwin recognized this fact in his description of the “struggle for existence,” which states that individuals will compete (with members of their own or other species) for limited resources. The successful ones are more likely to survive and pass on the traits that made them successful to the next generation at a greater rate (natural selection). To model the reality of limited resources, population ecologists developed the logistic growth model.

Carrying Capacity and the Logistic Model

In the real world, with its limited resources, exponential growth cannot continue indefinitely. Exponential growth may occur in environments where there are few individuals and plentiful resources, but when the number of individuals gets large enough, resources will be depleted and the growth rate will slow down. Eventually, the growth rate will plateau or level off (Figure 19.5b). This population size, which is determined by the maximum population size that a particular environment can sustain, is called the carrying capacity , or K. In real populations, a growing population often overshoots its carrying capacity, and the death rate increases beyond the birth rate causing the population size to decline back to the carrying capacity or below it. Most populations usually fluctuate around the carrying capacity in an undulating fashion rather than existing right at it.

The formula used to calculate logistic growth adds the carrying capacity as a moderating force in the growth rate. The expression “K − N” is equal to the number of individuals that may be added to a population at a given time, and “K − N” divided by “K” is the fraction of the carrying capacity available for further growth. Thus, the exponential growth model is restricted by this factor to generate the logistic growth equation:

dN/dt = rN[(K − N)/K]

Notice that when N is almost zero the quantity in brackets is almost equal to 1 (or K/K) and growth is close to exponential. When the population size is equal to the carrying capacity, or N = K, the quantity in brackets is equal to zero and growth is equal to zero. A graph of this equation (logistic growth) yields the S-shaped curve (Figure 19.5b). It is a more realistic model of population growth than exponential growth. There are three different sections to an S-shaped curve. Initially, growth is exponential because there are few individuals and ample resources available. Then, as resources begin to become limited, the growth rate decreases. Finally, the growth rate levels off at the carrying capacity of the environment, with little change in population number over time.
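The three phases of the S-shaped curve can be reproduced with a short simulation. This is a minimal illustrative sketch, not from the text: the function name, parameter values, and Euler time step are all assumptions.

```python
# Euler integration of the logistic equation dN/dt = r*N*(K - N)/K.
# All names and parameter values are illustrative assumptions.
def logistic_growth(n0, r, k, steps, dt=0.1):
    """Project a population of initial size n0 forward under logistic growth."""
    n = float(n0)
    trajectory = [n]
    for _ in range(steps):
        n += r * n * (k - n) / k * dt  # growth slows as N approaches K
        trajectory.append(n)
    return trajectory

traj = logistic_growth(n0=10, r=0.5, k=1000, steps=200)
# Early growth is near-exponential; the curve then levels off just below K.
```

Plotting `traj` against time would yield the S-shaped curve described above: exponential at first, slowing as resources become limited, and flat near the carrying capacity.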

Role of Intraspecific Competition

The logistic model assumes that every individual within a population will have equal access to resources and, thus, an equal chance for survival. For plants, the amount of water, sunlight, nutrients, and space to grow are the important resources, whereas in animals, important resources include food, water, shelter, nesting space, and mates.

In the real world, phenotypic variation among individuals within a population means that some individuals will be better adapted to their environment than others. The resulting competition for resources among population members of the same species is termed intraspecific competition . Intraspecific competition may not affect populations that are well below their carrying capacity, as resources are plentiful and all individuals can obtain what they need. However, as population size increases, this competition intensifies. In addition, the accumulation of waste products can reduce carrying capacity in an environment.

Examples of Logistic Growth

Yeast, a microscopic fungus used to make bread and alcoholic beverages, exhibits the classical S-shaped curve when grown in a test tube (Figure 19.6a). Its growth levels off as the population depletes the nutrients that are necessary for its growth. In the real world, however, there are variations to this idealized curve. Examples in wild populations include sheep and harbor seals (Figure 19.6b). In both examples, the population size exceeds the carrying capacity for short periods of time and then falls below the carrying capacity afterwards. This fluctuation in population size continues to occur as the population oscillates around its carrying capacity. Still, even with this oscillation, the overall pattern matches the logistic model.

Visual Connection

If the major food source of seals declines due to pollution or overfishing, which of the following would likely occur?

  1. The carrying capacity of seals would decrease, as would the seal population.
  2. The carrying capacity of seals would decrease, but the seal population would remain the same.
  3. The number of seal deaths would increase, but the number of births would also increase, so the population size would remain the same.
  4. The carrying capacity of seals would remain the same, but the population of seals would decrease.

Population Dynamics and Regulation

The logistic model of population growth, while valid in many natural populations and a useful model, is a simplification of real-world population dynamics. Implicit in the model is that the carrying capacity of the environment does not change, which is not the case. The carrying capacity varies annually. For example, some summers are hot and dry whereas others are cold and wet; in many areas, the carrying capacity during the winter is much lower than it is during the summer. Also, natural events such as earthquakes, volcanoes, and fires can alter an environment and hence its carrying capacity. Additionally, populations do not usually exist in isolation. They share the environment with other species, competing with them for the same resources (interspecific competition). These factors are also important to understanding how a specific population will grow.

Population growth is regulated in a variety of ways. These are grouped into density-dependent factors, in which the density of the population affects growth rate and mortality, and density-independent factors, which cause mortality in a population regardless of population density. Wildlife biologists, in particular, want to understand both types because this helps them manage populations and prevent extinction or overpopulation.

Density-dependent Regulation

Most density-dependent factors are biological in nature and include predation, inter- and intraspecific competition, and parasites. Usually, the denser a population is, the greater its mortality rate. For example, during intra- and interspecific competition, the reproductive rates of the species will usually be lower, reducing their populations’ rate of growth. In addition, low prey density increases the mortality of its predator because it has more difficulty locating its food source. Also, when the population is denser, diseases spread more rapidly among the members of the population, which affects the mortality rate.

Density-dependent regulation was studied in a natural experiment with wild donkey populations on two sites in Australia. 2 On one site the population was reduced by a population control program; the population on the other site received no interference. The high-density plot was twice as dense as the low-density plot. From 1986 to 1987 the high-density plot saw no change in donkey density, while the low-density plot saw an increase in donkey density. The difference in the growth rates of the two populations was caused by mortality, not by a difference in birth rates. The researchers found that the number of offspring birthed by each mother was unaffected by density. Growth rates in the two populations differed mostly because of juvenile mortality caused by the mothers’ malnutrition due to scarce high-quality food in the dense population. Figure 19.7 shows the difference in age-specific mortalities in the two populations.

Density-independent Regulation and Interaction with Density-dependent Factors

Many factors that are typically physical in nature cause mortality of a population regardless of its density. These factors include weather, natural disasters, and pollution. An individual deer will be killed in a forest fire regardless of how many deer happen to be in that area. Its chances of survival are the same whether the population density is high or low. The same holds true for cold winter weather.

In real-life situations, population regulation is very complicated and density-dependent and independent factors can interact. A dense population that suffers mortality from a density-independent cause will be able to recover differently than a sparse population. For example, a population of deer affected by a harsh winter will recover faster if there are more deer remaining to reproduce.

Evolution Connection

Why Did the Woolly Mammoth Go Extinct?

Woolly mammoths began to go extinct about 10,000 years ago, soon after paleontologists believe humans capable of hunting them began to colonize North America and northern Eurasia (Figure 19.8). A mammoth population survived on Wrangel Island, in the East Siberian Sea, and remained isolated from human contact until as recently as 1700 BC. We know a lot about these animals from carcasses found frozen in the ice of Siberia and other northern regions.

It is commonly thought that climate change and human hunting led to their extinction. A 2008 study estimated that climate change reduced the mammoth’s range from 3,000,000 square miles 42,000 years ago to 310,000 square miles 6,000 years ago. 3 Through archaeological evidence of kill sites, it is also well documented that humans hunted these animals. A 2012 study concluded that no single factor was exclusively responsible for the extinction of these magnificent creatures. 4 In addition to climate change and reduction of habitat, scientists demonstrated that another important factor in the mammoth’s extinction was the migration of human hunters across the Bering Strait to North America during the last ice age, 20,000 years ago.

The maintenance of stable populations was, and is, very complex, with many interacting factors determining the outcome. It is important to remember that humans are also part of nature: even with only primitive hunting technology, we once contributed to a species’ decline.

Demographic-Based Population Models

Population ecologists have hypothesized that suites of characteristics may evolve in species that lead to particular adaptations to their environments. These adaptations impact the kind of population growth their species experience. Life history characteristics such as birth rates, age at first reproduction, the numbers of offspring, and even death rates evolve just like anatomy or behavior, leading to adaptations that affect population growth. Population ecologists have described a continuum of life-history “strategies” with K-selected species on one end and r-selected species on the other. K-selected species are adapted to stable, predictable environments. Populations of K-selected species tend to exist close to their carrying capacity. These species tend to have larger, but fewer, offspring and contribute large amounts of resources to each offspring. Elephants would be an example of a K-selected species. r-selected species are adapted to unstable and unpredictable environments. They have large numbers of small offspring. Animals that are r-selected do not provide a lot of resources or parental care to offspring, and the offspring are relatively self-sufficient at birth. Examples of r-selected species are marine invertebrates such as jellyfish and plants such as the dandelion. The two extreme strategies are at two ends of a continuum on which real species life histories will exist. In addition, life history strategies do not need to evolve as suites, but can evolve independently of each other, so each species may have some characteristics that trend toward one extreme or the other.

