All or nothing phenomena

We know that an excitable cell membrane produces an action potential when a stimulus reaches a certain threshold: when it does, an action potential fires; when it doesn't, nothing happens.

There are dozens of videos online painstakingly illustrating this point, using examples going from plumbing to pebbles falling down a paper towel.

But this is such a simple idea. There's no need for analogies. When I push a button, something moves; when I don't push it, nothing moves.

But what is the fundamental principle behind this? What biochemical device causes this all-or-nothing phenomenon?


The fundamental principle behind this is a "sensor", that is, excitation-contraction coupling, regardless of the tissue you look at. Let's consider striated (skeletal) muscle and cardiac muscle.

In skeletal muscle, excitation-contraction coupling functions as a voltage sensor that tells the SR, "We have got an action potential; release Ca2+ for contraction". This contrasts with heart muscle, where the sensor works like a Ca2+ channel (i.e. Ca2+ flows into cardiac muscle continuously). Similarly, excitation-contraction coupling in cardiac muscle translates the action potential into the production of tension. The individual steps of this process in cardiac muscle are:

    1. The cardiac action potential is initiated in the myocardial cell membrane, and the depolarization spreads to the interior of the cell via the T tubules. Recall that a unique feature of the cardiac action potential is its plateau (phase 2), which results from an increase in Ca2+ conductance and an inward Ca2+ current in which Ca2+ flows through L-type Ca2+ channels (dihydropyridine receptors) from extracellular fluid (ECF) to intracellular fluid (ICF).
    2. Entry of Ca2+ into the myocardial cell produces an increase in intracellular Ca2+ concentration. This increase in intracellular Ca2+ concentration is not sufficient alone to initiate contraction, but it triggers the release of more Ca2+ from stores in the sarcoplasmic reticulum through Ca2+ release channels (ryanodine receptors). This process is called Ca2+-induced Ca2+ release, and the Ca2+ that enters during the plateau of the action potential is called the trigger Ca2+. Two factors determine how much Ca2+ is released from the sarcoplasmic reticulum in this step: the amount of Ca2+ previously stored and the size of the inward Ca2+ current during the plateau of the action potential.
    3. and 4. Ca2+ release from the sarcoplasmic reticulum causes the intracellular Ca2+ concentration to increase even further. Ca2+ now binds to troponin C, tropomyosin is moved out of the way, and the interaction of actin and myosin can occur. Actin and myosin bind, cross-bridges form and then break, the thin and thick filaments move past each other, and tension is produced. Cross-bridge cycling continues as long as intracellular Ca2+ concentration is high enough to occupy the Ca2+-binding sites on troponin C.
    5. A critically important concept is that the magnitude of the tension developed by myocardial cells is proportional to the intracellular Ca2+ concentration. Therefore, it is reasonable that hormones, neurotransmitters, and drugs that alter the inward Ca2+ current during the action potential plateau or that alter sarcoplasmic reticulum Ca2+ stores would be expected to change the amount of tension produced by myocardial cells.

I like this figure on the topic, summarising the stimuli in smooth muscle:

[Figure omitted: summary of contractile stimuli in smooth muscle and in the heart]

In the gastrointestinal tract, there are again special cells, the interstitial cells of Cajal, that work as "pacemakers" of the GI tract. Myenteric interstitial cells of Cajal (ICC-MY) serve as the pacemaker that creates the bioelectrical slow-wave potential leading to contraction of the smooth muscle. The slow-wave frequency differs along the length of the system:

  • 3 per minute in the stomach
  • 11-12 per minute in the duodenum
  • 9-10 per minute in the ileum
  • 3-4 per minute in the colon

You can see them as working analogously through the "all-or-none" principle.
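
To make the threshold idea concrete, here is a minimal numerical sketch (not from the sources below) using a leaky integrate-and-fire toy model in Python; the parameter values are arbitrary illustrative assumptions, not physiological measurements, and the model ignores the real ionic machinery described above. A sub-threshold stimulus decays away and produces nothing, while any supra-threshold stimulus produces the same stereotyped spike:

    # Toy leaky integrate-and-fire model of the all-or-none principle.
    # All parameter values are arbitrary, for illustration only.
    def simulate(input_current, dt=0.1, tau=10.0, v_rest=-70.0,
                 v_threshold=-55.0, v_spike=30.0, v_reset=-70.0):
        """Return the membrane-potential trace for a list of input currents."""
        v = v_rest
        trace = []
        for i in input_current:
            # Leaky integration: the potential decays back toward rest
            # while the injected current pushes it up.
            v += dt * (-(v - v_rest) + i) / tau
            if v >= v_threshold:
                trace.append(v_spike)   # all-or-none: the same spike every time
                v = v_reset             # then reset
            else:
                trace.append(v)
        return trace

    weak = simulate([10.0] * 200)     # sub-threshold: nothing happens
    strong = simulate([20.0] * 200)   # supra-threshold: full-size spikes
    print(max(weak), max(strong))     # weak never reaches -55 mV; strong hits +30 mV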

Sources

  • Costanzo, Physiology
  • Pocock, Physiology
  • My notes during the years 2011-2014 (in three Physiology courses and research)

Stage theory

Stage theories are based on the idea that elements in systems move through a pattern of distinct stages over time and that these stages can be described based on their distinguishing characteristics. Specifically, stages in cognitive development have a constant order of succession, later stages integrate the achievements of earlier stages, and each is characterized by a particular type of structure of mental processes which is specific to it. The time of appearance may vary to a certain extent depending upon environmental conditions. [1]

"Stage theory" can also be applied beyond psychology to describe phenomena more generally where multiple phases lead to an outcome. The term "stage theory" can thus be applied to various scientific, sociological and business disciplines. In these contexts, stages may not be as rigidly defined, and it is possible for individuals within the multi-stage process to revert to earlier stages or skip some stages entirely.


Ten myths about decision-making capacity

As a matter of practical reality, what role patients will play in decisions about their health care is determined by whether their clinicians judge them to have decision-making capacity. Because so much hinges on assessments of capacity, clinicians who work with patients have an ethical obligation to understand this concept. This article, based on a report prepared by the National Ethics Committee (NEC) of the Veterans Health Administration (VHA), seeks to provide clinicians with practical information about decision-making capacity and how it is assessed. A study of clinicians and ethics committee chairs carried out under the auspices of the NEC identified the following 10 common myths clinicians hold about decision-making capacity: (1) decision-making capacity and competency are the same; (2) lack of decision-making capacity can be presumed when patients go against medical advice; (3) there is no need to assess decision-making capacity unless patients go against medical advice; (4) decision-making capacity is an "all or nothing" phenomenon; (5) cognitive impairment equals lack of decision-making capacity; (6) lack of decision-making capacity is a permanent condition; (7) patients who have not been given relevant and consistent information about their treatment lack decision-making capacity; (8) all patients with certain psychiatric disorders lack decision-making capacity; (9) patients who are involuntarily committed lack decision-making capacity; and (10) only mental health experts can assess decision-making capacity. By describing and debunking these common misconceptions, this article attempts to prevent potential errors in the clinical assessment of decision-making capacity, thereby supporting patients' right to make choices about their own health care.



Protein complexes

Protein complex formation sometimes serves to activate or inhibit one or more of the complex members and in this way, protein complex formation can be similar to phosphorylation. Individual proteins can participate in the formation of a variety of different protein complexes. Different complexes perform different functions, and the same complex can perform very different functions that depend on a variety of factors. Some of these factors are:

  • Which cellular compartment the complex is contained in
  • Which stage of the cell cycle the complexes are present in
  • The nutritional status of the cell [citation needed]

Many protein complexes are well understood, particularly in the model organism Saccharomyces cerevisiae (a strain of yeast). For this relatively simple organism, the study of protein complexes is now being performed genome-wide, and the elucidation of most protein complexes of this yeast is ongoing. [citation needed]

Obligate vs non-obligate protein complex

If a protein can form a stable, well-folded structure on its own (without any other associated protein) in vivo, then the complexes formed by such proteins are termed "non-obligate protein complexes". However, some proteins cannot form a stable, well-folded structure alone, but can be found as part of a protein complex that stabilizes the constituent proteins. Such protein complexes are called "obligate protein complexes". [4]

Transient vs permanent/stable protein complex

Transient protein complexes form and break down transiently in vivo, whereas permanent complexes have a relatively long half-life. Typically, the obligate interactions (protein–protein interactions in an obligate complex) are permanent, whereas non-obligate interactions have been found to be either permanent or transient. [4] Note that there is no clear distinction between obligate and non-obligate interaction; rather, there exists a continuum between them which depends on various conditions, e.g. pH, protein concentration, etc. [5] However, there are important distinctions between the properties of transient and permanent/stable interactions: stable interactions are highly conserved but transient interactions are far less conserved, interacting proteins on the two sides of a stable interaction have a greater tendency to be co-expressed than those of a transient interaction (in fact, the co-expression probability between two transiently interacting proteins is not higher than that between two random proteins), and transient interactions are much less co-localized than stable interactions. [6] Though transient by nature, transient interactions are very important for cell biology: the human interactome is enriched in such interactions, these interactions are the dominating players of gene regulation and signal transduction, and proteins with intrinsically disordered regions (IDRs: regions in a protein that show dynamic inter-converting structures in the native state) are found to be enriched in transient regulatory and signaling interactions. [4]

Fuzzy complex

Fuzzy protein complexes have more than one structural form or dynamic structural disorder in the bound state. [7] This means that proteins may not fold completely in either transient or permanent complexes. Consequently, specific complexes can have ambiguous interactions, which vary according to the environmental signals. Hence different ensembles of structures result in different (even opposite) biological functions. [8] Post-translational modifications, protein interactions or alternative splicing modulate the conformational ensembles of fuzzy complexes, to fine-tune the affinity or specificity of interactions. These mechanisms are often used for regulation within the eukaryotic transcription machinery. [9]

Although some early studies [11] suggested a strong correlation between essentiality and protein interaction degree (the “centrality-lethality” rule), subsequent analyses have shown that this correlation is weak for binary or transient interactions (e.g., yeast two-hybrid). [12] [13] However, the correlation is robust for networks of stable co-complex interactions. In fact, a disproportionate number of essential genes belong to protein complexes. [14] This led to the conclusion that essentiality is a property of molecular machines (i.e. complexes) rather than individual components. [14] Wang et al. (2009) noted that larger protein complexes are more likely to be essential, explaining why essential genes are more likely to have high co-complex interaction degree. [15] Ryan et al. (2013) referred to the observation that entire complexes appear essential as "modular essentiality". [10] These authors also showed that complexes tend to be composed of either essential or non-essential proteins rather than showing a random distribution (see Figure). However, this is not an all-or-nothing phenomenon: only about 26% (105/401) of yeast complexes consist of solely essential or solely nonessential subunits. [10]

In humans, genes whose protein products belong to the same complex are more likely to result in the same disease phenotype. [16] [17] [18]

The subunits of a multimeric protein may be identical, as in a homomultimeric (homooligomeric) protein, or different, as in a heteromultimeric protein. Many soluble and membrane proteins form homomultimeric complexes in a cell; the majority of proteins in the Protein Data Bank are homomultimeric. [19] Homooligomers are responsible for the diversity and specificity of many pathways and may mediate and regulate gene expression, the activity of enzymes, ion channels, receptors, and cell adhesion processes.

The voltage-gated potassium channels in the plasma membrane of a neuron are heteromultimeric proteins composed of four of forty known alpha subunits. Subunits must be of the same subfamily to form the multimeric protein channel. The tertiary structure of the channel allows ions to flow through the hydrophobic plasma membrane. Connexons are an example of a homomultimeric protein composed of six identical connexins. A cluster of connexons forms the gap junction between two neurons, which transmit signals through an electrical synapse.

Intragenic complementation

When multiple copies of a polypeptide encoded by a gene form a complex, this protein structure is referred to as a multimer. When a multimer is formed from polypeptides produced by two different mutant alleles of a particular gene, the mixed multimer may exhibit greater functional activity than the unmixed multimers formed by each of the mutants alone. In such a case, the phenomenon is referred to as intragenic complementation (also called inter-allelic complementation). Intragenic complementation has been demonstrated in many different genes in a variety of organisms including the fungi Neurospora crassa, Saccharomyces cerevisiae and Schizosaccharomyces pombe, the bacterium Salmonella typhimurium, the virus bacteriophage T4, [20] an RNA virus, [21] and humans. [22] In such studies, numerous mutations defective in the same gene were often isolated and mapped in a linear order on the basis of recombination frequencies to form a genetic map of the gene. Separately, the mutants were tested in pairwise combinations to measure complementation. An analysis of the results from such studies led to the conclusion that intragenic complementation, in general, arises from the interaction of differently defective polypeptide monomers to form a multimer. [23] Genes that encode multimer-forming polypeptides appear to be common. One interpretation of the data is that polypeptide monomers are often aligned in the multimer in such a way that mutant polypeptides defective at nearby sites in the genetic map tend to form a mixed multimer that functions poorly, whereas mutant polypeptides defective at distant sites tend to form a mixed multimer that functions more effectively. The intermolecular forces likely responsible for self-recognition and multimer formation were discussed by Jehle. [24]

The molecular structure of protein complexes can be determined by experimental techniques such as X-ray crystallography, single-particle analysis or nuclear magnetic resonance. Increasingly, the theoretical option of protein–protein docking is also becoming available. One method that is commonly used for identifying such complexes is immunoprecipitation. Recently, Raicu and coworkers developed a method to determine the quaternary structure of protein complexes in living cells. This method is based on the determination of pixel-level Förster resonance energy transfer (FRET) efficiency in conjunction with a spectrally resolved two-photon microscope. The distribution of FRET efficiencies is simulated against different models to obtain the geometry and stoichiometry of the complexes. [25]

Proper assembly of multiprotein complexes is important, since misassembly can lead to disastrous consequences. [26] In order to study pathway assembly, researchers look at intermediate steps in the pathway. One such technique that allows one to do that is electrospray mass spectrometry, which can identify different intermediate states simultaneously. This has led to the discovery that most complexes follow an ordered assembly pathway. [27] In the cases where disordered assembly is possible, the change from an ordered to a disordered state leads to a transition from function to dysfunction of the complex, since disordered assembly leads to aggregation. [28]

The structure of proteins plays a role in how the multiprotein complex assembles. The interfaces between proteins can be used to predict assembly pathways. [27] The intrinsic flexibility of proteins also plays a role: more flexible proteins allow for a greater surface area available for interaction. [29]

While assembly is a different process from disassembly, the two are reversible in both homomeric and heteromeric complexes. Thus, the overall process can be referred to as (dis)assembly.

Evolutionary significance of multiprotein complex assembly

In homomultimeric complexes, the homomeric proteins assemble in a way that mimics evolution. That is, an intermediate in the assembly process is present in the complex’s evolutionary history. [30] The opposite phenomenon is observed in heteromultimeric complexes, where gene fusion occurs in a manner that preserves the original assembly pathway. [27]


The evolution of syntax

In their Essay [1], Martins and Boeckx (MB) take issue with our proposals about the evolution of language in our book Why Only Us (WOU) [2] and offer an alternative. As we will show, their critique is misguided and their alternative untenable. But first, it is useful to clearly delineate the areas of agreement and disagreement with them.

First, in [1], MB do not question our assumption that the core properties of language are based on the combinatorial operation Merge, with the standard definition of its basic operation: binary set formation [1,2]. Our disagreement has to do with evolution of this operation. More precisely, as we shall see, the disagreement has to do with steps they propose that immediately preceded the evolution of Merge.

Second, we both agree that it is important to determine how Merge is implemented in the brain. In [2] pp. 158–164, as illustrated in Fig 4.4–4.6, we advance a specific proposal about this neural “wiring,” grounded on recent explicit neurological and comparative primate findings [3–5]. MB do not challenge this proposal. We therefore put the matter of neural implementation aside here.

Third, we both agree that it is important to determine how a Merge-based system is used, that is, how it is externalized in the sensory-motor system (typically, though not necessarily, sound) and then actually used in performance, e.g., parsing or producing sentences. MB discuss the importance of this in their discussion of David Marr’s “three levels of analysis” but advance no specific proposals as to algorithms or physical implementation. In [2], we devote nearly an entire chapter to this topic, “Triangles in the Brain,” pp. 109–166, in which we discuss both these topics, including analysis of the general architecture for such an operation and the empirical evidence for various computer science–based algorithms grounding it, and how these might crucially affect language use, all with an eye towards Marr’s distinctions about the distinct levels of algorithm and implementation that [1] also stresses. (See, e.g., [2] pp. 131–132 and pp. 135–139, for discussion of the algorithms involved, e.g., a parallel version of the conventional Cocke–Kasami–Younger algorithm, p. 136 and p. 139, see [6,7], along with an explicit proposal about the use of content-addressable memory, as this sort of memory is often considered more congenial with known neuroscience [8].) Here, too, MB’s Essay [1] offers no criticism of these proposals and, in fact, does not even mention them. It advances no specific proposals of its own on these topics. We therefore also put these topics aside here.
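
(As a brief orientation for readers unfamiliar with the algorithm named above: the following is a minimal Python sketch of a conventional, serial CKY recognizer for a toy grammar in Chomsky normal form. The grammar, the sentence and the nonterminal names are invented for illustration; the parallel, content-addressable-memory variant discussed in [2] is not reproduced here.)

    # Minimal CKY recognizer for a toy grammar in Chomsky normal form.
    # Grammar and sentence are illustrative only.
    from itertools import product

    binary_rules = {("NP", "VP"): {"S"},      # S  -> NP VP
                    ("V", "NP"): {"VP"},      # VP -> V NP
                    ("Det", "N"): {"NP"}}     # NP -> Det N
    lexical_rules = {"she": {"NP"}, "bought": {"V"},
                     "the": {"Det"}, "book": {"N"}}

    def cky_recognize(words, start="S"):
        n = len(words)
        # chart[i][j] holds the nonterminals that can span words[i:j]
        chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
        for i, w in enumerate(words):
            chart[i][i + 1] = set(lexical_rules.get(w, set()))
        for span in range(2, n + 1):           # widths of spans
            for i in range(n - span + 1):
                j = i + span
                for k in range(i + 1, j):      # split point
                    for b, c in product(chart[i][k], chart[k][j]):
                        chart[i][j] |= binary_rules.get((b, c), set())
        return start in chart[0][n]

    print(cky_recognize("she bought the book".split()))  # True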

Fourth, we agree that there need not be, as [1] notes in its abstract, a “parallelism between the formal complexity of the operation at the computational level and the number of evolutionary steps it must imply.” As MB formulate the central point of their paper [1] p. 5: “We find it problematic to rely on ‘logical necessity’ based on the formal complexity of a trait to motivate evolutionary scenarios. It is this fallacy that we draw attention to in this paper.” We too regard it as “problematic” and, indeed, a “fallacy.” The observation is correct. We never questioned this point in our book (see, e.g., [2], p. 79, p. 164). What is under discussion is not operations in general but rather a specific one, the simplest combinatorial operation, binary set formation, called Merge. Crucially, as we discuss next, MB’s own proposal adopts our account of the evolution of Merge unchanged, thus tacitly recognizing that binary set formation (Merge) cannot be decomposed and emerges in a single step. MB then add new proposals about immediate precursors to our shared account of the evolution of Merge. The justification for the added complexities that they propose about precursors to Merge is the sole point at issue.

Finally, we both agree that it would be important to discover the long evolutionary history that preceded the appearance of Merge. In this case, although both we and [1] agree that there were multiple steps that preceded the appearance of Merge, neither we nor [1] present any explicit proposals about these previous steps, so we can put this matter aside too. The sole issue, then, on which we do disagree has to do with the evolution of the operation Merge itself, including its subcases. More precisely, as we see directly, the sole issue concerns their proposal about precursors to our shared conception of the evolution of Merge.

In [2], we proposed that the elementary operation of binary set formation (Merge) appeared in a single step, yielding automatically its two subcases. The first subcase, external Merge (EM), occurs when the two items forming the binary set must be distinct from one another, as in X and Y, yielding {X, Y}. The second subcase is internal Merge (IM), in which one of the items forming the binary set must be a subset of the other, as in {X, Y} and Y, yielding {Y, {X, Y}}. Note that both EM and IM are more complex than Merge, plain binary set formation, because both EM and IM explicitly contain an added constraint on top of binary set formation, namely, the “must be distinct/a subset of” clauses explicitly shown previously. So, Merge is simpler than either EM or IM in a very clear computational sense.
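
The same distinction can be stated compactly in code. The following is a purely illustrative Python sketch (the helper names and the lexical items "read" and "books" are ours, not drawn from [1] or [2]); frozensets stand in for the unordered binary sets, Merge is bare binary set formation, and EM and IM are Merge plus the extra condition on its arguments described above:

    # Illustrative sketch of Merge and its two subcases.
    # frozenset stands in for the unordered binary set {X, Y}.
    def merge(x, y):
        """Bare Merge: binary set formation, with no linear order."""
        return frozenset({x, y})

    def contains(whole, part):
        """Is `part` a term of `whole`, i.e. `whole` itself or a member, recursively?"""
        if whole == part:
            return True
        return isinstance(whole, frozenset) and any(contains(m, part) for m in whole)

    def external_merge(x, y):
        """EM: Merge plus the condition that the items are distinct (neither contains the other)."""
        assert not contains(x, y) and not contains(y, x)
        return merge(x, y)

    def internal_merge(x, y):
        """IM: Merge plus the condition that one item is a term of the other."""
        assert contains(x, y) or contains(y, x)
        return merge(x, y)

    vp = external_merge("read", "books")   # {read, books}
    out = internal_merge(vp, "books")      # {books, {read, books}}
    print(vp, out)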

In [1], MB propose a single explicit, different scenario: first EM appeared and then IM, each more complex than Merge, as we have just seen. But—crucially—these added complexities still do not yield Merge. That requires a further step in the evolutionary process, yielding Merge—of course, in a single step—incorporating without comment the two more complex subcases that had allegedly emerged earlier. Note, crucially, that their proposal implicitly assumes that Merge appeared in exactly the way we describe: in a single step, following the appearance of the two precursors they postulate. One might suggest that IM appeared first, then EM, and there are even more complex proposals as to immediate predecessors of the simplest operation, Merge, though we again stress that MB advance only one explicit multistep proposal. If they have other multistep proposals in mind, they do not present them in [1]. But evidently, reasons would be needed to entertain a proposal about the evolution of Merge that is more complex than the simplest one, namely, the one we put forth. The proposal in [1], furthermore, is thus a new proposal that in fact incorporates our proposal, unchanged. To put it simply, we are now considering a theory T′ (the one in [1]), which incorporates unchanged all the assumptions of a theory T (the one we proposed in [2]), differing from T only in that it adds a number of new complex assumptions. Plainly, reasons—in fact, strong reasons—are required to entertain T′ as a possibility.

MB offer possible reasons in their Essay, but they are based on misunderstandings of formal languages and the operation Merge. The proposal in [1] rests entirely on properties of the "Chomsky hierarchy" of rewriting rules that was developed in the 1950s ([1] p. 5, Table 1, and p. 6, labeled as “The Hierarchy of Formal Languages”; see, e.g., [9]). The hierarchy adapts to language the logician E. L. Post’s general theory of computability, based on “rewriting systems”: rules that replace linear strings of symbols with new linear strings. For instance, one such rewriting rule is “VerbPhrase → Verb Determiner Noun”, stating that the symbol “VerbPhrase” can be rewritten as the linear string concatenation of three symbols, “Verb,” “Determiner,” and “Noun,” in that order, ultimately yielding, e.g., “bought the book.” (Specifically, such rule systems constitute the Type 2 systems in Table 1 of [1].) As one can see, by definition, all of the “formal languages” generated at the various levels of this hierarchy involve linear order [9]. In contrast, again by definition, Merge-based systems have (hierarchical) structure but no linear order, an essential property of binary sets formed by Merge. MB themselves illustrate this with their own example ([1], p. 1), [10], a hierarchical structure with no linear order. Accordingly, Merge-based systems do not even appear in the hierarchy, and anything concluded from the study of the Chomsky hierarchy is totally irrelevant to the evolution of Merge-based systems. That is the basic reason why the sole evolutionary proposal in [1] is not tenable. This is quite apart from the fact that this proposal is superfluous, as we noted previously, because it simply amounts to an added complication about precursors to Merge that have no empirical justification.

We should add that there are strong empirical reasons why the core combinatorial operation for language should keep to hierarchical structures as in the aforementioned example, lacking linear order. In [2], we discuss the empirical evidence for this distinction, some of it tracing back to the 1970s, when the issues began to become clear (for one example, see [2], pp. 117–118, and Fig 4.2; as for biological evidence supporting this position, e.g., the limitations to restricted kinds of linear strings in birdsong but not human language, see [11,12]). The Essay in [1] ignores this topic completely. In fact, as noted previously, the Essay does not contest that the core properties of language are based on Merge, an operation that crucially provides only binary-set, hierarchically structured expressions without any linear order, radically different in principle from the systems [1] discusses in connection with “The Hierarchy of Formal Languages”.

That is enough to point out that the sole proposal in [1] about the evolution of language and its critique are mistaken. There are, however, some further errors in [1] that may be useful to disentangle.

Consider MB’s argument that EM might have emerged first, then IM, and finally, the simpler operation Merge that automatically incorporates both as subcases. The argument is based on the claim that EM accounts for nested dependencies (at a low level of the “Hierarchy of Formal Languages”, Table 1, line 2), whereas IM emerged later to account for crossing dependencies (at a higher level of the hierarchy, [1] Table 1, line 3); see Fig 1 in [1]. Let us take this proposal apart step by step.

First, MB’s empirical claim linking the difference between EM and IM to “nested” and “overlapping” dependency types (see [1], Fig 1) is false. As is familiar from current introductory textbooks in linguistics (e.g., [13,14]), IM enters into the generation of even the simplest nested dependencies that occur in Merge-based theories—for example, in the sentence, “where are you going,” as shown in [13], p. 324, Fig 23, or [14] p. 357, which displays a hierarchical form with nested dependencies constructed purely by IM that we can write out in the equivalent set notation that MB [1] p. 1 adopt as {where2, {are1, {you, {are1, {going, where2}}}}}. (Here, we use the numeric subscripts on “where” and “are” purely for readability, to indicate the dependency links as in Fig 1 of [1].) Here, IM takes the two binary sets {are1, {you, {are1, {going, where2}}}} and {where2}, with the second set, {where2}, clearly a subset of the first, and forms a result with nested dependencies, as indicated by the numeric subscripts: {where2, {are1, {you, {are1, {going, where2}}}}}.

In short, IM produces nested dependencies quite commonly, in the simplest, everyday sorts of sentences. In fact, according to widely adopted analyses appearing in standard introductory linguistics textbooks such as [15], IM operates even in simple subject–predicate sentences like “Several men are sick” ([15] example 92). According to the actual linguistic evidence, then, it doesn’t make sense to say that either EM or IM evolved first. Rather, just as we might expect if Merge evolved as a unitary operation, even simple sentences—on textbook accounts, most sentences—contain mixtures of both IM and EM deriving nested dependencies.

Second, MB claim that EM is computationally simpler than IM and therefore might reasonably be assumed to have evolved first. But this computational claim is also mistaken. Perhaps [1] had in mind some notion of parsing or recognition complexity, which again relates back to the Chomsky hierarchy and to the use of language rather than to the evolution of Merge, the topic under discussion. However, in terms of generative complexity, EM is more complex than IM. EM requires massive search to locate the elements to be merged. It requires search through the entire dictionary of the language, as well as through the set of syntactic objects already constructed, which can grow without limit as sentences become more complex. In contrast, IM requires search of only a single syntactic object. This is easily seen in the example we gave previously for “where are you going.” To apply IM to this example with {are1, {you, {are1, {going, where2}}}}, one needs to search at most only the expression {are1, {you, {are1, {going, where2}}}} to locate {where2} as a subset internal to the entire expression and so a possible input to IM. This is simpler than EM because it involves searching a single, shorter, preexisting list, in a bounded, deterministic way.

In sum, there seems to be no support for the position that EM emerged before IM. The underlying reason for this confusion may possibly trace back to a misunderstanding of the relevance of the “Chomsky hierarchy” for Merge-based systems. The Chomsky hierarchy is based on concatenations of strings, with inherent linear order [9]. However, Merge-based systems are grounded on binary order-free sets, not string concatenation [10,11]. One cannot conflate the two. Therefore, any conclusions that [1] draw from this false conflation are also flawed.

The errors in [1] concerning emergence of EM and IM are, however, secondary. The crucial point is that the sole proposal in [1] about evolution of language is untenable. The “no half-merge fallacy” analysis in [1] collapses because there is no such fallacy.


1.2 SUBDIVISIONS OF BIOELECTROMAGNETISM

1.2.1 Division on a Theoretical Basis

The discipline of bioelectromagnetism may be subdivided in many different ways. One such classification divides the field on theoretical grounds according to two universal principles: Maxwell's equations (the electromagnetic connection) and the principle of reciprocity. This philosophy is illustrated in Figure 1.2 and is discussed in greater detail below.

Maxwell's Equations

(A) Bioelectricity, (B) Bioelectromagnetism (biomagnetism), and (C) Biomagnetism.

Subdivision B has historically been called "biomagnetism" which unfortunately can be confused with our Subdivision C. Therefore, in this book, for Subdivision B we also use the conventional name "biomagnetism" but, where appropriate, we emphasize that the more precise term is "bioelectromagnetism." (The reader experienced in electromagnetic theory will note the omission of a logical fourth subdivision: measurement of the electric field induced by variation in the magnetic field arising from magnetic material in tissue. However, because this field is not easily detected and does not have any known value, we have omitted it from our discussion).

Reciprocity

(I) Measurement of an electric or a magnetic field from a bioelectric source or (the magnetic field from) magnetic material. (II) Electric stimulation with an electric or a magnetic field or the magnetization of materials (with magnetic field) (III) Measurement of the intrinsic electric or magnetic properties of tissue.

      Fig. 1.2. Organization of bioelectromagnetism into its subdivisions. It is first divided horizontally into:
      A) bioelectricity
      B) bioelectromagnetism (biomagnetism), and
      C) biomagnetism.
      Then the division is made vertically into:
      I) measurement of fields,
      II) stimulation and magnetization, and
      III) measurement of intrinsic electric and magnetic properties of tissue.
      The horizontal divisions are tied together by Maxwell's equations and the vertical divisions by the principle of reciprocity.

    Description of the Subdivisions

    (II) Electric stimulation with an electric or a magnetic field or the magnetization of materials includes the effects of applied electric and magnetic fields on tissue. In this subdivision of bioelectromagnetism, electric or magnetic energy is generated with an electronic device outside biological tissues. When this electric or magnetic energy is applied to excitable tissue in order to activate it, it is called electric stimulation or magnetic stimulation, respectively. When the magnetic energy is applied to tissue containing ferromagnetic material, the material is magnetized. (To be accurate, an insulated human body may also be charged to a high electric potential. Experiments of this kind, called electrifying, were performed as early as the first developments of bioelectricity, but their value lies only in entertainment.) Similarly, the nonlinear membrane properties may be defined with both subthreshold and transthreshold stimuli. Subthreshold electric or magnetic energy may also be applied for other therapeutic purposes, called electrotherapy or magnetotherapy. Examples of this second subdivision of bioelectromagnetism, also called electrobiology and magnetobiology, respectively, are shown in Table 1.2.

    (III) Measurement of the intrinsic electric or magnetic properties of tissue is included in bioelectromagnetism as a third subdivision. As in Subdivision II, electric or magnetic energy is generated by an electronic device outside the biological tissue and applied to it. However, when the strength of the energy is subthreshold, the passive (intrinsic) electric and magnetic properties of the tissue may be obtained by performing suitable measurements. Table 1.3 illustrates this subdivision:

    Lead Field Theoretical Approach

    1.2.2 Division on an Anatomical Basis

    a) neurophysiological bioelectromagnetism, b) cardiologic bioelectromagnetism, and c) bioelectromagnetism of other organs or tissues.

    1.2.3 Organization of this Book

    Because discussion of Subdivision C would require the introduction of additional fundamentals, we have chosen not to include it in this volume. As mentioned earlier, Subdivision C entails measurement of the magnetic field from magnetic material, magnetization of material, and measurement of magnetic susceptibility. The reader interested in these topics should consult Maniewski et al. (1988) and other sources. At the present time, interest in the Subdivision C topic is limited.


    Roosting hawks do not hear the silent flight of a large owl who takes one hawk and is gone before the other seems to notice.

    Sound may appear to be an independent phenomenon, but our perception of sound waves is affected by the relative motion of the source. Austrian physicist Christian Doppler discovered that when a moving object such as a siren produces sound waves, the waves bunch together in front of the object and spread out behind it. This induced wave disturbance, known as the Doppler effect, causes the sound of an approaching object to rise in pitch because the wavelength is shortened. When the object passes, the trailing waves are stretched and are perceived as lower in pitch. The Doppler effect is also seen in the bunching of waves in front of a ship and in its dispersing wake.
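
    As a worked example (the numbers below are assumed for illustration, not taken from the text): for a stationary listener, a source emitting frequency f and moving at speed v_s in a medium with sound speed v is heard at f' = f * v / (v - v_s) while approaching and f' = f * v / (v + v_s) while receding.

        # Doppler shift for a moving source and a stationary listener.
        # Illustrative values: 340 m/s speed of sound, 440 Hz source, 30 m/s source speed.
        def doppler(f_source, v_source, v_sound=340.0, approaching=True):
            """Perceived frequency: minus sign while approaching, plus while receding."""
            denom = v_sound - v_source if approaching else v_sound + v_source
            return f_source * v_sound / denom

        print(doppler(440.0, 30.0, approaching=True))   # ~482.6 Hz, pitch rises
        print(doppler(440.0, 30.0, approaching=False))  # ~404.3 Hz, pitch falls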


    Is tooth numbing an all-or-nothing phenomenon?

    I'm convinced the high-speed drill is part of the reason the local anesthetic never seemed to fully numb my teeth. Is it possible to do a cavity, say one that isn't overly deep, with just the low-speed drill?

    ImNotASpammerYouIdiots

    Junior member

    Is tooth numbing an all-or-nothing phenomenon or is it on a continuum?

    Because I have felt something at some point for every filling I've ever had done, and I don't mean pressure. And one of them was pretty deep, too. From what I've read, however, if I weren't numb, that one would have sent me through the roof with pain. But while uncomfortable in moments, it was bearable. So I must have been numb, I guess?

    So I have always wondered exactly what successful "numbing" is. Is it where you can still feel how it would hurt without the local anesthesia, but it's dulled out a lot? Is that what the goal is?

    That's also how the numbing gel for the injections has always worked for me. I can feel a dulled ache when the needle goes in, but it's not nearly as sharp as it would otherwise be.

    I just think with having different dentists doing different fillings and always getting the same result, it's more likely that I just have unrealistic expectations for how "painless" the procedure can be.

    But then people tell me that, based on my description of the sensations I occasionally felt during the drilling (it's a cold, boring, bone-aching kind of thing), I must not have been "completely numb," whatever that means.


    Materials and Methods

    Plasmids, cell lines, and preparation of viral stocks

    pcDNA3.1 expression vectors with wild-type hA3G (wt-A3G) or non-editing E259Q mutant hA3G (E259Q-hA3G), VSV-G and the vif-deficient HIV-1(IIIB) (pIIIB/Δvif) proviral construct have been described previously [21], [25], [71]–[73]. Vif deficiency was caused by the introduction of two nonsense mutations, while all other accessory genes were functional. pIIIB/Δvif was furthermore modified with a G-to-A mutation at position 571 of the 5′LTR U5 region, which copies to the 3′LTR during reverse transcription, enabling discrimination of viral sequences that have passed through a replication cycle from those derived from the residual transfection cocktail. VSV-G-pseudotyped Δvif-HIV-1 was produced by transfection of subconfluent monolayers of 293T cells using polyethylenimine (PEI) (Polyscience), as in [25]. The transfection efficiency of PEI is reported to be over 98% [74], and the average number of transfected plasmids per cell using similar plasmid concentrations and cell numbers is about 10^5 plasmid molecules [75]. The pIIIB/Δvif construct, VSV-G, and varied ratios of wt-hA3G to E259Q-mutant hA3G were used (summarized in Table 1). Media were changed after 6 h and supernatants were harvested after 24 hr (hA3G titration experiment) or 48 hr (patient-derived Vif experiment); virus production was quantified by p24 Gag ELISA (Perkin Elmer) prior to storage and use in subsequent experiments.

    Immunoblot analysis

    For preparation of purified HIV-1 virion-associated proteins, virus supernatant equivalent to 30 ng of p24 Gag was diluted in media and underlain with 20% sucrose solution. Samples were centrifuged for 2 hours at 14,000 rpm at 4°C and supernatants removed. Purified virions or infected 293T cells were lysed, centrifuged to remove cell debris, and prepared for loading onto SDS-PAGE gels in a mix of 3× SDS-PAGE sample buffer (180 mM Tris, pH 6.8; 9% (w/v) SDS; 30% glycerol; bromophenol blue), DTT (in PBS, giving a final concentration of 100 mM) and lysate, and were incubated for 10 minutes at 95°C. Samples were loaded into a 4% stacking gel on a 12% separating gel and run for 1 hr at 25 mA/gel at maximum voltage. Proteins were transferred from gels onto PVDF membranes (pre-soaked in methanol and running buffer (WB: 0.1% Tween20 in PBS)) at 16 V overnight; membranes were blocked in 5% milk powder in WB for at least 30 minutes, prior to incubation with primary antibody (either anti-hA3G (recognizing both wt- and E259Q-hA3G) or anti-p24 CA (loading control), diluted in 5% milk powder/WB) for 1 hr at room temperature. After rinsing 3 times and washing 4 times for 5 min with WB, membranes were incubated with horseradish peroxidase-conjugated secondary antibody (diluted in 5% milk powder/WB) for 40 min at room temperature, and the rinse/wash procedure was repeated. Membranes were then incubated with ECL substrate before exposure to film, as in [25].

    Viral infections, single-cycle infectivity, and DNA purification

    TZM-bl cells (a HeLa cell line expressing HIV-1 co-receptors and a lacZ reporter gene under control of an HIV-1 LTR promoter) were infected with 293T-cell-produced VSV-G-pseudotyped Δvif-HIV-1 virions containing various ratios of wt-hA3G to E259Q-mutant hA3G (hA3G titration experiment) or Δvif-HIV-1 virions containing hA3G and patient-derived Vif (patient-derived Vif experiment). After 24 hrs, supernatants were removed and cells were washed with PBS before lysing with 200 µl lysis solution. Following transfer to microfuge tubes, debris from cell lysates was pelleted by microcentrifugation at 14,000 rpm for 10 minutes, and 20 µl cell extract was then added to 100 µl Galacton-Star (reporter gene assay system for mammalian cells) substrate (Applied Biosystems Inc., CA, USA) diluted with reaction buffer diluent in white microplate wells. The light signal was measured every 10 minutes or more, up to 2 hr after the start of the reaction, on a luminometer, giving a read-out of β-galactosidase production, which is proportional to the infectivity of the infecting virus. For sequencing experiments, total DNA was extracted from infected cells using the DNeasy DNA extraction kit (Qiagen Inc, CA, USA) and digested with DpnI (New England Biolabs), a restriction endonuclease that specifically targets methylated DNA, to remove carried-over transfection mixture.

    Amplification of near-full length proviral genomes

    Near-full length proviral single genomes were amplified by limiting-dilution nested PCR using Advantage 2 Polymerase mix (TakaraBio/Clontech, Paris, France) and HIV-1-specific oligonucleotide primers, as described previously [20]. The product of an 8.5 kb first-round PCR from gag to the 3′LTR was used as a template for a second-round PCR spanning env to the 3′LTR (2.1 kb; 87 amplicons in total across the hA3G transfection conditions) (Figure S2). For a subset of sequences, gag-to-pol, pol-to-vif, and vif-to-env fragments were amplified to derive near-full length sequences. Where possible, primers (Table S2) were designed to exclude 5′GG or 5′GA (plus-strand) or 5′CC or 5′TC (minus-strand) motifs, the preferred contexts for hA3F and hA3G activity respectively, in order to reduce the potential for bias in amplification of hypermutated viruses. Amplicons were purified using the QIAquick PCR purification kit (Qiagen Incorporated, CA, USA) and both strands were sequenced directly using Dyedeoxy Terminator sequencing (Applied Biosystems, CA, USA) on an Applied Biosystems 3730xl DNA Analyzer, as previously described [20]. DNA reads were assembled and proofread using the Pregap4 and Gap4 software within the Staden package [76] (Figure S2). Sequences lacking the engineered G-to-A mutation in the 3′LTR [72] were assumed to be carried-over transfection mixture and were discarded. Sequences were screened for evidence of hA3G-mediated editing/hypermutation (defined as a mutational process in which G-to-A transitions far exceed all other mutations [19]) using the HYPERMUT software (www.hiv.lanl.gov) [77].

    MLE of the number of virion-associated hA3G-units

    The proportion of sequences carrying evidence of hypermutation at each titration was used to generate a MLE of the average number of deaminating hA3G units incorporated into a progeny virion. Our analysis assumes (i) that there are a limited number of positions in a virion that can be occupied by hA3G-editing units [28]. The number of such positions, denoted k, is unknown, but we can use our titrations to obtain a maximum likelihood estimate of its value. Let us denote as r_i the proportion of the hA3G in transfection condition i that is wild-type editing (wt-hA3G), as opposed to non-editing (E259Q-hA3G); for example, from Table 1 and Table 2, in condition 5, r_5 = 0.33. How likely is a virion to incorporate an editing hA3G-unit under this condition? To answer this question, we assume (ii) that the efficiency of transfection, protein expression, and virion incorporation is the same for editing and non-editing hA3G and (iii) that there is sufficient hA3G present in each titration for all k slots to be occupied. Under these three assumptions, the probability that a virion incorporates one or more editing hA3G-units is simply q_i = 1 − (1 − r_i)^k. If we further assume (iv) that detectable hypermutation always ensues from the incorporation of one or more editing hA3G-units, then q_i is also the probability that a sequence undergoes hypermutation. As such, given a sample of n_i sequences, the probability that h_i of them will be hypermutants is the binomial probability C(n_i, h_i) · q_i^h_i · (1 − q_i)^(n_i − h_i). Because q_i is a function of k, we can now write the likelihood function of k as L(k) = ∏_i C(n_i, h_i) · q_i^h_i · (1 − q_i)^(n_i − h_i), and thereby obtain the value of k that is most likely to have given rise to our data, i.e., the value that maximises L(k). 95% confidence intervals on this estimate were obtained by assuming that twice the log likelihood ratio is χ²-distributed with 1 degree of freedom, and the sensitivity of the analysis to each individual condition was assessed by jackknifing, i.e., re-estimating k after removing each condition in turn. Results of the analyses are shown in Figure 1E and Figure 2, and the values of r_i, n_i, and h_i are shown in Table 2. The approximation for the probability that an observed hypermutated sequence has arisen from the incorporation of a single hA3G-editing unit is obtained from the corresponding binomial term, k · r_i · (1 − r_i)^(k−1) / (1 − (1 − r_i)^k).
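
    The following is a minimal numerical sketch of this estimation in Python; the r, n and h values below are made-up placeholders, not the data from Table 2, and the scan is over integer k only (no confidence intervals or jackknifing):

        # Sketch of the maximum-likelihood estimate of k, the number of hA3G "slots".
        # r, n, h are illustrative placeholders, not the values from Table 2.
        from math import comb, log

        r = [1.0, 0.5, 0.33, 0.1]   # fraction of editing (wt) hA3G in each condition
        n = [20, 20, 20, 20]        # sequences examined per condition
        h = [20, 15, 12, 5]         # hypermutated sequences observed per condition

        def log_likelihood(k):
            ll = 0.0
            for ri, ni, hi in zip(r, n, h):
                qi = 1.0 - (1.0 - ri) ** k             # P(virion carries >= 1 editing unit)
                qi = min(max(qi, 1e-12), 1.0 - 1e-12)  # guard against log(0)
                ll += log(comb(ni, hi)) + hi * log(qi) + (ni - hi) * log(1.0 - qi)
            return ll

        # Scan integer k; the MLE is the k with the highest log-likelihood.
        best_k = max(range(1, 51), key=log_likelihood)
        print(best_k, log_likelihood(best_k))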

    Analyses of in vivo hypermutated HIV DNA

    The thirty-eight near-full length HIV genomes annotated as hypermutated in the Los Alamos HIV sequence database (www.hiv.lanl.gov) at the time of this analysis and one non-annotated hypermutated sequence (EF036536) were used to estimate levels of hypermutation in HIV DNA (Table S3). EF036536 was identified by examining GenBank entries of 1725 near-full length HIV genomes. The sequences were tested using the search terms ‘stop’, ‘truncated’, ‘truncation’, ‘terminated’, ‘termination’, ‘mutated’, ‘mutation’, ‘hypermutated’, ‘hypermutation’, ‘non-functional’, and ‘nonfunctional’, and those carrying more than 4 stop codons were tested for evidence of hA3G-induced mutations as previously described [20]; analyses of sequences with fewer mutations were not possible due to noise. GG-to-AG mutation rates were estimated for each of these 39 hypermutated sequences using reference sequences generated from closely related taxa identified by NJ phylogenetic tree analyses, as described previously [20]. GG-to-AG mutation rates were corrected for probable non-hA3G-mediated mutation by subtracting the mean of the GC-to-AC and GT-to-AT mutation rates in each sample from the GG-to-AG mutation rate, after adjusting for the biased nucleotide composition of the HIV genome in each case (GC and GT are seldom mutated by hA3G in single-cycle in vitro infections [20]).

    In silico simulations

    The open reading frames of HIV-1 pIIIB (the virus used in the in vitro analyses) were used in computer simulations of hA3G-induced mutation (Figure S1). Predefined nGGn-to-nAGn mutation rates and the array of defined hA3G nGGn-to-nAGn mutation preferences [20] were used to determine the probability of mutation of each nGGn context. The mutation rate required to induce at least one stop codon in open reading frames in 50% of the simulations (the Lethal Mutation 50%, or LM50) was determined from 100,000 simulations at each of 100 incremental mutation rates, in simulations of G-to-A, GG-to-AG and nGGn-to-nAGn mutations. Other thresholds such as LM95 and LM99 were also determined. The proportion of simulations without non-synonymous substitutions and the LM50 were also determined using the defined nGGn-to-nAGn mutation preferences [20].
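
    A highly simplified sketch of this kind of simulation is given below; it uses a random, stop-codon-free open reading frame and a single uniform GG-to-AG rate per site, rather than the pIIIB open reading frames and the context-dependent nGGn preferences used in the actual analysis, so the number it prints is illustrative only:

        # Toy Monte Carlo sketch of an LM50-style estimate: the per-site GG-to-AG
        # mutation rate at which at least 50% of simulated ORFs gain an in-frame stop codon.
        import random

        random.seed(0)
        STOPS = {"TAA", "TAG", "TGA"}
        codon_pool = [a + b + c for a in "ACGT" for b in "ACGT" for c in "ACGT"
                      if a + b + c not in STOPS]
        orf = "ATG" + "".join(random.choice(codon_pool) for _ in range(300))  # stop-free ORF

        def gains_stop(seq, rate):
            """Mutate the first G of each GG dinucleotide to A with probability `rate`,
            then report whether the mutated sequence contains an in-frame stop codon."""
            s = list(seq)
            for i in range(len(s) - 1):
                if s[i] == "G" and s[i + 1] == "G" and random.random() < rate:
                    s[i] = "A"
            mutated = "".join(s)
            codons = {mutated[i:i + 3] for i in range(0, len(mutated) - 2, 3)}
            return bool(codons & STOPS)

        def lm50(n_sims=100):
            """Smallest scanned rate at which >= 50% of simulations gain a stop codon."""
            for rate in (x / 100 for x in range(1, 101)):
                hits = sum(gains_stop(orf, rate) for _ in range(n_sims))
                if hits / n_sims >= 0.5:
                    return rate
            return None

        print(lm50())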

    The simulations did not account for the proposed twin-gradient hypothesis for hypermutation, whereby the hA3G-induced mutational burden across individual genomes is proposed to increase from minima at the polypurine tracts (PPTs) in a plus-strand 5′-3′ direction [17], [33], as existing data are insufficient to model this effect [20], [78]. Nevertheless, since the twin-gradient hypothesis predicts higher levels of mutation in the structural pol and env genes (most distal to the 3′ ends of the PPTs), we predict that at a given mutation rate, simulations modeling this effect would yield increased numbers of stop codons in these genes with respect to the simulations described; thus, our estimates are conservative.

