I am currently taking a course called "Introduction to Machine Learning with ENCOG 3", and I have a question about how well the Artificial Intelligence (AI) algorithm for a "neural network" corresponds with how an actual neuron works.
In the course, they model a neuron like this:
x1, x2, etc. are voltage inputs, the wij are weights. The inputs are multiplied by these weights and summed up by the neuron. The neuron then has an "activation function" which then takes the sum of weighted inputs and calculates an output, typically a value between 0 and 1 (or between -1 and 1). You can think of the wij as representing dendrites (a higher weight means a more dense and thus conductive dendrite), and the output of the activation function as the voltage that gets sent down an axon.
The AI neural network algorithm creates a kind of intelligence by modifying the weights (wij shown in the picture).
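The weighted-sum-plus-activation model described above can be sketched in a few lines of Python. This is a minimal illustration only; the sigmoid activation and the example values are my assumptions, not taken from the ENCOG course:

```python
import math

def artificial_neuron(inputs, weights, bias=0.0):
    """Textbook artificial neuron: x1*w1 + x2*w2 + ... fed through a sigmoid."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))  # output squashed into (0, 1)

# Example: two voltage-like inputs with one excitatory and one inhibitory weight
y = artificial_neuron([1.0, 0.5], [0.8, -0.4])
```

"Learning" in this model is nothing more than adjusting the `weights` list between calls.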
My first question is: Is this a good approximation of how neurons actually work? That is, do our neurons "learn" by changing the weights (dendrite density, conductivity)? Or is there some other mechanism that is more important (e.g. do neurons learn by changing their activation or summation functions)?
My second question is: If neurons really do learn by changing the density of dendrites, then how fast does this happen? Is this a fast process like DNA replication? Does the neuron quickly generate (or decrease) dendrite density when it receives some kind of biochemical signal that it needs to learn now?
I understand that much of this might not yet be known, but would like to know how well the AI algorithm corresponds with current theories on biological neural networks.
With respect to your first question, that model isn't intended to take time into account, but is based on Hebbian learning with the goal of computability. It's generally used in simple pattern recognition situations where each case has no bearing on the next. The learning portion is performed ahead of time during the training phase. For example, a deterministic perceptron isn't permitted to change after training. In contrast, the Biological Neuron Model is much more complex, and uses a variety of ordinary differential equations to integrate the cumulative behavior of neurons over time. As such, those models are non-deterministic and don't see as much practical use outside of experimentation.
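The Hebbian learning mentioned above can be sketched as a one-line weight update ("neurons that fire together wire together"). The function name, learning rate, and values here are illustrative assumptions, not from any particular library:

```python
def hebbian_update(weights, inputs, output, lr=0.1):
    """One step of the plain Hebbian rule: each weight grows in proportion
    to the product of its input and the neuron's output (dw = lr * x * y)."""
    return [w + lr * x * output for w, x in zip(weights, inputs)]

w = [0.2, 0.2]
# A correlated input/output pair strengthens only the active connection.
w = hebbian_update(w, inputs=[1.0, 0.0], output=1.0)
```

After the update the first weight has grown while the inactive second weight is unchanged, which is exactly the pre-computed "training phase" adjustment described above.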
To address your second question, neurons themselves don't "learn." A single neuron is essentially meaningless. Learning is an emergent process created by the interaction of several systems at once. While one influencing factor is connectivity (zero for no connection, non-zero for an inhibitory or excitatory connection, which can emulate both synaptic and non-synaptic interactions), what you might call short-term learning can also be performed by existing clusters of neurons, without any change in connectivity. Biologically, this is what must occur before any of the comparatively slow process of tissue remodelling can take place, and the computationally equivalent process is only possible in time-aware models like the aforementioned Biological Neuron Model.
Take, for example, someone who wishes to learn to play guitar. When they begin playing, existing clusters emulate the desired behavior as best as they can. These neurons act as the functional scaffold that initiates and drives the neuroplastic process. Playing becomes easier because this scaffold becomes more efficient as new connections (shortcuts) are created, existing connections are strengthened, and irrelevant connections are inhibited. The improved scaffold in turn allows further improvements. Newborn neurons may also migrate to the area, though the how, why, and when of that process is unclear to me. This "behavior emulation" or "short term learning" process used in practicing the guitar, or whenever a novel situation is encountered, must be primarily governed by excitatory and inhibitory neurons' influence. Otherwise the whole process cannot even begin.
This model was a reasonable approximation when it was formulated. However, as we learn more about synaptic integration, it appears that neurons are best approximated by two-layer neural networks (1). In this way the active properties of individual dendrites are better taken into account. Each dendrite, or at least a group of dendrites, is viewed as capable of performing input summation from individual synapses, independently of the integration at the level of the soma.
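The two-layer idea can be sketched as nested sums: each dendritic branch first applies its own nonlinearity to its synapses, and the soma then integrates the branch outputs. This is an illustrative toy, not the published model; the sigmoid nonlinearity and all numeric values are assumptions:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def two_layer_neuron(branch_inputs, branch_weights, soma_weights):
    """Sketch of the two-layer view of a pyramidal neuron: per-branch
    synaptic summation through a local nonlinearity (layer 1), followed
    by integration of branch outputs at the soma (layer 2)."""
    branch_outputs = [
        sigmoid(sum(x * w for x, w in zip(xs, ws)))  # local dendritic summation
        for xs, ws in zip(branch_inputs, branch_weights)
    ]
    return sigmoid(sum(b * sw for b, sw in zip(branch_outputs, soma_weights)))

# Two branches: one with two synapses, one with a single synapse
y = two_layer_neuron([[1.0, 1.0], [0.5]], [[0.6, 0.6], [1.2]], [1.0, 1.0])
```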
Regarding your first specific question, mechanisms of plasticity other than modifying the individual weights have also been described, for example as you suggested, by modifying the intrinsic biophysical properties of neurons that lead to firing (2). These would be equivalent to altering the sigmoid function or all the weights collectively.
Regarding your second question, we do not think that changes in the weights happen through modification of the dendrite as a whole. Rather, they represent changes at the level of single synapses, what are termed 'synaptic weights'. The biological phenomenon underlying synaptic weight change is referred to as synaptic plasticity. It is thought to manifest through a variety of biophysical mechanisms, including incorporation of new receptors at the post-synapse, changes in those receptors' conductivity, increase in the number of pre-synaptic vesicles etc. These events can be quite fast, in the order of seconds to minutes (3). More long-term changes of the weights, referred to as long-term potentiation are thought to depend on protein synthesis which is a slower process.
1) Poirazi, P., Brannon, T., & Mel, B. W. (2003). Pyramidal neuron as two-layer neural network. Neuron, 37(6), 989-99. http://www.cell.com/neuron/fulltext/S0896-6273(03)00149-1
3) Harvey, C. D., & Svoboda, K. (2007). Locally dynamic synaptic learning rules in pyramidal neuron dendrites. Nature, 450(7173), 1195-1200. https://doi.org/10.1038/nature06416
Deep Learning Neural Networks Explained in Plain English
Machine learning, and deep learning in particular, is a technology that is changing the world.
After a long "AI winter" that spanned 30 years, computing power and data sets have finally caught up to the artificial intelligence algorithms that were proposed during the second half of the twentieth century.
This means that deep learning models are finally being used to make effective predictions that solve real-world problems.
It's more important than ever for data scientists and software engineers to have a high-level understanding of how deep learning models work. This article will explain the history and basic concepts of deep learning neural networks in plain English.
Why initial thoughts?
Why not just jump down to perspectives and resources? Learning Sciences research (e.g. Mueller, 2008) has shown clearly that making a ‘commitment’ to a specific stance or statement improves your learning, even if that belief later changes. Making your beliefs explicit is vital to the learning process.
Here are some questions to help you get started:
- What do I know about neurons?
- What do I think I know about neurons but am unsure of?
- What more would I like to learn about neurons?
The Science of Practice: What Happens When You Learn a New Skill
You've heard the expression “practice makes perfect” a million times, and you've probably read Malcolm Gladwell's popular “10,000 hours” theory. But how does practice actually affect the brain? What's going on in there when you're learning something new? The team from social sharing app Buffer investigates.
Learning Rewires Our Brains
When we learn a new skill, whether it's programming in Ruby on Rails, providing customer support over the phone, playing chess, or doing a cartwheel, we're changing how our brain is wired on a deep level. Science has shown us that the brain is incredibly plastic, meaning it does not "harden" at age 25 and stay solid for the rest of our lives. While certain things, especially language, are more easily learned by children than adults, we have plenty of evidence that even older adults can see real transformations in their neurocircuitry.
But how does that really work? Well, in order to perform any kind of task, we have to activate various portions of our brain. We've talked about this before in the context of language learning, experiencing happiness, and exercise and food. Our brains coordinate a complex set of actions involving motor function, visual and audio processing, verbal language skills, and more. At first, the new skill might feel stiff and awkward. But as we practice, it gets smoother and feels more natural and comfortable. What practice is actually doing is helping the brain optimize for this set of coordinated activities, through a process called myelination.
How Nerve Signals Work
A little neuroscience 101 here: neurons are the basic cellular building blocks of the brain. A neuron is made up of dendrites, which receive signals from other neurons; the cell body, which processes those signals; and the axon, a long "cable" that reaches out and interacts with other neurons' dendrites. When different parts of the brain communicate and coordinate with each other, they send nerve impulses, which are electrical charges that travel down the axon of a neuron, eventually reaching the next neuron in the chain.
This episode introduces neuroplasticity, the process by which our brain and nervous system learn and acquire new capabilities. I describe the differences between childhood and adult neuroplasticity, the chemicals involved, and how anyone can increase their rate and depth of learning by leveraging the science of focus. I describe specific tools for increasing focus and learning. The next two episodes will cover the ideal protocols for specific types of learning and how to make learning new information more reflexive.
- 00:00 Introduction
- 03:50 Plasticity: What Is it, & What Is It For?
- 06:30 Babies and Potato Bugs
- 08:00 Customizing Your Brain
- 08:50 Hard-Wired Versus Plastic Brains
- 10:25 Everything Changes At 25
- 12:29 Costello and Your Hearing
- 13:10 The New Neuron Myth
- 14:10 Anosmia: Losing Smell
- 15:13 Neuronal Birthdays Near Our Death Day
- 16:45 Circumstances for Brain Change
- 17:21 Brain Space
- 18:30 No Nose, Eyes, Or Ears
- 19:30 Enhanced Hearing and Touch In The Blind
- 20:20 Brain Maps of The Body Plan
- 21:00 The Kennard Principle (Margaret Kennard)
- 21:36 Maps of Meaning
- 23:00 Awareness Cues Brain Change
- 25:20 The Chemistry of Change
- 26:15 A Giant Lie In The Universe
- 27:10 Fathers of Neuroplasticity/Critical Periods
- 29:30 Competition Is The Route to Plasticity
- 32:30 Correcting The Errors of History
- 33:29 Adult Brain Change: Bumps and Beeps
- 36:25 What It Takes to Learn
- 38:15 Adrenalin and Alertness
- 40:18 The Acetylcholine Spotlight
- 42:26 The Chemical Trio For Massive Brain Change
- 44:10 Ways To Change Your Brain
- 46:16 Love, Hate, & Shame: all the same chemical
- 47:30 The Dopamine Trap
- 49:40 Nicotine for Focus
- 52:30 Sprinting
- 53:30 How to Focus
- 55:22 Adderall: Use & Abuse
- 56:40 Seeing Your Way To Mental Focus
- 1:02:59 Blinking
- 1:05:30 An Ear Toward Learning
- 1:06:14 The Best Listeners In The World
- 1:07:20 Agitation is Key
- 1:07:40 ADHD & ADD: Attention Deficit (Hyperactivity) Disorder
- 1:12:00 Ultra(dian) Focus
- 1:13:30 When Real Change Occurs
- 1:16:20 How Much Learning Is Enough?
- 1:16:50 Learning In (Optic) Flow/Mind Drift
- 1:18:16 Synthesis/Summary
- 1:25:15 Learning With Repetition, Forming Habits
As always, thank you for your interest in science!
Please note that The Huberman Lab Podcast is distinct from Dr. Huberman's teaching and research roles at Stanford University School of Medicine. The information provided in this show is not medical advice, nor should it be taken or applied as a replacement for medical advice. The Huberman Lab Podcast, its employees, guests and affiliates assume no liability for the application of the information discussed.
Using NEURON - General questions
To start NEURON and bring up the NEURON Main Menu toolbar (which you can use to build new models and load existing ones) :
- UNIX/Linux : type nrngui at the system prompt.
- OS X : double click on the nrngui icon in the folder where you installed NEURON.
- MSWin : double click on the nrngui icon in the NEURON Program Group (or in the desktop NEURON folder).
To start NEURON from Python and bring up the NEURON Main Menu, launch python, then type
from neuron import gui
To make NEURON read a file called foo.hoc when it starts :
- UNIX/Linux : type nrngui foo.hoc at the system prompt. This also works for ses files.
- OS X : drag and drop foo.hoc onto the nrngui icon. This also works for ses files.
- MSWin : use Windows Explorer (not Internet Explorer) to navigate to the directory where foo.hoc is located, and then double click on foo.hoc . This does not work for ses files.
To exit NEURON : type quit() or ^D ("control D") at the oc> prompt, or use File / Quit in the NEURON Main Menu toolbar.
Installation went smoothly, but every time I bring NEURON up, the interpreter prints this strange message: "jvmdll" not defined in nrn.def JNI_CreateJavaVM returned -1
You must be running an old version of NEURON. Warnings about Java, such as "Can't create Java VM" or "Info: optional feature is not present" mean that NEURON can't find a Java run-time environment. This is of interest only to individuals who are using Java to develop new tools. NEURON's computational engine, standard GUI library, etc. don't use Java.
What's the best way to learn how to use NEURON?
How do I create a NEURON model?
By specifying representations in the computer of the three basic physical components of an actual experiment.
| Component | Wet lab | Computational modeling |
| --- | --- | --- |
| Experimental preparation: what is the biology itself? | brain slice, tissue culture etc. | specification of what anatomical and biophysical properties to represent in the model |
| Instrumentation: how will you stimulate it and record results? | voltage/current clamp, electrodes, stimulator, recorder etc. | computational representations of clamps, electrodes etc.; specification of which variables to monitor and record |
| Control: how do you automate the experimental protocol? | programmable pulse generators etc. | time step, when to stop, integration method, optimization algorithms |
The classical approach to using NEURON is to specify all three components by writing a program in hoc, NEURON's programming language. You can do this with any editor you prefer, as long as it can save your code to an ASCII text file. Make sure your hoc files end with the extension .hoc
A more recent approach is to use the NEURON Main Menu toolbar's dropdown menus, which allow you to quickly create a wide range of models without having to write any code at all. You can save the GUI's windows and panels to session files that you can use later to recreate what you built (see the FAQ "What is a ses (session) file?").
The most flexible and productive way to work with NEURON is to combine hoc and the GUI in ways that exploit their respective strengths. Don't be afraid of the GUI--no one will accuse you of being a "girly man" if you take advantage of its powerful tools for model specification, instrumentation, and control. In fact, many of the GUI's most useful tools would be extremely difficult and time consuming to try to duplicate by writing your own hoc code.
Be sure to read the FAQ "Help! I'm having a hard time implementing models!"
Help! I'm having a hard time implementing models!
Here are some general tips about program development.
- Before you write any code, write down an explicit outline of how it should work. Use a "top-down" approach to avoid being overwhelmed at the start by implementational details.
- Successful programming demands an incremental cycle of revision and testing. Start small with something simple that works. Add things one at a time, testing at every step to make sure the new stuff works and didn't break the old stuff.
- Comment your code.
- Use a "modular" programming style. At the most concrete level, this means using lots of short, simple procs and funcs.
Also, "don't throw all your code into one giant hoc (or ses) file." Regardless of whether you use hoc, the GUI, or both, it will be much easier to create and revise programs if you keep model specification (the "experimental preparation") separate from instrumentation and control (the "user interface"). You might even put them in separate files, e.g. "model.hoc" might contain the code that specifies the anatomy and biophysics of your model cell or network, and "rig.ses" might specify a RunControl panel and other graphical tools that you use to run simulations, apply stimuli, and display results. Then you create a third file, called "init.hoc", which contains the following statements :
load_file("nrngui.hoc") // get NEURON's gui library
load_file("model.hoc") // the model specification
load_file("rig.ses") // the instrumentation, control, and user interface
When NEURON executes init.hoc, up comes your model and user interface.
This greatly simplifies program development, testing and maintenance. For example, complex models and experimental rigs can be constructed in an incremental manner, so that init.hoc grows to contain many load_file statements.
- Mine other code (e.g. the Programmers' Reference) for reusable or customizable working examples. "Good programmers imitate great code, great programmers steal great code." But test all code.
Why can't NEURON read the text file (or hoc file) that I created?
The Mac, MSWin, and UNIX/Linux versions of NEURON can read ASCII text files created under any of these operating systems. ASCII, which is sometimes called "plain text" or "ANSI text", encodes each character with only 7 bits of each byte. Some text editors offer alternative formats for saving text files, and if you choose one of these you may find that NEURON will not read the file. For example, Notepad under Win2k allows files to be saved as "Unicode text", which will gag NEURON.
How do I print a hard copy of a NEURON window?
Use the Print & File Window Manager (PFWM). Download printing.pdf to learn how.
How do I plot something other than membrane potential?
How do I save and edit figures?
The quick and dirty way is to capture screen images as bitmaps. The results are suitable for posting on WWW sites but resolution is generally too low for publication or grant proposals, and editing is a pain. For highest quality, PostScript is best. Use the Print & File Window Manager (PFWM) to save the graphs you want to an Idraw file. This is an encapsulated PostScript format that can be edited by idraw, which comes with the UNIX/Linux version of NEURON. It can also be imported by many draw programs, e.g. CorelDraw. To learn more, see this tutorial from the NEURON Summer Course.
I've used the NEURON Main Menu to construct and manage models. How can I save what I have done?
Here's how to save the GUI tools you spawned to a session file.
What is a ses (session) file? Can I edit it?
A session file is a plain text file that contains hoc statements that will recreate the windows that were saved to it. It is often quite informative to examine the contents of a ses file, and sometimes it is very useful to change the file's contents with a text editor. Read this for more information.
How do I use NEURON's tools for electrotonic analysis?
See this sample lesson from the NEURON Summer Course
Why should I use an odd value for nseg?
So there will always be a node at 0.5 (the middle of a section).
Read about this in "NEURON: a Tool for Neuroscientists" by Hines & Carnevale.
What's a good strategy for specifying nseg?
Probably the easiest and most efficient way is to use what we call the d_lambda rule, which means "set nseg to a value that is a small fraction of the AC length constant at a high frequency."
Get a copy of "NEURON: a Tool for Neuroscientists", which explains how it works.
Read how to use the d_lambda rule with your own models.
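For readers who want to see the arithmetic, here is a plain-Python sketch of the d_lambda rule as described in the NEURON literature. The membrane parameter defaults (Ra, cm) are illustrative assumptions only; use your own model's values:

```python
import math

def lambda_f(freq, diam, Ra, cm):
    """AC length constant in um at frequency freq (Hz), for diam in um,
    Ra in ohm*cm, and cm in uF/cm^2 (as in NEURON's stdlib lambda_f)."""
    return 1e5 * math.sqrt(diam / (4 * math.pi * freq * Ra * cm))

def nseg_d_lambda(L, diam, Ra=160.0, cm=1.0, freq=100.0, d_lambda=0.1):
    """Smallest odd nseg such that no segment is longer than
    d_lambda * lambda_f -- 'a small fraction of the AC length constant
    at a high frequency.'"""
    return int((L / (d_lambda * lambda_f(freq, diam, Ra, cm)) + 0.9) / 2) * 2 + 1

# Example: a 1000 um long, 1 um diameter cable
nseg = nseg_d_lambda(L=1000.0, diam=1.0)
```

The `* 2 + 1` guarantees an odd value, so there is always a node at 0.5, consistent with the earlier FAQ entry.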
How do I change the background color used in NEURON's shape plots and other graphs?
How do I change the color scale used in shape plots?
I see an error message that says "procedure too big in ./foo.hoc".
Where can I find examples of mod files?
How do I compile mod files?
Depends on whether you're running NEURON under MSWindows, UNIX/Linux, OS X, or MacOS. Whichever you use, it's a good idea to keep related mod files in the same directory as the hoc files that need them.
I can't get mod files to compile.
Go to The NEURON Forum and check out the "NEURON installation and configuration" discussions for your particular operating system (OS X, MSWin, UNIX/Linux). For OS X and UNIX/Linux this problem often means that the software development environment (compilers and associated libraries) is missing or incomplete.
I installed a new version of NEURON, and now I see error messages like this: 'mechanisms fooba needs to be re-translated. its version 5.2 "c" code is incompatible with this neuron version'.
Compiling NMODL files produces several "intermediate files" whose names end in .o and .c . This error message means that you have some old intermediate files that were produced under the earlier version of NEURON. So just delete all the .o and .c files, then run nrnivmodl (or mknrndll), and the problem should disappear.
Is there a list of the functions that are built into NMODL?
Is there a list of the functions that are built into hoc?
You'll find them in the Programmer's Reference. Also see chapter 11. Interpreter - General in the old "Reference Manual."
What units does NEURON use for current, concentration, etc.?
If you're using the GUI, you've probably noticed that buttons next to numeric fields generally indicate the units, such as (mV), (nA), (ms) for millivolt, nanoamp, or millisecond.
Here's a chart of the units that NEURON uses by default.
If you're writing your own mod files, you can specify what units will be used. For example, you may prefer to work with micromolar or nanomolar concentrations when dealing with intracellular free calcium or other second messengers. You can also define new units. See this tutorial to get a better understanding of units in NMODL.
For the terminally curious, here is a copy of the units.dat file that accompanies one of the popular Linux distributions. Presumably mod file variables should be able to use any of its entries.
When I type a new value into a numeric field, it doesn't seem to have any effect.
You seem to be using a very old version of NEURON. If you can't update to the most recent version, try this:
After entering a new value, be sure to click on the button next to the numeric field (or press the Return key) so that the bright yellow warning indicator on the button is turned off. While the yellow indicator is showing, the field editor is still in entry mode and its contents have not yet been assigned to the proper variable in memory.
What is the difference between SEClamp and VClamp, and which should I use?
SEClamp is just an ideal voltage source in series with a resistance (Single Electrode Clamp), while VClamp is a model of a two electrode voltage clamp whose equivalent circuit includes the nonidealities of real electrodes and amplifiers.
If the purpose of your model is to study the properties of a cell, use SEClamp. If the purpose is to study how instrumentation artefacts affect voltage clamp data, use VClamp. For more information about these and other built-in point process mechanisms, go to the Programmer's Reference and click on the term pointprocesses.
SEClamp and IClamp just deliver rectangular step waveforms. How can I make them produce an arbitrary waveform, e.g. something that I calculated or recorded from a real cell?
The Vector class's play method can be used to drive any variable with a sequence of values stored in a Vector. For example, you can play a Vector into an IClamp's amp, an SEClamp's amp1, an SEClamp's series resistance rs (e.g. if you have an experimentally measured synaptic conductance time course). To learn how to do this, get vectorplay.zip, which contains an exercise from one of our 5-day hands-on NEURON courses. Unzip it in an empty directory. This creates a subdirectory called vectorplay, where you will find a file called arbforc.html
Open this file with your browser and start the exercise.
I just want a current clamp that will deliver a sequence of current pulses at regular intervals. Vector play seems like overkill for this.
Right you are. Pick up pulsedistrib.zip, and unzip it into an empty directory. This creates a subdirectory called pulsedistrib, which contains Ipulse1.mod, Ipulse2.mod, readme.txt, and test_1_and_2.hoc. Read readme.txt, compile the mod files, and then use NEURON to load test_1_and_2.hoc, which is a simple demo of these two current pulse generators.
pulsedistrib also contains ipulse3.mod, ipulse3rig.ses, and test_3.hoc, which address the next question in this list.
I want a current clamp that will generate a pulse when I send it an event, or that I can use to produce pulses at precalculated times.
Then get pulsedistrib.zip, and unzip it. Inside the pulsedistrib subdirectory you'll find ipulse3.mod, ipulse3rig.ses, and test_3.hoc (and some other files that pertain to the previous question). ipulse3.mod contains the NMODL code for a current clamp that produces a current pulse when it receives an input event. test_3.hoc is a simple demo of the Ipulse3 mechanism, and ipulse3rig.ses is used by test_3.hoc to create the GUI for a demo of Ipulse3. It uses a NetStim to generate the events that drive the Ipulse3. If you want to drive an Ipulse3 with recorded or precomputed event times, use the VecStim class as described under the topic Driving a synapse with recorded or precomputed spike events in the "Hot tips" area of the NEURON forum.
I have a set of recorded or calculated spike times. How can I use these to drive a postsynaptic mechanism?
Assuming that your synaptic mechanism has a NET_RECEIVE block, so that it is driven by events delivered by a NetCon, I can think of two ways this might be done. Which one to use depends on how many calculated spike times you are dealing with.
If you only have a "few" spikes (up to a few dozen), you could just dump them into the spike queue at the onset of the simulation. Here's how:
1. Create a Vector and load it with the times at which you want to activate the synaptic mechanism.
2. Then use an FInitializeHandler that stuffs the spike times into the NetCon's event queue by calling the NetCon class's event() method during initialization.
For example, if the Vector that holds the event times is syntimes, and the NetCon that drives the synaptic point process is nc, an FInitializeHandler whose statement loops over the elements of syntimes and calls nc.event() on each one would work.
Don't forget that these are treated as delivery times, i.e. the NetCon's delay will have no effect on the times of synaptic activation. If additional conduction latency is needed, you will have to incorporate it by adding the extra time to the elements of syntimes before the FInitializeHandler is called.
If you have a lot of spikes then it's best to use an NMODL-defined artificial spiking cell that generates spike events at times that are stored in a Vector (which you fill with data before the simulation). For more information see Driving a synapse with recorded or precomputed spike events in the "Hot tips" area of the NEURON forum.
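The delivery-time semantics described above (events come off the queue strictly in time order, and the stored value *is* the delivery time, so no delay is applied afterwards) can be illustrated with a plain-Python toy. This is a conceptual sketch, not NEURON code:

```python
import heapq

def run_queue(delivery_times, on_spike):
    """Toy event queue: seed it with delivery times at t=0, then pop and
    deliver events in ascending time order, mimicking how pre-loaded
    spike times activate a synapse with no further latency added."""
    q = list(delivery_times)
    heapq.heapify(q)
    delivered = []
    while q:
        t = heapq.heappop(q)
        on_spike(t)       # the synaptic mechanism fires at exactly t
        delivered.append(t)
    return delivered

times = []
order = run_queue([12.0, 3.0, 7.5], on_spike=times.append)
```

If extra conduction latency is needed, it has to be added to the stored times before the queue is seeded, just as the answer above says.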
How can I read data from a binary PClamp file?
clampex.zip contains a mod file that defines an object class (ClampExData) whose methods can read PClamp binary files--or at least it could several years ago--plus a sample data file and a hoc file to illustrate usage. If ClampExData doesn't work with the most recent PClamp file formats, at least clampex.mod is a starting point that you can modify as needed.
How do I exit NEURON? I'm not using the GUI, and when I enter ^D at the oc> prompt it doesn't do anything.
You seem to be using an older MSWin or MacOS version of NEURON (why not get the most recent version?). Typing the command
at the oc> prompt works for all versions, new or old, under all OSes. Don't forget the parentheses, because quit() is a function. Oh, and you need to press the Enter or Return key too.
Memory, learning and decision-making studied in worms
As anyone who has ever procrastinated knows, remembering that you need to do something and acting on that knowledge are two different things. To understand how learning changes nerve cells and leads to different behaviors, researchers studied the much simpler nervous system of worms.
"In this study, we can now translate neuronal activity to behavioral response," said Project Researcher Hirofumi Sato, a neuroscientist at the University of Tokyo and first author of the research paper recently published in Cell Reports.
The discovery was made possible using technology that researchers describe as a "robot microscope," first developed in 2019 by researchers at Tohoku University in Miyagi Prefecture, northeastern Japan.
The technique involves genetically modifying the worms to add fluorescent tags onto specific molecules. The microscope then detects and tracks the fluorescent light as a worm crawls around, meaning researchers can watch chemical signals travel through and between individual neurons in awake, unrestrained animals.
The worms used in research studies, C. elegans, don't eat pure salt, but researchers can train worms to associate high or low salt levels in their environment with food. When transferred to any new environment, trained worms will begin searching for food using salt levels as a clue about which direction they should go. For example, if worms learned to expect food in high-salt areas but they notice that salt levels are decreasing as they travel, the worms will stop and change directions to try to find a higher salt level. With additional training, worms can also learn the opposite food-salt level association.
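The learned behavior described above can be caricatured as a two-line decision rule. This is purely an illustration of the trained worms' strategy, not the study's actual model, and all names and numbers are made up:

```python
def choose_action(prefers_high_salt, salt_now, salt_before):
    """Toy rule: a trained worm keeps crawling while the salt gradient moves
    toward the level it has learned to associate with food, and stops to
    change direction otherwise."""
    gradient_rising = salt_now > salt_before
    if prefers_high_salt == gradient_rising:
        return "keep going"
    return "stop and change direction"

# Worm trained on high salt, but salt is decreasing along its path:
action = choose_action(True, salt_now=0.4, salt_before=0.6)
```

The puzzle the researchers describe is precisely how opposite sensory cues (rising vs. falling salt) converge on the same motor output in this rule's second branch.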
Neuroplasticity, or the brain's ability to change and "rewire" neurons, is essential for any learned behavior. The mystery for the scientific community is how different environmental clues (high or low salt) can lead to the same physical behavior (stop and change direction).
"Many animals show this flexible learned behavior pattern, so we want to understand the mechanism," said Sato.
This type of behavior requires a sensory neuron (which detects salt), motor neurons (which control movement) and interneurons (which communicate between the other two types). Although C. elegans only have 302 neurons in their entire 1-centimeter-long bodies, these same types of neurons exist in humans and communicate using the same signal molecule.
Specifically, that signal molecule is glutamate, widely recognized as one of the brain's most important signaling molecules.
"We know that if there is a defect in glutamate signaling, that might cause Alzheimer's disease or other neuronal diseases," said Sato.
The UTokyo team's new data found two different types of glutamate receptors on the same interneuron are involved in the worms' behavior. Both inhibitory and excitatory glutamate receptors respond in the same pattern, but at different intensities based on whether the worms had learned to seek high or low salt concentrations.
The exact mechanism controlling the motor neuron's signals to the interneuron's glutamate receptors remains unclear. However, this is one of the first documentations of glutamate signaling between sensory and interneurons showing experience-dependent plasticity.
Future research will continue to investigate exactly how the sensory neuron and interneuron communicate.
Hirofumi Sato, Hirofumi Kunitomo, Xianfeng Fei, Koichi Hashimoto, and Yuichi Iino. 25 May 2021. Glutamate signaling from a single sensory neuron mediates experience-dependent bidirectional behavior in Caenorhabditis elegans. Cell Reports. https://doi.org/10.1016/j.celrep.2021.109177
ELI5: How does the "logic" of neurons in our brain differ from logic gates in circuits?
As far as I know, almost all computer systems are based upon the logic gate systems of AND, OR, and so forth, and it's large systems of these gates that allow a circuit to do something like addition. Do neurons have a similar system, and if so, do we know or understand the logic of this system?
Neuronal circuits are analog.
They have several different neurotransmitters (rather than just an electron or lack thereof for on/off binary) that have different functions.
Some neurotransmitters are excitatory (they add to the signal on the following neuron); others inhibit the following neuron, or even the one before it. Other neurotransmitters modulate the strength of the next signal, or of the next 50 or 100 signals, by changing the expression of receptors. Furthermore, both the amplitude AND the frequency of neurotransmitter impulses affect the signal that is sent forward. Lots of low- or high-frequency messaging with the same neurotransmitter means different things.
Not all of this has been teased out yet. But one example of how amplitude also matters is that a small pulse may not trigger the next neuron in a circuit, but might make it more likely to fire in response to a second pulse (like an AND gate). A strong signal might immediately make a neuron fire, but also make it LESS likely to fire the next time. This type of stop/go circuitry allows for some very complex processing.
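That "two pulses close together fire, one alone doesn't" behavior can be sketched with a toy leaky integrate-and-fire model. This is not a physiological simulation: the threshold, leak factor, and pulse size below are all made-up illustrative values.

```python
# Toy leaky integrate-and-fire neuron: one subthreshold pulse decays away,
# but two pulses close together sum past threshold and trigger a spike
# (the AND-gate-like behavior described above). All constants are arbitrary.

def simulate(pulse_times, n_steps=100, pulse=0.6, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron fires."""
    v = 0.0              # membrane potential (arbitrary units)
    spikes = []
    for t in range(n_steps):
        v *= leak                    # passive decay toward rest
        if t in pulse_times:
            v += pulse               # incoming subthreshold pulse
        if v >= threshold:
            spikes.append(t)
            v = 0.0                  # reset after firing
    return spikes

print(simulate({10}))        # single pulse: no spike -> []
print(simulate({10, 12}))    # two pulses close together: spike -> [12]
print(simulate({10, 50}))    # pulses far apart, each decays -> []
```

The leak term is what makes timing matter: the same total input produces a spike or nothing depending on how closely the pulses arrive.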
Finally, the other unique thing is that neurons have a property called plasticity. They can form new connections, so as you learn, your brain will actually change up the circuitry to better accommodate the tasks you are doing. That means connections (synapses) change. This is something that CPUs and GPUs cannot do: the pathing of circuits in a CPU/GPU is entirely fixed. Not so in the brain!
This is why when young kids have brain damage, they are able to regain some of the abilities they lost - for example, learning to speak again after severe brain trauma. The brain re-adapts other circuitry for that. Overall, the brain is incredibly complicated. One of my research projects is on the nature of consciousness and how it arises from neuronal circuitry.
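The plasticity described above is what artificial neural networks approximate with weight updates, in the spirit of the Hebbian "cells that fire together wire together" rule. A minimal sketch, where the learning rate and activity values are illustrative assumptions, not measured quantities:

```python
# Minimal Hebbian update: each weight grows in proportion to the
# correlation between its input (pre) and the neuron's output (post).
# Learning rate and activity values are arbitrary illustrative numbers.

def hebbian_step(weights, pre, post, eta=0.1):
    """Strengthen each weight by eta * (presynaptic activity) * (postsynaptic activity)."""
    return [w + eta * x * post for w, x in zip(weights, pre)]

w = [0.2, 0.2, 0.2]
pre = [1.0, 0.0, 1.0]   # inputs 1 and 3 were active, input 2 was silent
post = 1.0              # the neuron fired

w = hebbian_step(w, pre, post)
print(w)  # active inputs strengthened, the silent one unchanged
```

Real synaptic plasticity is far richer (timing-dependent, receptor-level, structural), but this is the core abstraction the weights wij in the question are standing in for.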
Here are some of the basic neuronal circuits. Keep in mind this COMPLETELY ignores the presence of multiple types of neurotransmitters, and that many neurons will have other neurons from other circuits feeding into them, with various transmitters, in order to further modulate each circuit.
What does this look like in real life? Here's the inside of the cerebellum. You can see that there are circuits accepting incoming input from deeper in the brain, cell bodies near the surface, and then a lot of little fibers and connections that run perpendicular at the surface, cross-connecting these neurons. These circuits handle balance and coordination: they compare the input from the brain to your muscle response, using sensory nerves (called proprioceptors) inside your muscles and tendons, and then suggest better inputs to your motor cortex. These are the circuits alcohol inhibits, so when people act drunk and stumble around, these have been turned off by alcohol :)
Richards on the Brain
Neuron Development: in human brains, approximately 10 billion cells are needed to form just the “cerebral cortex” that blankets a single hemisphere. To produce such a large number of cells, about 250,000 neurons must be born per minute at the peak of “prenatal” brain development. (Kolb, 195)
As neurons are born, they migrate to their proper locations in the brain and connections between them and other neurons begin to form. (Goldberg, 39) Recently, neuroscientists have discovered that the brain does change throughout life. (Best of the Brain-Fred Gage, 121) “Neurogenesis” continues into old age, though at a slower rate than in earlier decades. And even that slowdown may not be inevitable, but rather a side effect of 'monotony.' Adding 'complexity' to a person’s social environment primes new “learning,” enhancing the rate at which the brain adds new cells. (Goleman, 239)
Arborization: a process of early growth of “dendrites.” (Goldberg, 39) Dendrites begin as individual "processes" protruding from the "cell body." Later, they develop increasingly complex extensions that look much like the branches of trees visible in winter. (Kolb, 198) Verb - ‘arborize.’
Cell Migration: (process of) newly formed cells traveling to their correct location. Begins about 8 weeks after “conception” and is largely complete by about 29 weeks. (Kolb, 195-196) During “embryonic development” in animals, cells migrate to their appropriate positions within the body. (Brooker, 203) Only 50% on average migrate successfully; the others perish. Newborn stem cells need to move away from their “precursors” before (the precursors) can “differentiate.” (Best of the Brain-Fred Gage, 123)
Myelination: the process by which “glial” cells wrap around long axons, forming a fatty protective coating called “myelin.” The dramatic increase in brain weight during the first years of life is largely due to myelination. The brain structures are not fully functional until the axons connecting them are insulated with myelin, and the time course of myelination varies vastly from structure to structure. (Goldberg, 40-41) Begins at the “axon hillock” and stops at the “axon terminal.” (Characterized by) discontinuous segments. (Patestas, 30) Adjective - ‘myelinated.’
Internode: each myelinated segment (of an axon). (Patestas, 30)
Myelin Sheath: multilayered wrapping of cell membrane around an axon. The electrical insulation on axons that is formed by “oligodendrocytes” in the “central nervous system” and “Schwann cells” in the “peripheral nervous system.” (Fields, 317) Allows faster and more energetically efficient conduction of impulses. (GHR) Acts as an electrical insulator that allows nerve impulses to travel faster by increasing the ‘resistance’ and decreasing the ‘capacitance’ over that found in unmyelinated nerve fibers. (NCIt) Myelin is white, which gave rise to the term “white matter” as opposed to the term “gray matter” which includes all the neuron (neuron bodies and dendrites) and short local non-myelinated “pathways.” Facilitates signal transmission along the axon, greatly enhancing and improving transmission of information within large coordinated neuronal ensembles. (Goldberg, 40) Also referred to as ‘myelin.’
Nodes of Ranvier: the ‘discontinuities’ of myelin between adjacent internodes. (Patestas, 30) Regularly spaced gaps in the myelin sheaths of peripheral axons. Allow ‘saltatory conduction,’ that is, jumping of impulses from node to node, which is faster and more energetically favorable than continuous conduction. (MeSH) Richly endowed with "voltage" sensitive "channels." Tiny gaps in the myelin sheath. Sufficiently close to one another that an "action potential" occurring at one node can trigger the opening of voltage sensitive gates at an adjacent node. In this way, a relatively slow action potential appears to jump almost instantaneously from node to node. (Kolb, 128)
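The "increasing resistance and decreasing capacitance" point from the Myelin Sheath entry can be made concrete with textbook cable theory. A rough sketch, assuming the idealized case where n wraps of membrane multiply membrane resistance by n and divide membrane capacitance by n (all units arbitrary):

```python
import math

# Cable-theory sketch of why myelin speeds conduction. The length
# constant (how far a signal spreads passively) grows with membrane
# resistance, while the time constant (how fast the membrane charges)
# depends on resistance * capacitance. Values are illustrative only.

def cable_constants(r_m, c_m, r_i):
    lam = math.sqrt(r_m / r_i)   # length constant
    tau = r_m * c_m              # membrane time constant
    return lam, tau

r_m, c_m, r_i = 1.0, 1.0, 1.0    # bare (unmyelinated) axon, arbitrary units
n = 100                          # wraps of myelin

lam_bare, tau_bare = cable_constants(r_m, c_m, r_i)
lam_myel, tau_myel = cable_constants(n * r_m, c_m / n, r_i)

print(lam_myel / lam_bare)  # signal spreads ~10x farther passively
print(tau_myel / tau_bare)  # charging time roughly unchanged
```

So under this idealization the signal spreads sqrt(n) times farther per charging cycle, which is why the action potential only needs to be actively regenerated at the widely spaced nodes of Ranvier.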
Neurogenesis: the process of forming neurons. Begins about 7 weeks after conception and is largely complete by 20 weeks. (Kolb, 196) The process of “stem cells” dividing and developing into functional new brain cells in the brain. That this happens in the adult human was firmly established in 1998. (Ratey, 282) The brain’s daily manufacture of new neurons. Neurogenesis is regulated by a variety of naturally occurring molecules called “growth factors.” (Best of the Brain-Fred Gage, 125) The important element ‘FGF-2’ that helps tissue grow, is necessary for neurogenesis and is increased during exercise. As we age, production of FGF-2, “BDNF” and other ‘growth factors’ naturally tails off, bringing down neurogenesis with it. (Ratey, 53) Editor's note - for “glial cells,” referred to as 'gliogenesis.’ Also referred to as ‘cell birth.’
Neuron Differentiation: process of “precursor” cells (changing) into the right type of neuron or “glial” cell. Begins about 8 weeks (after conception) and is largely complete by about 29 weeks. (Kolb, 195-196) The emergence of distinct types of cells in the brain does not result (only) from the unfolding of a specific genetic program. Instead, it is due to the interaction of genetic instructions, timing, and signals from other cells in the local environment. (Kolb, 198)
Synaptic Pruning: process to remove excess synapses in our brain, the synapses we don’t use. Begins quite early in childhood and peaks in adolescence and early adulthood. While “autism” involves insufficient synaptic pruning, “schizophrenia” involves excessive synaptic pruning. (Kandel4, 51) Brain process of getting rid of unneeded neurons. Occurs after birth and also unfolds at different time courses for different parts of the brain, the “frontal cortex” being the last. Pruning is akin to ‘sculpting,’ a process that the great sculptor Auguste Rodin described as ‘eliminating everything that does not belong.’ Pruning is not random, but rather is a consequence of reinforcing heavily used neural structures and letting go of those under-used or not used at all. (Goldberg, 40) During the final months of prenatal development, there is fierce competition among neurons to make connections and survive. Neurons that don’t make connections are eliminated. (Hockenbury, 377) Also referred to as ‘pruning’ and ‘neuronal pruning.’
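The "eliminating everything that does not belong" idea has a loose analogue in artificial networks: magnitude pruning, which drops connections whose learned weights stayed small (i.e. were little used). This is an analogy, not a model of the biological process; the weights and threshold below are arbitrary illustrative values.

```python
# Magnitude pruning: zero out under-used (small-magnitude) connections,
# keeping the heavily reinforced ones. Values are illustrative only.

def prune(weights, threshold=0.1):
    """Drop connections whose absolute weight falls below threshold."""
    return [w if abs(w) >= threshold else 0.0 for w in weights]

synapses = [0.8, 0.02, -0.5, 0.05, 0.3]
print(prune(synapses))  # -> [0.8, 0.0, -0.5, 0.0, 0.3]
```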