How substantial is the difference between the neural signal associated with seeing an image and that associated with imagining it? Surely the imagined image cannot reuse the entire pathway from the sensory organs onward, but approximately how much of the pathway is shared between the two processes?
Is there some fundamental difference in the neural pathways of these two processes, or is the imagined image just a blurred version of the visual one?
Visual perception and visual imagery draw on much of the same neural machinery.
I have interpreted your question as: What are the common neural circuitries between visual sensation and the imagination of sensation?
In neuroimaging, mental imagery of visual images is a big deal. For example, there is a large body of literature on cross-modal activation of visual areas of the brain in blind people. It is known that visual deprivation results in neural plasticity and the recruitment of visual brain areas by other sensory systems. For example, blind Braille readers show activation of the primary visual cortex when reading Braille (reviewed in Stronks et al., 2015). However, interpreting these findings is difficult in late-blind individuals, because they experienced visual input earlier in life. Hence, while reading Braille, they may be mentally reproducing the visual representation of the Braille cells using visual neural circuitry.
Indeed, it has been shown with fMRI that visual imagery and visual perception draw on most of the same neural machinery (Ganis et al., 2004). However, the spatial overlap of the activated regions is neither complete nor uniform. The overlap in this study was more pronounced in frontal and parietal regions than in temporal and occipital regions, indicating that cognitive control processes function comparably in both imagery and perception, but not identically.
Various studies reveal different results, however. In another imaging study, 'just' two-thirds of brain regions overlapped in visual sensation and imagery (Kosslyn et al., 1997). Indeed, the experimental task used may have important effects on study outcomes.
Most notable in this regard is that approximately half of the studies have found activation of the primary visual cortex during imagery (Kosslyn et al., 1999). This is interesting, because the primary visual cortex is generally thought to be an early, low-level area of the visual system that depends on thalamic input relaying information from the optic nerve to the brain; that is, it is generally believed to depend on sensory stimulation. Typically, only the higher-level associative visual areas are associated with higher cognitive processes.
- Ganis et al., Cognitive Brain Research (2004); 20: 226-41
- Kosslyn et al., NeuroImage (1997); 6: 320-34
- Kosslyn et al., Science (1999); 284: 167-70
- Stronks et al., Brain Research (2015); 1624: 140-52
I read some articles on the thalamus (I can't currently remember which ones) that suggested that there is overlap, but that when you are imagining something, the thalamus blocks the signals from entering the perceptual system. This is still speculative, though, as the resolution of current measurement methods leaves a lot to be desired.
In some mental disorders the signal isn't blocked by the thalamus (if that is indeed what happens) and is instead experienced as a hallucination. There is also some evidence that this happens, in a less intrusive way, in some people we perhaps wouldn't call mentally ill.
Nikola Tesla, for example, suffered from what we would call hallucinations, but was able to learn to control them to some extent and use them as a tool to help him design machines. I think the following section of his autobiography is quite interesting.
There was another and still more important reason for my late awakening. In my boyhood I suffered from a peculiar affliction due to the appearance of images, often accompanied by strong flashes of light, which marred the sight of real objects and interfered with my thought and action. They were pictures of things and scenes which I had really seen, never of those I imagined. When a word was spoken to me the image of the object it designated would present itself vividly to my vision and sometimes I was quite unable to distinguish whether what I saw was tangible or not. This caused me great discomfort and anxiety. None of the students of psychology or physiology whom I have consulted could ever explain satisfactorily these phenomena. They seem to have been unique although I was probably predisposed as I know that my brother experienced a similar trouble. The theory I have formulated is that the images were the result of a reflex action from the brain on the retina under great excitation. They certainly were not hallucinations such as are produced in diseased and anguished minds, for in other respects I was normal and composed. To give an idea of my distress, suppose that I had witnessed a funeral or some such nerve-racking spectacle. Then, inevitably, in the stillness of night, a vivid picture of the scene would thrust itself before my eyes and persist despite all my efforts to banish it. Sometimes it would even remain fixed in space though I pushed my hand through it. If my explanation is correct, it should be able to project on a screen the image of any object one conceives and make it visible. Such an advance would revolutionize all human relations. I am convinced that this wonder can and will be accomplished in time to come; I may add that I have devoted much thought to the solution of the problem.
The rest of his autobiography has more details.
Is what I see, what I imagine? Study finds neural overlap between vision and imagination
An ibis as "seen" by a machine, 2015. This processed image, based on a photograph by Dr. Zachi Evenor, is courtesy of software engineer Guenther Noack and is reproduced from Wikimedia Commons (CC BY 4.0).
Medical University of South Carolina researchers report in Current Biology that the brain uses similar visual areas for mental imagery and vision, but it uses low-level visual areas less precisely with mental imagery than with vision.
These findings add to the field by refining methods for studying mental imagery and vision. In the long term, they could have applications for mental health disorders that affect mental imagery, such as post-traumatic stress disorder. One symptom of PTSD is intrusive visual reminders of a traumatic event. If the neural function behind these intrusive images can be better understood, better treatments for PTSD could perhaps be developed.
The study was conducted by an MUSC research team led by Thomas P. Naselaris, Ph.D., associate professor in the Department of Neuroscience. The findings by the Naselaris team help answer an age-old question about the relationship between mental imagery and vision.
"We know mental imagery is in some ways very similar to vision, but it can't be exactly identical," explained Naselaris. "We wanted to know specifically in which ways it was different."
To explore this question, the researchers used a form of artificial intelligence known as machine learning and insights from machine vision, which uses computers to view and process images.
"There's this brain-like artificial system, a neural network, that synthesizes images," Naselaris explained. "It's like a biological network that synthesizes images."
The Naselaris team trained this network to see images and then took the next step of having the computer imagine images. Each part of the network is like a group of neurons in the brain, and each level of the network serves a different function in vision and, in turn, in mental imagery.
To test the idea that these networks are similar to the function of the brain, the researchers performed an MRI study to see which brain areas are activated with mental imagery or vision.
While inside the MRI scanner, participants viewed images on a screen and were also asked to imagine images at different points on the screen. The imaging enabled researchers to determine which parts of the brain were active or quiet while participants viewed a combination of animate and inanimate objects.
Once these brain areas were mapped, the researchers compared the results from the computer model to human brain function.
They discovered that the computer and human brains functioned similarly. Areas of the brain from the retina of the eye to the primary visual cortex and beyond are activated by both vision and mental imagery. However, in mental imagery, the activation from the eye to the visual cortex is less precise and, in a sense, diffuse. This parallels the neural network: with computer vision, the low-level areas that represent the retina and visual cortex show precise activation, while with mental imagery this activation becomes diffuse. In brain areas beyond the visual cortex, activation of the brain or the neural network is similar for both vision and mental imagery. The difference lies in what happens between the retina and the visual cortex.
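One way to picture "precise vs. diffuse" activation is a toy sketch (not the authors' model; the tuning widths are invented for illustration): give a low-level visual unit a Gaussian tuning curve over retinotopic position, narrow for perception and widened for imagery.

```python
import numpy as np

def receptive_field(positions, center, sigma):
    """Gaussian activation profile of a visual unit over retinotopic position."""
    return np.exp(-((positions - center) ** 2) / (2 * sigma ** 2))

positions = np.linspace(-10, 10, 201)  # degrees of visual angle, 0.1-degree steps

seen = receptive_field(positions, center=0.0, sigma=1.0)      # precise: perception
imagined = receptive_field(positions, center=0.0, sigma=3.0)  # diffuse: imagery

# Width of the region activated above half-maximum, in degrees:
step = positions[1] - positions[0]
width_seen = np.sum(seen > 0.5) * step
width_imagined = np.sum(imagined > 0.5) * step
print(width_seen < width_imagined)  # True: the imagined profile is less sharply tuned
```

The same stimulus position activates a much broader swath of units under the "imagery" width, which is one way to read the "fuzziness" Naselaris describes below.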
"When you're imagining, brain activity is less precise," said Naselaris. "It's less tuned to the details, which means that the kind of fuzziness and blurriness that you experience in your mental imagery has some basis in brain activity."
Naselaris hopes these findings and developments in computational neuroscience will lead to a better understanding of mental health issues.
The fuzzy, dream-like quality of imagery helps us distinguish between our waking and dreaming moments. In people with PTSD, intrusive images of traumatic events can become debilitating and feel like reality in the moment. By understanding how mental imagery works, scientists may better understand mental illnesses characterized by disruptions in mental imagery.
"When people have really invasive images of traumatic events, such as with PTSD, one way to think of it is mental imagery dysregulation," explained Naselaris. "There's some system in your brain that keeps you from generating really vivid images of traumatic things."
A better understanding of how this works in PTSD could provide insight into other mental health problems characterized by mental imagery disruptions, such as schizophrenia.
"That's very long term," Naselaris clarified.
For now, Naselaris is focusing on how mental imagery works, and more research needs to be done to address the connection to mental health.
A limitation of the study is the inability to fully recreate the mental images conjured by participants during the experiment. The development of methods for translating brain activity into viewable pictures of mental images is ongoing.
This study not only explored the neurological basis of seen and imagined imagery but also set the stage for research into improving artificial intelligence.
"The extent to which the brain differs from what the machine is doing gives you some important clues about how brains and machines differ," said Naselaris. "Ideally, they can point in a direction that could help make machine learning more brainlike."
Learning you've been living life in the dark
When I finally picked myself up off the floor on that August day, I returned to my laptop and descended into an aphantasia Google hole. I read numerous articles that described similar surprise realizations, and even watched half a TED Talk before reaching out to two friends for comfort. I hammered them with visual imagination questions, but each response only confirmed what I feared: People were living with a remarkable ability that I lacked.
I assumed all people saw the same vapid, nondescript nothingness when they closed their eyes.
I called my mom to break the news, but much like the mom detailed in The Guardian piece, she was in denial and tried to reassure me that I have a wonderful imagination. "You're telling me you can't picture Barack Obama's face?" she asked in a baffled tone. "I can't even picture your face!" I sobbed.
My mind kicked into overdrive and hastily cycled through a series of concerns: Does a lack of visual imagination mean I’m not creative? How did I major in creative writing? Is this why I sucked at penning concrete details in poetry class? Should I even be a writer? Does this explain my struggle to focus when reading fiction? Is Lorde's "Supercut" about a *literal* supercut in her head? Can people just direct personal short films and watch their wildest fantasies play out whenever they damn well please?
That last thought hit me the hardest. As someone who worships television and cinema to a borderline unhealthy extent, the realization that I was essentially missing a screen in my mind that contained endless possibilities — a place where I could project and replay scenes from my own life, envision an endless string of future scenarios, and visually conceptualize my most ambitious and outlandish ideas — was soul crushing.
What is the difference between "seeing things" visually, mentally and hallucinogenically?
I can see things visually, I can imagine things in my mind, and a hallucination is visually seeing an imagined thing. I'm wondering how this works, and I have a few questions about it.
If a person who is currently hallucinating is visually seeing what his mind has imagined, then while he is in this state, with his imagination transposed onto his visual image, would purposely imagining something else override the current hallucination with a new one he thought up? If not, why not?
To a degree, if I concentrate, I can make something look to me as if it is slightly moving, or make myself feel as if the earth is swinging back and forth. Unintentional, subconscious hallucinations seem much more powerful, however. Why?
Actually, the brain is not a passive receiver of information.
When you get information from the eyes (an electromagnetic signal), it is compressed and sent through the optic nerve to the thalamus.
There it meets a flow of information coming from the occipital cortex (where most of the visual areas are). Why? So that the information from the eyes can be compared to the working model of the real world your brain is ALREADY predicting. To put it simply, you see with the occipital lobe. But the model needs to be updated: the flow of information the optic nerve provides helps update the model already in your brain, tweaking it to reflect the information being gathered.
If we depended completely on the input from the eyes and were passive receivers of information, the brain would not be structured like this, and we would need more brainpower to process what we are seeing.
Most of what we see is just a useful representation of the world, not a very faithful one. Remember the white-and-gold / black-and-blue dress? That has to do with how your brain decides to handle the available information. Colors are not real either; they are something the brain makes up.
Lots of things in our perception are actually illusions, and that's OK. The thing is, when you hallucinate, you are allowing yourself to process as an actual perception something that should have been inhibited: you have a filter that is not working correctly. Some scientists attribute this to an overly active dopaminergic system that teaches you that certain cognitive processes reflect the real world when they do not. It is as if the filter has too low a threshold for deciding what is real and what is not when thoughts emerge from what you are watching; the network is overly active, generating representations that should not be there.
So, to answer the question: the difference is the source. Illusions happen all the time; they are part of the visual processing system. But a visual processing system that is too lax in controlling network activation leads you to see even more things that are not there.
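A crude way to sketch this predictive loop in code (purely illustrative; the functions, numbers, and threshold are invented for the example, not taken from any model in the literature):

```python
def update_model(model, sensory_input, learning_rate=0.5):
    """Nudge the internal world model toward the sensory evidence."""
    prediction_error = sensory_input - model
    return model + learning_rate * prediction_error

def tag_as_real(internal_signal, sensory_evidence, threshold=0.6):
    """Toy 'reality filter': a representation is experienced as perception
    only if it is sufficiently supported by sensory evidence."""
    support = 1.0 - abs(internal_signal - sensory_evidence)
    return support >= threshold

model = 0.0
for sample in [1.0, 1.0, 1.0]:           # repeated sensory evidence
    model = update_model(model, sample)  # model is tweaked toward the input

print(round(model, 3))                   # 0.875: converging on the evidence
print(tag_as_real(0.9, 1.0))             # True: matches the senses, felt as real
print(tag_as_real(0.1, 1.0))             # False: inhibited (imagery, not percept)
print(tag_as_real(0.2, 1.0, threshold=0.1))  # True: a lax filter lets it through
```

Lowering the threshold plays the role of the "lax filter" above: internal signals with little sensory support still get tagged as perception, the analogue of a hallucination.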
Thanks for the cognitive psych breakdown! Well described.
I watched a TED talk on this topic, and they mentioned the same things you did. What I got out of it is that the visual function of the brain is a constantly improving model. It takes in input and compares it to the version of the model that's currently in use, and if things have changed, the model is updated. In the video they also talked about why psychedelics cause hallucinations. Basically, they explained that lysergic acid diethylamide (LSD) and psilocybin disrupt the model so it can't make the right connections anymore. You're still taking in input, but it's either being changed from the point of intake on the path to the processing unit, or the drug is changing how that unit processes and interprets the input.
This line, though, really blew my mind and helped me make a connection:
Colors are not real either; they are something the brain makes up.
I just realized that we don't actually "see" colors. Our brain perceives differences in the light coming from different objects and uniquely tags each difference with a perceived color. I was thinking, however, that this differentiation the brain performs would have to map onto a physical phenomenon. The color of an object corresponds to the wavelengths of light it reflects rather than absorbs. And even this absorption maps onto a deeper physical phenomenon: the atoms and molecules of each object resonate at different natural frequencies and thus absorb different parts of the light's spectrum. So it just clicked that the brain essentially perceives, through the reflected light, the natural resonant frequencies of each object's material, and assigns a unique value (a color) to each.
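That last step, assigning a categorical label to a physical wavelength, can be sketched in a few lines. The band boundaries below are the usual approximate conventions for the visible spectrum, not values from the discussion above, and the perceptual reality (metamerism, context effects like the dress) is far messier than a lookup table:

```python
def wavelength_to_hue(nm):
    """Map a wavelength of light (in nanometers) to a rough hue category.
    Band boundaries are approximate conventions, not physical constants."""
    bands = [
        (380, 450, "violet"),
        (450, 495, "blue"),
        (495, 570, "green"),
        (570, 590, "yellow"),
        (590, 620, "orange"),
        (620, 750, "red"),
    ]
    for lo, hi, name in bands:
        if lo <= nm < hi:
            return name
    return "outside the visible range"

print(wavelength_to_hue(532))   # green (a typical laser-pointer wavelength)
print(wavelength_to_hue(650))   # red
print(wavelength_to_hue(1000))  # outside the visible range (infrared)
```

The label "green" lives only in the lookup, which is roughly the point being made: the physical quantity is a wavelength, and the color is the tag the brain attaches to it.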
I'm interested in the filter and hallucinations, in particular the sense of proprioception. My proprioception gets messed up when I'm falling asleep, have a fever, or have a panic attack. It can feel, for example, like my limbs are jumbled (arms connected to my hips, say), like they are too large, small, or light, or like they are in a completely different location from where they actually are (such as feeling that my legs are pointed straight down in bed when they're actually bent at the knee). I don't see it, I feel it, and the feeling is proprioceptive, not something like pressure against my skin. I'm not worried about it, as it's always been this way, but I've always wondered what's going on there. If it's a filtering issue, then what happens to your filter during a panic attack? And why would proprioception require a filter?
To build on this regarding psychedelics and hallucinations: much of the higher order visual representations (ie, shapes, objects, faces, etc) are stored endogenously in higher order/accessory visual cortical regions, which then send those representations back to primary visual cortex. So, in primary visual cortex there is always a kind of balance between external sensory stimuli and corresponding endogenous visual representations/symbols. The serotonin system (via 5HT2x receptors) is apparently important for modulating the 'gain' of this endogenous higher order pathway relative to the external sensory pathway, and a hypothesized mechanism of visual psychedelic hallucinations is amplification of this endogenous feedback pathway via stimulation of 5HT2a receptors (Schartner and Timmerman 2020). Which explains why smaller doses of psychedelics result in amplification of certain features of objects, and larger doses result in progressively more generation of features that don't actually exist, because they're actually being completely generated in the higher order visual pathway.
Here's a question for you: I have binocular diplopia, which is to say, my eyeballs don't point in the same direction. I have corrective prismatic lenses that adjust the incoming signal to prevent double vision. I know that what I'm seeing when I see double isn't real, and my brain knows it isn't real. Why do I still see a doubled image, even when I obviously know it's not what I should be seeing and not an accurate reflection of what's in front of me?
Really good explanation. I'm currently taking a mental health course in nursing school, where we talk a lot about different types of hallucinations: visual, auditory, tactile, even olfactory. The brain is an amazing thing. But yes, if you are having visual hallucinations, you are essentially endogenously activating the occipital region of the brain, and your brain is therefore telling you that your eyes are seeing something when, in actuality, they are not.
I enjoyed reading this, thank you for posting it.
I have always been fascinated by these concepts in the context of psychedelic drugs: for example, the complex fractal patterns that appear on walls, the intricate geometric shapes that objects rearrange themselves into, or even the fluffy pink cotton-candy clouds that fill the room, obscuring everything. Back in my own days as something of a 'psychonaut', I experienced many things that truly showcased the amazing power of the human brain. I would love to understand the exact process through which these indescribably complex hallucinations are experienced.
So is this kind of a long way of saying that the experience of seeing real things and seeing illusions is essentially the same thing? As far as we can tell, the brain processes are the same?
I'm curious whether you have an opinion on, or have heard about, Donald Hoffman and his team's evolutionary game-theoretic simulations. He claims to have proven a theorem saying that our senses destroy information about the true structure of reality. The basic claim is that natural selection tuned our senses to fitness payoffs (feeding, fighting, fleeing, and mating: the four F's) and that fitness-payoff functions do not preserve any information about the structure of reality. He basically thinks that reality is a 3D user interface designed to give us information relevant to fitness, and that objective reality is actually nothing like space-time and objects.
Good job. A well written response. Thank you.
That was interesting. Thanks!
Thank you for this! You've actually answered a question I've been asking since I was in the single digits: how do colors look outside of the human psyche? It's good to know that they really don't look like anything, that they're just made up. Thanks!
Thank you for the insight, it's very interesting and educational. How about other senses, do hearing, smell and touch work in the same way?
Colors are objectively real, or do you not know how photons work?
What you explain is in line with current neuroscience, but I want to clarify: the difference is not the direct source our experiences come from. All of our perceptions come from the same source, the model of the world our brain creates. We never see what's "actually" there; we just see a model our brain creates to be as accurate a representation of the world as it can. The difference is in which parts of the model the consciousness gets access to and what the brain "tells" the consciousness about them.
When we see something visually, the brain corrects the model using the sensory information, and the consciousness experiences the corrected model, labeled as such. We can be somewhat sure it corresponds to reality as well, because what we see agrees with the information our other senses bring in, and with other people's models.
This Robot Messes with Your Brain Until You Feel a “Ghostly Presence”
For most of us, it's an uncomfortable tingling sensation: that occasional, disturbing feeling that someone is behind us, watching. But for millions of people around the world who suffer from visual and auditory hallucinations, this minor annoyance develops into a frequent torment.
Feeling of Presence, or FoP, is the disconcerting notion that someone else is hovering nearby, walking alongside you or even touching you. It's the stuff of ghost stories, but also a real symptom of several neurologic conditions, including schizophrenia and Alzheimer's disease. Scientists know so little about the underlying causes of FoP that long-term treatments and cures remain elusive.
Now, researchers are chipping away at the neurobiology behind that uncanny feeling. In a paper published November 6 in Current Biology, a team of scientists described how they used a custom-built robot to induce an eerie Feeling of Presence in healthy participants. Their findings confirm that sensorimotor conflict, a neurologic imbalance between what the mind perceives and what the body feels, lies at the root of some FoP illusions.
"You really need a sensorimotor mismatch for Feeling of Presence," says Giulio Rognini, a doctoral candidate at the Ecole Polytechnique Federale de Lausanne in Switzerland and coauthor on the paper. "This asynchronous condition makes the subjects more prone to that feeling of somebody behind them."
The scientists began by examining 12 patients who had reported Feelings of Presence in the past. Virtually all of the subjects described similar hallucinations: the distinct feeling that somebody was directly behind them and uncomfortably close. Brain scans also revealed lesions dotting the subjects' frontoparietal cortices, an area of the brain associated with awareness of "self" and the integration of sensorimotor signals.
Disturbing hallucinations followed them like shadows, at times. "When the patient was standing, the presence was standing," Rognini says. "When the patient was sitting, the presence was sitting. When the patient was moving, the presence was moving."
Based on these common brain lesions and experiences, the researchers suspected that it was not just any disharmony between movement and sensation that might trigger FoP, but a specific disharmony, precisely coordinated with the body's movements. To test this theory in healthy participants, they designed a robot that could precisely match each subject's motions while still delivering a confusing, mismatched sensation.
When a healthy, blindfolded subject would reach forward, the robot would copy his or her exact motion and tap the participant from behind. If the robot-human interaction was perfectly synchronized, the participants simply reported feeling like they were reaching forward and touching their own backs: a disturbing, but not hallucinatory, sensation.
But when the scientists delayed the robotic reaction even slightly, by half a second, the participants became disoriented. Many said they felt like someone else was touching their backs, and estimated that the mostly empty room was full of people. Some added that they felt as though they were drifting backward, toward the presence. Two participants were so disturbed that they asked to stop the experiment.
"What we are doing is manipulating those sensorimotor signals," Rognini says. "We found that FoP is only experienced when there is a delay between what participants do and a feeling on their backs."
Rognini suspects that the findings may have broad implications for neurologic and psychiatric patients who suffer from auditory and visual hallucinations. "If you are able to perturb the system, to induce a Feeling of Presence, maybe you could also tune the sensorimotor system to down-regulate those symptoms," he says.
Nonetheless, there is a major difference between causing a hallucination and curing it. "Inducing a phenomenon is relatively easy; to control it, and stop it temporarily by electrical stimulation of the brain, is also relatively easy," says Sophia Frangou, chief of the Psychosis Research Program at Mount Sinai's Icahn School of Medicine. "The problem we have is making sure that this phenomenon doesn't return."
Still, Frangou was impressed by the researchers' willingness to explore the connection between structure and function in the brain, and to apply that to studying the practical symptoms of psychosis. "It's really an innovative idea in probing the brain in ways that can be meaningful," she says. "In that respect, I can see an immediate relevance."
Judith Ford, a neuroscientist at the University of California, San Francisco, who specializes in schizophrenia, added that the study might help researchers pin down the mechanism behind Feelings of Presence. "This kind of study is essential to our efforts to understand the neurobiological basis of such symptoms," she said in a prepared statement.
For Rognini and his team, the next step will be attempting to design a robot that might work with schizophrenic patients to help them distinguish between their own actions and someone else's actions.
"That would be the dream," he says. "To, by some robotic simulation, down-regulate psychotic symptoms."
Imagination, reality flow in opposite directions in the brain
As real as that daydream may seem, its path through your brain runs opposite reality.
Aiming to discern discrete neural circuits, researchers at the University of Wisconsin-Madison have tracked electrical activity in the brains of people who alternately imagined scenes or watched videos.
"A really important problem in brain research is understanding how different parts of the brain are functionally connected. What areas are interacting? What is the direction of communication?" says Barry Van Veen, a UW-Madison professor of electrical and computer engineering. "We know that the brain does not function as a set of independent areas, but as a network of specialized areas that collaborate."
Van Veen, along with Giulio Tononi, a UW-Madison psychiatry professor and neuroscientist, Daniela Dentico, a scientist at UW-Madison's Waisman Center, and collaborators from the University of Liege in Belgium, published results recently in the journal NeuroImage. Their work could lead to the development of new tools to help Tononi untangle what happens in the brain during sleep and dreaming, while Van Veen hopes to apply the study's new methods to understand how the brain uses networks to encode short-term memory.
During imagination, the researchers found an increase in the flow of information from the parietal lobe of the brain to the occipital lobe -- from a higher-order region that combines inputs from several of the senses out to a lower-order region.
In contrast, visual information taken in by the eyes tends to flow from the occipital lobe -- which makes up much of the brain's visual cortex -- "up" to the parietal lobe.
"There seems to be a lot in our brains and animal brains that is directional, that neural signals move in a particular direction, then stop, and start somewhere else," says Van Veen. "I think this is really a new theme that had not been explored."
The researchers approached the study as an opportunity to test the power of electroencephalography (EEG) -- which uses sensors on the scalp to measure underlying electrical activity -- to discriminate between different parts of the brain's network.
Brains are rarely quiet, though, and EEG tends to record plenty of activity not necessarily related to a particular process researchers want to study.
To zero in on a set of target circuits, the researchers asked their subjects to watch short video clips before trying to replay the action from memory in their heads. Others were asked to imagine traveling on a magic bicycle -- focusing on the details of shapes, colors and textures -- before watching a short video of silent nature scenes.
Using an algorithm Van Veen developed to parse the detailed EEG data, the researchers were able to compile strong evidence of the directional flow of information.
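Van Veen's actual algorithm is not described in the article, but the general idea of inferring directional flow can be sketched with a simple Granger-style comparison on simulated signals: if the past of signal X improves the prediction of signal Y beyond what Y's own past provides, information is flowing from X to Y. The signal names and the simulated coupling below are invented for illustration only:

```python
import numpy as np

def prediction_error(target, predictors):
    """Residual variance of a least-squares fit of target on predictors."""
    coeffs, *_ = np.linalg.lstsq(predictors, target, rcond=None)
    return np.var(target - predictors @ coeffs)

def granger_gain(x, y, lag=1):
    """How much past x improves one-step prediction of y (an x -> y flow score)."""
    y_now, y_past, x_past = y[lag:], y[:-lag], x[:-lag]
    own = prediction_error(y_now, np.column_stack([y_past, np.ones_like(y_past)]))
    both = prediction_error(
        y_now, np.column_stack([y_past, x_past, np.ones_like(y_past)])
    )
    return own - both  # positive gain suggests x drives y

rng = np.random.default_rng(0)
occipital = rng.normal(size=2000)  # simulated 'driver' signal
parietal = 0.8 * np.roll(occipital, 1) + 0.2 * rng.normal(size=2000)
parietal[0] = 0.0  # discard the sample wrapped around by np.roll

forward = granger_gain(occipital, parietal)   # occipital -> parietal
backward = granger_gain(parietal, occipital)  # parietal -> occipital
print(forward > backward)  # True: the estimate recovers the simulated direction
```

Real EEG analysis is far harder (volume conduction, many channels, nonstationarity), which is why validating such methods, as the study above does, matters.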
"We were very interested in seeing if our signal-processing methods were sensitive enough to discriminate between these conditions," says Van Veen, whose work is supported by the National Institute of Biomedical Imaging and Bioengineering. "These types of demonstrations are important for gaining confidence in new tools."
Landmark Study Identifies Key Brain Difference In Autism
Neuroscientists at Harvard and MIT have identified, for the first time, a link between the activity of the neurotransmitter GABA and symptoms of autism -- a finding that may pave the way for new methods of treating and diagnosing the disorder.
“This is the first connection in humans between a neurotransmitter in the brain and an autistic behavioral symptom,” Caroline Robertson, a postdoc at MIT’s McGovern Institute for Brain Research and the study's lead author, said in a statement.
The role of the GABA neurotransmitter is to inhibit brain cells from firing in response to signals received from the external environment -- or as Robertson told The Huffington Post, to curb "runaway excitation" in the brain.
"GABA is responsible for signaling that neurons should turn off, or stop firing," Robertson told HuffPost. "It tends to come into play . when information is being transmitted and it needs to be shut down or filtered out."
Scientists have speculated that a lack of GABA inhibition to overexcited neurons could be the underlying cause of the hypersensitivity to sensory input seen in individuals with autism.
"It's necessary to filter out signals in the external world that aren't relevant to the task at hand," Robertson said. "GABA helps us in this kind of inhibition."
Hypersensitivity to one's external environment makes it difficult for individuals with autism to tune out distracting sounds and sensations, and can make them feel overwhelmed in loud or highly stimulating situations. For instance, some children with autism tend to be easily distracted by sensations like the feel of an itchy sweater or by ambient noise in the background of a conversation. Hypersensitivity also plays a role in the main symptoms of the disorder, including impaired social skills, communication difficulties and repetitive behaviors.
So, it may be that when GABA doesn't do its job, it's more difficult for the brain to tune out environmental distractions.
Scientists had suspected this, but had never tested the hypothesis on humans. Previous studies had linked reduced GABA activity with autism-like behaviors in animals, but no such correlation had been established in people.
For the study, the researchers asked a group of participants -- half of whom had autism, and half of whom did not -- to complete a visual task that required brain inhibition. Completing the task hinges on the ability to switch between visual input from the right and left eyes.
The results showed that adults with autism switched back and forth only half as much as those without autism, and they were significantly less able to suppress one image in order to focus on the other.
While the participants completed the task, the researchers measured GABA activity in their brains. Among non-autistic participants, those with higher GABA levels were better able to suppress the non-dominant image. But among those with autism, there was no relationship between performance on the task and GABA levels -- suggesting that in the case of autism, GABA is "not doing its job," Robertson said.
"It's not as simple as GABA is missing in the autistic brain," Robertson explained. "It's not in lower concentration, it's just not relating to visual perception."
The researchers aren't yet sure of the cause for this dysfunction. "A lot more work needs to be done," she said.
More research is needed to determine whether increasing GABA activity could improve symptoms of autism, but the findings are a promising start towards improving early diagnosis, treatment and perhaps even prevention of autism. In addition to opening up the possibility of new drugs that target GABA pathways, clinicians might also one day be able to examine GABA activity in early screenings for autism.
"It'll be a longer story than just, Aha! We'll make some GABA-enhancing drugs and cure autism," Robertson said. "But it does point to a pathway that seems to be dysfunctional in the autistic brain."
Other recent brain-imaging studies have found differences in functional connectivity between autistic and non-autistic brains, and have also linked impaired brain activity with the inability to regulate emotions in individuals with autism.
The findings were published online Dec. 17 in the journal Current Biology.
Imaging of connectivity in the synaesthetic brain
In the 1880s, Francis Galton described a condition in which “persons…almost invariably think of numerals in visual imagery.” This “peculiar habit of mind” that Galton was describing is today called synaesthesia, a condition in which stimuli of one sensory modality elicit sensations in another of the senses.
There are several different kinds of synaesthesia, which is now known to be far more common than was previously thought. Galton was describing a specific type of the condition, called grapheme-colour synaesthesia, in which printed numbers or letters elicit the sensation of specific colours. The Nobel Prize-winning physicist Richard Feynman was a grapheme-colour synaesthete: he reported seeing equations in colour. But there are other forms of the condition in which musical tones elicit the experience of colours. The expressionist artist Wassily Kandinsky, for example, was a tone-colour synaesthete in whom musical tones elicited specific colours; he used his synaesthesia to inform the artistic process, trying to capture on canvas the visual equivalent of a symphony.
There are a number of theories which seek to explain synaesthesia in terms of neurobiological mechanisms. One of them holds that synaesthetes have unusual connections between different sensory regions of the cerebral cortex, perhaps because of a failure to prune improper, under-used or redundant synaptic connections during development of the nervous system. Thus, stimuli entering one sensory system will generate activity in another sensory system, and activity in the latter system will also evoke sensations in the synaesthete, despite an absence of the appropriate stimuli for that modality.
According to the other theory, the differences between the brains of synaesthetes and non-synaesthetes are functional rather than anatomical. This theory holds that synaesthesia occurs because of impaired inhibition between the cortical circuits involved, such that there is abnormal feedback to a region of the brain involved in processing colour information (area V4 in the inferior temporal gyrus) from a region at the junction of the temporal, parietal and occipital lobes that processes information from multiple sensory modalities. Thus, disinhibition of the feedback to area V4 leads to inappropriate perceptions of colour.
Previous studies have provided indirect support for the first theory, and suggest a mechanism underlying grapheme-colour synaesthesia. Neuroimaging has shown that two regions in the fusiform gyrus of the temporal lobe become active when grapheme-colour synaesthetes perceive letters that evoke sensations of colour: area V4 in the inferior temporal cortex and the region immediately anterior to (in front of) it, which is known to be involved in the perception of word shape. This co-activation of area V4 and the adjacent region suggests that an inappropriate interaction between these two parts of the brain may underlie the extraordinary sensory experiences that occur in grapheme-colour synaesthesia.
Now, Romke Rouw and Steven Scholte of the University of Amsterdam provide direct evidence for the first hypothesis. They used a technique called diffusion tensor imaging (DTI) to compare the connectivity of the brain in grapheme-colour synaesthetes and in non-synaesthetes while they viewed letters and numbers that evoked sensations of colour. DTI is a type of magnetic resonance imaging (MRI) that measures the diffusion of water molecules. In the brain, water diffuses randomly, but tends to diffuse more easily along axons that are wrapped in myelin, the fatty substance that insulates nerve fibres. Diffusion tensor imaging can therefore be used to infer the size and direction of the bundles (or “fascicles”) of white matter tracts that connect different regions of the brain. The Dutch researchers show that synaesthetes have more connections between the two adjacent areas in the fusiform gyrus than non-synaesthetes. They report their findings in the June issue of Nature Neuroscience.
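The anisotropy idea can be made concrete with the scalar most commonly derived from the diffusion tensor, fractional anisotropy (FA). This is a minimal sketch using the standard FA formula, not the authors' actual analysis pipeline; the tensor values below are illustrative, not measured:

```python
import numpy as np

def fractional_anisotropy(tensor):
    """FA of a 3x3 diffusion tensor: sqrt(3/2) * ||lam - mean(lam)|| / ||lam||,
    where lam are the tensor's eigenvalues. FA is 0 for perfectly isotropic
    diffusion and approaches 1 when diffusion is confined to a single axis,
    as along a tightly packed, myelinated fibre bundle."""
    lam = np.linalg.eigvalsh(tensor)
    return float(np.sqrt(1.5) * np.linalg.norm(lam - lam.mean())
                 / np.linalg.norm(lam))

# Isotropic voxel: water diffuses equally in every direction.
fa_iso = fractional_anisotropy(np.eye(3) * 1e-3)
# Anisotropic voxel: diffusion is several times easier along one axis,
# as it would be along an axon bundle.
fa_aniso = fractional_anisotropy(np.diag([1.7e-3, 0.2e-3, 0.2e-3]))
```

Voxels lying within coherent white-matter tracts show high FA, which is what lets DTI trace the fascicles connecting, for example, the two fusiform regions discussed above.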
As well as demonstrating these differences between synaesthetes and non-synaesthetes, the authors show that there are also differences in connectivity between synaesthetes who differ in the intensity of their sense-mixing experiences. Some grapheme-colour synaesthetes (called “projectors”) actually see the colours associated with the letters or numbers, while others (called “associators”) only experience mental representations of the colours. Rouw and Scholte show that projectors have more connections between area V4 and the fusiform gyrus than associators. However, they are unsure whether this increased connectivity between the two regions of the fusiform gyrus is the result of wider axons, increased myelination, or a greater number of axons.
For a more detailed discussion of this study, and of synaesthesia in general, see this post at Madam Fathom.
Rouw, R. & Scholte, H. S. (2007). Increased structural connectivity in grapheme-color synesthesia. Nat. Neurosci. 10: 792-797. [Abstract]
Galton, F. (1881). Visualising numerals. J. Anthrop. Inst. 10: 85-102. [Full text]
Where Is the Supratentorial Area of the Brain?
The supratentorial region of the brain is located in the upper half of the brain. It is composed of the cerebrum and the diencephalon. The region of the brain below the supratentorial region is called the infratentorial region, which is composed of the cerebellum and brainstem.
One of the primary structures of the supratentorial region, the cerebrum, is divided into two hemispheres. These hemispheres are split by fissures into four separate lobes, which are responsible for different tasks in the brain. The frontal lobe is responsible for logic and planning, and a small subdivision of the frontal lobe, called Broca's area, is responsible for speech production. The frontal lobe is also associated with an individual's intelligence and personality.
The parietal lobes interpret physical sensations, such as temperature and pressure, and also allow humans to perform arithmetic. The temporal lobes help with understanding language and sound, as well as short-term memory. The occipital lobe contains the visual center of the brain. The right occipital lobe receives visual data from the left visual field of both eyes, while the left lobe interprets data from the right visual field. The other structure, the diencephalon, is located in the center of the brain and contains three of the brain's most important structures – the thalamus, the hypothalamus and the pituitary gland – which handle tasks such as relaying information to other parts of the brain and controlling hormones related to sexual development, thyroid production and growth.
What visual perception tells us about mind and brain
Recent studies of visual perception have begun to reveal the connection between neuronal activity in the brain and conscious visual experience. Transcranial magnetic stimulation of the human occipital lobe disrupts the normal perception of objects in ways suggesting that important aspects of visual perception are based on activity in early visual cortical areas. Recordings made with microelectrodes in animals suggest that the perception of the lightness and depth of visual surfaces develops through computations performed across multiple brain areas. Activity in earlier areas is more tightly correlated with the physical properties of objects whereas neurons in later areas respond in a manner more similar to visual perception.
Neuroscience research over the past 40 years has revealed that there are roughly 30 different visual areas in the primate brain, and that within these areas there are parallel streams of processing and distinct modules (1, 2). But how is neuronal activity in the different areas related to our conscious visual perception? How can our unitary visual experience be based on neural activity spread across distinct streams of processing in multiple brain areas? The answers to these questions have profound implications for our understanding of the relationship between mind and brain. Whereas earlier pioneering work focused on the delineation of visual areas in the brain and the neurons' basic response properties, recent research attempts to expose the roles different areas play in perception and the extent to which there are hierarchies of visual computations.
Conscious visual experience is thought to be based on activity in visual areas of cerebral cortex, which receive input from the retina. Early cortical structures are organized topographically with regard to the visual world. This topography can be exploited to investigate the role of different visual areas in perception. For example, neuronal activity in visual cortex can be locally blocked by transcranial magnetic stimulation (TMS) and the effect on visual perception in the corresponding portion of the visual field can be assessed. Kamitani and Shimojo (3) briefly (40–80 ms) presented a large grid pattern to human observers, and after a delay of 80–170 ms, a single pulse of TMS was given to the occipital lobe. The TMS caused the observers to perceive a disk-shaped patch of homogeneous color in the visual field on the opposite side from the side of the brain given TMS (TMS-induced scotoma). When the visual stimulus was a grating composed of parallel lines rather than a rectilinear grid, the scotoma was distorted and appeared to be an ellipse with its short axis along the contours. This contour-dependent distortion appeared to reflect long-range interactions between neurons selectively responsive to similar orientations (4). Interestingly, the color perceived inside the scotoma was consistent with that of the background, which was presented after, not before, the grid or grating. Thus there appears to be filling-in backward in time to compensate for the local information blocked by the TMS. This is just one example from a large body of evidence suggesting that neural activity in early visual cortex is necessary for conscious experience of perception, and that neuronal connections and interactions at these levels are reflected in the content of perception.
Perception is actually much more complex than a simple topographical representation of the visual world. Its primary goal is to recover the features of external objects—a process termed unconscious inference by von Helmholtz (5, 6). What we see is actually more than what is imaged on the retina. For example, we perceive a three-dimensional world full of objects despite the fact that there is a simple two-dimensional image on each retina. In general, a particular retinal image may correspond to more than one object. For example, a circular patch of light on the retina could result from viewing a cylinder on end or a round ball from any perspective. Thus perception is inevitably an ambiguity-solving process. The perceptual system generally reaches the most plausible global interpretation of the retinal input by integrating local cues, as will be illustrated in the case of lightness perception next.
Black-and-white photographs make it clear that lightness alone conveys a great deal of information. The perception of lightness is far from a “pixel-by-pixel” representation of the light level on the retina. It is actually strongly influenced by context. Thus a gray piece of paper appears darker if it is surrounded by white than black (Fig. 1A). Although this deviation of lightness perception from physical reality might appear to be a case of a perceptual error, the spatial interactions underlying it may have an important perceptual purpose. We perceive surface lightness to be constant across surprisingly large changes in ambient illumination, a phenomenon called lightness constancy. In this example, as in other cases of perceptual constancy, the lighting and viewing conditions affect the retinal image of objects, and extensive spatial integration and normalization are performed to recover the constant attributes of the objects themselves.
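The normalization idea can be illustrated with a toy calculation. Under a simple patch-to-surround ratio rule (a classic, highly idealized account of lightness constancy, offered here only as an illustration; the cortical computation is far more elaborate), the estimate depends only on the surfaces' reflectances, not on the ambient illumination:

```python
def ratio_lightness(patch_luminance, surround_luminance):
    """Toy lightness estimate: a patch's luminance relative to its surround."""
    return patch_luminance / surround_luminance

# Fractions of incident light reflected by a grey patch and a lighter surround.
patch_reflectance, surround_reflectance = 0.4, 0.8

estimates = []
for illumination in (10.0, 1000.0):  # dim room vs. bright daylight
    patch_lum = patch_reflectance * illumination
    surround_lum = surround_reflectance * illumination
    estimates.append(ratio_lightness(patch_lum, surround_lum))
# Raw luminance changes 100-fold between conditions,
# but the ratio-based estimate is unchanged.
```

The same spatial interaction that keeps the estimate constant across illumination is what makes the identical grey squares in Fig. 1A look different: changing the surround changes the ratio even when the patch itself does not change.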
(A) Lightness induction. The small gray squares are identical but the one surrounded by black appears lighter than the square surrounded by white. (B) The response of a V1 neuron to a lightness induction stimulus. The receptive field of the neuron was centered on a uniform gray square. The luminance of the surrounding area was sinusoidally modulated. The cell's response was synchronized to the surround modulation and correlated with the perceived lightness of the central patch, even though nothing changed within the receptive field. [Reproduced with permission from ref. 14 (Copyright 2001, National Academy of Sciences).]
At what point in the visual pathway from retina to the many cortical visual areas does the neural activity correlate with what we perceive? Do neurons in the retina, primary visual cortex (V1), and higher-level cortical areas contribute to perception equally? Or instead, does perception have a specific locus in the brain? To tackle these questions, Paradiso and coworkers (7, 8) assess the computations neurons perform in different visual areas and the extent to which neural responses correlate with either the physical or perceptual attributes of objects. They found that responses of neurons in the retina and visual thalamus depend on light level but they do not correlate with perceived lightness. These neurons appear to primarily encode information about the location of contours in the visual scene. Only in V1 were cells found that had responses correlated with perceived lightness (Fig. 1B). They also found that the average response of neurons in V1 is lightness constant. Thus the response of the neurons is relatively immune to changes in overall illumination—a property without which lightness would be of little behavioral value. These findings suggest that lightness information is first explicitly represented in visual cortex and that responses correlated with visual perception build in stages across multiple visual areas. The results combined with findings from other labs suggest that early visual processing focuses on the extraction of object contours, secondary processing stages are involved with the computation of lightness and later processing assigns color to objects.
As mentioned previously, the visual system has the difficult task of understanding a complex three-dimensional world from two-dimensional images on each retina. Images of objects at a distance other than at the fixation plane are projected to different relative positions on the two retinas. The relative position difference, called binocular disparity, provides an important cue for the brain's computation of distance. However, there is much more to distance perception than the interpretation of binocular disparity. Consider a retinal image of a cross with crossed disparities (disparities that lead to perception of objects closer than the plane of fixation) added to the ends of the horizontal arms. Because of the disparities, the vertical edges of the horizontal arms can be unambiguously determined as being closer to the observer, whereas the depth of the horizontal edges remains ambiguous because there is no fixed disparity between the two retinal images. Two different three-dimensional objects are equally consistent with the retinal image: a horizontal bar in front of a vertical bar and a cross with horizontal arms bent forward. However, humans and monkeys almost always perceive the former (9, 10). The brain selects one interpretation among the possible surface structures.
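The geometry behind disparity-based distance estimation can be sketched with the standard pinhole triangulation formula (textbook stereo geometry, not a computation described in this article; the numbers below are illustrative):

```python
def depth_from_disparity(disparity, baseline, focal_length):
    """Triangulated distance to a point viewed by two eyes or cameras:
    depth = baseline * focal_length / disparity, with disparity and
    focal length expressed in the same image-plane units."""
    return baseline * focal_length / disparity

# Illustrative values: ~6.5 cm interocular baseline, with focal length
# and disparity given in pixels of a hypothetical image plane.
baseline_m = 0.065
focal_px = 1000.0
depths = [depth_from_disparity(d, baseline_m, focal_px)
          for d in (5.0, 10.0, 20.0)]
# Larger disparity corresponds to a nearer point.
```

This inverse relationship is why disparity is such a powerful cue, but, as the bent-cross example shows, raw disparities still leave the surface interpretation ambiguous; the brain must choose among the three-dimensional scenes consistent with them.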
The inferior temporal cortex (IT) represents the final stage of the visual pathway crucial for object recognition. Neurons in IT respond to shape, color, or texture. Recent studies show that many IT neurons also convey information on disparity (11) and disparity gradients (12). These findings lead to a new view that IT is involved in some aspects of depth perception. Indeed, the activity of some IT neurons encodes information on the relative depth order of surfaces rather than the local absolute disparity cues of the stimulus. For example, a population of IT neurons responds more strongly to a horizontal bar in front of a vertical bar than to a vertical bar in front of a horizontal bar, regardless of whether crossed or uncrossed disparities are added (Fig. 2). Other cells prefer different surface structures. This behavior of IT neurons is in contrast to that of disparity-selective V1 neurons that respond to local absolute disparity (13). Thus, the pathway from V1 to IT transforms information about binocular disparity that is based on the optics of the eye into a perceptually relevant representation of information about surface structure.
(A) The relationship between disparity type and location and surface depth order perceived. Responses of IT neurons to these four stimuli were tested to determine whether their activity correlates with the perceived surface structure or with the type of disparity.
The studies of lightness perception and depth perception lead to a similar conclusion about the relationship between brain activity and conscious visual perception. Rather than being based on neural activity in one special area, visual perception involves progressive computations spread across multiple brain areas. Both early areas, as in the TMS study, and later areas, as in the study of area IT, are involved in perception. The visual system masterfully recovers information about the objects in our environment based partly on processes of integration and normalization and partly on hard-wired probabilities of what objects are most likely to result from particular retinal images.