Information

How are the images of both eyes combined to form one image?


When you view an object with one eye only, say the right eye, you see it from a slightly rightward vantage point, and the opposite with the left eye; but when you open both eyes, the image appears roughly centered between the two.

How does the brain combine both images to form the final image?


The visual system contains several stages in which the information from both eyes is progressively combined and analyzed, starting in the lateral geniculate nucleus.

Even though you perceive just one merged object when both eyes are open, your brain is using both images to calculate the 3D position of that object.

One fun fact: the image that you perceive as centered is actually biased towards one of your eyes, the so-called "dominant eye". You can find out which is yours with a simple experiment: point at a distant object with both eyes open, then close each eye in turn; the eye for which your finger stays lined up with the object is your dominant one.


Arthropod Eye & Vision

Arthropods possess simple as well as compound eyes; the latter evolved within the arthropods and are found in no other group of animals. Insects, which possess both types of eyes, are considered to be among the most successful animals on earth.

VISION IN CRUSTACEA (Prawn) – The Compound Eye

Crustacea includes prawns, crabs, lobsters, shrimps, barnacles, water fleas etc., which possess a pair of compound eyes for vision.

The prawn possesses a pair of large, stalked, hemispherical eyes on the anterior side of the cephalothorax, below the rostrum. Each eye is composed of a large number of independent visual units called ommatidia, which are connected to the optic nerve. An ommatidium is divisible into an outer dioptrical region, which receives and focuses light rays, and an inner sensory region, which perceives light and sends nerve impulses to the brain, where the impulses are analyzed as an image of the object.

The cuticle on the surface is modified into a cornea over the ommatidia, giving the eye the necessary protection while allowing light rays to enter. Below the cornea, a pair of corneagen cells secretes fresh cornea in case of wear and tear. A lens-like crystalline cone is located beneath the corneagen cells and serves to focus light rays inward. The crystalline cone is surrounded by four cone cells, or vitrellae, which nourish the cone.

The next layer consists of elongated, transversely striated sensory structures called rhabdomes. Seven retinal cells surround and encircle each rhabdome, providing it with nutrition and protection. Chromatophores are pigment cells responsible for separating one ommatidium from the next so that they remain independent units. They are located around the cone cells and retinal cells and can shrink or expand to increase or decrease the intensity of light entering the eye.

THE MOSAIC VISION

The compound eye is incapable of distant vision and sharp vision but is efficient at picking up motion and at providing a 360° view, as it is large, globular, and mounted on a movable stalk. Each ommatidium produces an independent image of a small part of the object seen, not of the entire object. All these small images are combined in the brain to form a complete image of the object, made up of small dots, a mosaic of dots; hence this is called mosaic vision. The range of the compound eye is not more than a foot, so no single ommatidium can perceive the entire object. Movement of objects is detected much more efficiently by the compound eye because, as an object passes in front of the eye, the ommatidia switch on and off according to their location relative to the object. This characteristic of the compound eye helps the animal detect the movement of predators and escape before the latter can strike.

Another characteristic feature of the compound eye is its high flicker fusion rate: it perceives action as a series of independent frames rather than as continuous motion. The flicker fusion rate of the compound eye is about 50 frames per second, compared with 12-15 frames per second for the human eye. By perceiving motion in this way, the compound eye helps arthropods escape from predators.

The Apposition Image

This image is perceived in bright light, when pigment cells in the dioptrical and sensory regions spread out and completely separate the ommatidia from one another. The angle of vision of each ommatidium is then only about 1 degree, so only light rays coming directly from the front can enter it; rays arriving at an angle are absorbed by the pigment before they can reach the rhabdomes. The image formed in the brain is a mosaic of dots, each contributed by one ommatidium. Each ommatidium samples only a tiny portion of the total field of vision, and the brain groups these tiny images together into a single image of the object. Since each dot is clearly separated from the others, the result is called a mosaic or apposition image. The sharpness of the image depends on the number of ommatidia and their isolation from one another.
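As a rough illustration of this sampling scheme, here is a toy sketch (not a simulation of real ommatidial optics; the array sizes and the uniform patch grid are assumptions for illustration) in which each ommatidium reports only the average brightness of its own small patch of the scene, producing the dot mosaic described above:

```python
import numpy as np

def apposition_mosaic(scene: np.ndarray, n_ommatidia: int) -> np.ndarray:
    """Toy apposition image: each ommatidium reports only the average
    brightness of its own small patch of the visual field."""
    h, w = scene.shape
    ph, pw = h // n_ommatidia, w // n_ommatidia  # patch seen by one ommatidium
    mosaic = np.zeros((n_ommatidia, n_ommatidia))
    for i in range(n_ommatidia):
        for j in range(n_ommatidia):
            patch = scene[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
            mosaic[i, j] = patch.mean()  # one "dot" of the mosaic
    return mosaic

# A 100x100 toy scene sampled by a 10x10 grid of ommatidia: a finer grid
# (more ommatidia) would give a sharper mosaic, as the text says.
scene = np.random.rand(100, 100)
print(apposition_mosaic(scene, 10).shape)  # (10, 10)
```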

The Superposition Image

This type of vision occurs in dim light in nocturnal arthropods. The pigment cells shrink to allow more light into the eye, so the ommatidia no longer remain optically isolated from one another, and even oblique light rays can strike one or more ommatidia. This causes the adjacent patches of image formed by different ommatidia to overlap. Because overlapping images are formed in the brain, this is called a superposition image; it is brighter but hazy rather than sharp.

VISION IN ARACHNIDA (Scorpion) – The simple eye

Scorpion belongs to the class Arachnida and possesses only simple eyes. It has a pair of large median indirect eyes and three pairs of lateral direct eyes which function in different ways in different situations.

The Median Indirect eyes: The median eyes are large, convex, and covered with thick cuticle that forms the cornea or lens. The hypodermis forms a thick vitreous body that nourishes the lens. The sensory rhabdomes point backwards towards a reflecting layer called the tapetum. The rhabdomes are surrounded by many sensory retinal cells, which transmit nerve impulses to the optic nerve and then to the brain. The median eyes of the scorpion are used for vision at night or in dark places, because the dim light entering the eye is reflected by the tapetum to strike the rhabdomes a second time.

The Lateral direct eyes: The lateral eyes are small, three pairs in number, and located on the lateral sides of the prosoma. Each is covered externally by a biconvex lens formed from transparent cuticle. The epidermis forms a thinner vitreous body under the lens. Inside the eye cup are several rhabdomes, which point directly towards the source of light, as the tapetum is absent in these eyes. Each rhabdome is connected at its posterior end to a sensory retinal cell, which in turn connects to the nerve. The lateral eyes provide vision in the daytime or in bright light.

VISION IN INSECTA (Cockroach) – Simple as well as Compound Eyes

Insects possess one pair of compound eyes and 1-3 simple eyes or ocelli on top of the head. In cockroach the ocelli are rudimentary.

The Insect Compound Eyes: The compound eyes are sessile, convex, brownish-black, kidney-shaped structures on the lateral sides of the head. Each eye contains about 2,000 ommatidia, similar in structure to those described above.

The pigments separating the ommatidia are not retractable in the eyes of the cockroach, since the animal is nocturnal and spends the daytime in dark places, but the eye produces mosaic vision similar to that of crustaceans. Compound eyes are specially adapted to perceive the movements of objects. The insect compound eye is a more advanced structure: the larger number of ommatidia gives the eye sharper vision, and the range of vision is greater in predatory and fast-flying insects.

The Insect Ocelli: Ocelli are simple eyes, more or less similar to the simple eyes of arachnids, and provide the insect with distant vision. Ocelli also give nocturnal vision to night-flying insects, which find their way by keeping themselves at a fixed angle to the moon or stars. By possessing both types of eyes, insects enjoy both types of vision: detection of movement with the compound eyes and distant vision with the simple eyes or ocelli.


Terms and Concepts

  • Cone cell
  • High-acuity vision
  • Fovea
  • Stimulation
  • Complementary colors
  • Primary colors
  • Additive color mixing
  • Subtractive color mixing
  • Cone cell fatigue
  • Afterimage
  • Retina

Questions

  • How many cone cells are there in 1 square mm in the fovea? How many cone cells are in 1 square mm at the edges of the retina?
  • How are cone cells spread over the retina?
  • Cone cells are sometimes described as being sensitive to short-, medium-, or long-wavelength light. Which wavelengths correspond to red, green, and blue light?
  • What causes color blindness?
  • What parts of the brain receive signals from the cone cells?

The rods and cones are the site of transduction of light to a neural signal. Both rods and cones contain photopigments. In vertebrates, the main photopigment, rhodopsin, has two main parts (Figure 17.19): an opsin, which is a membrane protein (in the form of a cluster of α-helices that span the membrane), and retinal, a molecule that absorbs light. When light hits a photoreceptor, it causes a shape change in the retinal, altering its structure from a bent (cis) form of the molecule to its linear (trans) isomer. This isomerization of retinal activates the rhodopsin, starting a cascade of events that ends with the closing of Na+ channels in the membrane of the photoreceptor. Thus, unlike most other sensory neurons (which become depolarized by exposure to a stimulus), visual receptors become hyperpolarized and thus driven away from threshold (Figure 17.20).

Figure 17.19. (a) Rhodopsin, the photoreceptor in vertebrates, has two parts: the trans-membrane protein opsin, and retinal. When light strikes retinal, it changes shape from (b) a cis to a trans form. The signal is passed to a G-protein called transducin, triggering a series of downstream events.

Figure 17.20.
When light strikes rhodopsin, the G-protein transducin is activated, which in turn activates phosphodiesterase. Phosphodiesterase converts cGMP to GMP, thereby closing sodium channels. As a result, the membrane becomes hyperpolarized. The hyperpolarized membrane does not release glutamate to the bipolar cell.
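To make the order of events easier to follow, here is a minimal sketch that lists the cascade from Figure 17.20 as a sequence of steps. It is purely illustrative: the real cascade is a continuous biochemical amplifier, not a list of discrete events, and the step wording simply paraphrases the caption above.

```python
# Illustrative only: the phototransduction cascade of Figure 17.20,
# written out as an ordered list of steps.
CASCADE = [
    "light strikes rhodopsin (retinal isomerizes from cis to trans)",
    "activated rhodopsin activates the G-protein transducin",
    "transducin activates phosphodiesterase",
    "phosphodiesterase converts cGMP to GMP",
    "falling cGMP closes the Na+ channels",
    "the membrane hyperpolarizes",
    "the hyperpolarized cell releases less glutamate to the bipolar cell",
]

for step, event in enumerate(CASCADE, start=1):
    print(f"step {step}: {event}")
```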

The World’s Most Amazing Camera: Part 2

I recently had the opportunity to ride in one of the newer Tesla electric cars. The dashboard had a single touchscreen which displayed a perspective view of the vehicle itself – as if seen by a bird following the vehicle from 300 feet behind at about 100 feet in altitude. The screen also displayed grayscale model representations of all surrounding vehicles, along with the markings on the road, the speed limit, nearby traffic lights, and other driving information. In principle, you could drive the car without ever looking out the window by looking only at the screen. Apparently, cameras surrounding the vehicle feed images into a computer which constructs a 3D virtual model of its environment, which is then displayed on the screen. In other words, it does in a very limited way what your eyes and brain do with much higher fidelity every second of your conscious life.

The ability of the human mind to construct a 3D representation of your environment based on the information sent via the optic nerve from two eyes to the brain is truly remarkable. Each part must function as designed in order for the system to work. Human vision is therefore an irreducibly complex system and cannot have evolved in a neo-Darwinian fashion. The eye was designed by God (Proverbs 20:12; Psalm 94:9).

Anatomy and Physiology

The human eye is a roughly spherical chamber filled with a transparent gel-like substance called vitreous humor. A white outer layer called the sclera surrounds the eyeball except for a small region in the front of the eye, where the transparent cornea bulges out beyond the sphere. Directly behind the cornea is the anterior chamber, which is filled with a clear fluid called aqueous humor. Behind the aqueous humor are the iris and lens. The iris is the colored part of the eye which surrounds the dark pupil, and acts as an aperture adjustment which can rapidly change the size of the pupil. The lens is gel-like and can change its shape so that light from external sources comes to focus on the retina – the light-sensitive surface on the back interior of the eye.

An Overview of the Retina

The retina has two types of light-detecting cells called rods and cones. These names are due to the shape of these cells. Rods are relatively long and shaped like a cylinder. Cones are shorter than rods and are indeed cone-shaped.

There are three different types of cones, each of which is sensitive to a particular range of light frequencies. One type of cone is maximally sensitive to blue light, another to green light, and the third to red light. These three types of cones send electro-chemical signals to surrounding cells, which combine the information into a signal sent to the brain, and we perceive that signal as color. Cones are therefore necessary for our ability to see color. They are primarily what we use when we read. The human eye contains about six million cones.
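As a toy illustration of how three cone types can jointly encode color, the sketch below assigns each cone type a Gaussian response curve. This is only a cartoon: real cone spectral sensitivities are not Gaussian, and the peak wavelengths and bandwidth used here are rough illustrative values, not figures from this article.

```python
import math

# Rough illustrative peak sensitivities (nm) for the three cone types.
PEAKS = {"S (blue)": 420.0, "M (green)": 530.0, "L (red)": 560.0}
WIDTH = 50.0  # arbitrary bandwidth for the toy Gaussian response

def cone_responses(wavelength_nm: float) -> dict:
    """Toy model: each cone type responds according to how close the
    light's wavelength is to that cone type's peak sensitivity."""
    return {
        name: round(math.exp(-((wavelength_nm - peak) / WIDTH) ** 2), 3)
        for name, peak in PEAKS.items()
    }

# Light at ~580 nm excites L and M cones strongly but S cones barely;
# the brain reads that ratio of signals as "yellow".
print(cone_responses(580.0))
```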

Rods cannot distinguish colors because they are all sensitive to the same range of wavelengths. But they have other purposes. Rods can detect shapes and (due to the way they are connected to other cells) motion. Rods are distributed over the retina somewhat more evenly than cones, rather than being concentrated near the center of vision, so rods are very useful in our peripheral vision. Rods also have a greater capacity than cones to detect faint light. Therefore, rods are very useful at night or in other dark conditions. The human eye contains about 120 million rods.

With a manmade camera, you generally need to choose whether you want to zoom-in on something in order to see it in great detail, or to zoom-out in order to capture a more panoramic view. The human eye basically does both simultaneously. But how can it do this without overwhelming the brain with information? The answer has to do with the way rods and cones are distributed.

The cones in particular are highly concentrated near the center of our field of view. In fact, the place on the retina that marks the center of our field of vision is packed with cones and has no rods at all. This location is called the fovea. When you look directly at something, its light falls directly on the fovea. Since there are a great many cones in the fovea, we get a very detailed view of anything directly in the center of our field of view. The ability to see small details is called visual acuity.

On the other hand, we have a lower density of light-sensing cells farther from the fovea. This is why you cannot easily read text unless you are looking directly at it. If you look away from this article, you will find that you can still see the article, but cannot read the words. The visual acuity in our peripheral vision is much lower than near our center of vision. This is how our brain avoids information overload, and yet we still have some visual knowledge of our surroundings. It is an ingenious solution.

Muscles of the Eye

In order to see something in highest detail, we need to rotate both eyes so that its light falls on the fovea of each eye. This is accomplished by six muscles attached to the external surface of the eye. Four of these muscles you can control somewhat directly. They allow you to look left, right, up, down, or any combination. These four are each attached to the annulus of Zinn behind the eye on one end, and the other end is attached to the side of the eye upon which they pull. So, to look left, the muscle on the left side of the eye flexes.

The other two muscles are the superior oblique and inferior oblique. They rotate the eyes clockwise or counterclockwise so that when you tilt your head from side to side, your eyes remain upright. You can see this by looking in a mirror, and noting the positions of the small blood vessels in the sclera (white part of the eye) as you tilt your head sideways. These muscles are also necessary to aid the vertical (the up and down) muscles in the eye since the latter are not attached at an exact 90-degree angle relative to the front of the eye.

The inferior oblique muscle is attached to the lower side of the eye, on the side that is opposite the nose. It wraps underneath the eye toward the nose, and attaches at the nasal orbital wall. When flexed, it rotates the bottom part of the eye toward the nose. The superior oblique muscle is attached to the upper side of the eye also on the side opposite the nose. But its other end attaches – not to the nasal orbital wall – but to the back of the orbit (behind the eye). So how can it rotate the eye properly? In order to pull in the right direction, the muscle is directed through a pulley called the trochlea which is attached to the upper nasal wall. The trochlea redirects the force so that when the muscle is flexed, the top part of the eye is rotated toward the nose!

I challenge any evolutionist to explain how that system could possibly have evolved. The trochlea is useless without the superior oblique muscle, and the superior oblique muscle would not pull in the correct direction apart from the trochlea. Moreover, how did the muscle evolve so that it grows already passing through the trochlea? The eye is truly a marvel of design.

When you want to look directly at something in your field of view, the six muscles of the eye flex or relax in just the right way to make it happen. Within a fraction of a second, both eyes adjust so that the image of the object of interest falls directly on the fovea of each eye. Both eyes are now aimed at exactly the same location. By analogy, imagine holding two pistols, one in your right hand and one in your left hand. Imagine being so good, that you can quick draw both guns and not only hit whatever you want, but both bullets from both guns pass through the same hole! This is essentially what your eyes are doing all the time.

The image that forms in the retina of your left eye is slightly different from the image in your right eye. This is because your eyes are at slightly different locations (separated horizontally by somewhere between 50 and 70 millimeters) and therefore have slightly different views of the scene. The object you are actively looking at will naturally fall directly on the fovea of each eye. But objects surrounding it will appear shifted from one eye to the other.

To see this for yourself, hold your index finger at arm’s length and focus on it. Then close your left eye. Now open your left eye while closing your right eye. As you repeat this, you will notice that the background seems to shift left and right relative to your finger. Now focus on the background and do the same experiment. This time, the background remains stationary while your finger jumps left and right. This effect is called parallax.

Your brain is able to consolidate the information produced by the slightly different images from the two eyes and interpret the result as distance for all objects in the field of view. Whether you focus on your finger or the background, the brain immediately understands that your finger is much closer than the background. For this reason, we instantly know the approximate distance of any nearby objects. We see the world in 3D. This is stereoscopic vision.

Our ability to estimate distances by parallax works best for objects within 10 feet or so, and is still somewhat effective for distances several times as much. But for very distant objects, the difference in the two images is too small to result in an accurate depth perception. Beyond such distances, the brain uses other clues such as apparent size or apparent motion to estimate distance. This all happens without conscious thought, providing you with an instant 3D mental model of your visible surroundings.
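To see why stereoscopic depth fades with distance, a quick calculation helps. This sketch assumes a 63-millimeter eye separation (within the 50-70 millimeter range mentioned above) and computes the vergence angle, the difference in viewing direction between the two eyes, for objects at several distances; the angle shrinks rapidly with distance, so the two images become nearly identical for faraway objects.

```python
import math

IPD_M = 0.063  # assumed interpupillary distance: 63 mm

def vergence_angle_deg(distance_m: float) -> float:
    """Angle between the two eyes' lines of sight when fixating a
    point straight ahead at the given distance."""
    return math.degrees(2 * math.atan((IPD_M / 2) / distance_m))

for d in (0.3, 1.0, 3.0, 10.0, 100.0):
    print(f"{d:6.1f} m -> {vergence_angle_deg(d):8.4f} degrees")
# At 0.3 m the eyes converge by ~12 degrees; at 100 m by ~0.036 degrees,
# far too small a difference to support depth perception on its own.
```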

It is possible to fool our depth perception under certain circumstances. 3D movies use special glasses to send a different image to each eye, thereby replicating the sensation of depth on a flat movie screen. For distances well beyond our stereoscopic ability, other clues must be used and can sometimes be misinterpreted. For example, when the moon is high in the sky and far separated from any other reference object, the brain tends to assume that the moon is actually quite small and therefore relatively close. But when the moon first rises, the brain readily perceives that it is far more distant since it is partially obscured by trees or houses in the distance. The brain rightly concludes that the moon is more distant than those other objects, and therefore must be enormous. Many people have noticed this “moon illusion” when the moon is near the horizon. But it is only an illusion – the angular size of the moon is no bigger than when it is high in the sky.

Upside Down

One other fascinating aspect of human vision concerns the image that forms on the retina. Whenever parallel light rays pass through a convex lens, they form an image of the source at the focal point. However, this image is upside down relative to the original. Indeed, the image that forms on the retina is upside down relative to its source. Yet, we do not see the world as upside down. Why is this?

At no point does the inverted image on the retina get flipped back to its normal, upright situation. The brain simply learns how to interpret the inverted images. The brain has learned that light falling on the lower portion of the retina comes from above. It has become second nature to us, so that we correctly interpret and respond to these inverted images.

What would happen if the images on the retina were right-side-up? Scientists examined this scenario by creating “invertoscopes,” also called “upside-down glasses.” These glasses use prisms to flip all images upside down and reversed left-right. So, when a person wears these glasses, the image of any object that falls on the retina is actually correctly oriented. When an individual first puts on these glasses, the world appears upside down. But scientists have found that after one week of wearing these glasses continuously, the brain will completely adjust and the world appears right-side up! The individual can do anything he or she could do before – even ride a bike. The ability of the human brain to adjust under these circumstances is truly remarkable. When the glasses are finally removed, the brain readjusts (much more quickly) to the inverted images.

Perhaps there is a spiritual application to this strange physiological reality. In our fallen, sinful state, we view the world incorrectly. We perceive man as being the center of reality, the judge of truth and rightness, and with God constantly on trial. But this is upside down. It is the exact opposite of reality. God is the center of reality, the Judge of all truth, and man is simply a creation who will be judged by God. The Bible is like glasses that give us the true view of the universe – the proper way to perceive reality. But the unregenerate mind has become so accustomed to looking at the world upside down, that the correct, biblical worldview seems upside down to him at first.

Sinners have simply gotten so used to viewing the world upside down that it seems normal and right. Biblical truths like “love your enemy” or “the meek shall inherit the earth” seem backwards to an upside-down heart. But the Lord can turn our hearts around to love Him and know the truth. We require the help of the Holy Spirit to transform our mind, to use it properly, and to see the world as it really is.


Difference Between Compound Eyes and Simple Eyes

Simple eyes and compound eyes are the two main types of eyes found in animals, and there are many differences between them. In order to understand whether a particular eye is a compound eye or a simple eye, it is worthwhile to go through some information about both. This article summarizes the two types and then reviews the important differences between simple and compound eyes.

What are Simple Eyes?

Although the name suggests some simplicity, simple eyes are simple only in structure, not in photosensitivity or accuracy. Simple eyes are found in many phyla of the animal kingdom, including both vertebrates and invertebrates. There are a few types of simple eyes, known as pit eyes, spherical lens eyes, multiple lenses, refractive corneas, and reflector eyes. Pit eyes are the most primitive of all types of eyes: a small depression with a collection of photoreceptive cells. It is notable that pit vipers have pit organs to sense the infrared radiation of their prey animals. Spherical lens eyes have a lens in the structure, but the focal point usually lies behind the retina, producing a blurred image that serves to detect the intensity of light. Multiple-lens simple eyes are an interesting type with more than one lens in the eye, which enables them to magnify the picture and obtain a sharp, focused image; certain predators such as spiders and eagles are cited as examples of this lens arrangement. Eyes with a refractive cornea have an outer layer of light-transmitting material, and the lens is usually not spherical; its shape can be changed according to the required focal length. Reflector eyes are a remarkable phenomenon that provides a common communication platform for other organisms as well: the image formed in one's eye is reflected onto another place so that other organisms can see it. All these types of simple eyes gather information about light to sustain the body. Despite the name, all the higher vertebrates, including humans, have simple eyes.

What are Compound Eyes?

Compound eyes are formed by repeating the same basic units of photoreceptors, called ommatidia. An ommatidium mainly consists of a lens and photoreceptive cells, and pigment cells separate each ommatidium from its neighbours. Compound eyes are capable of detecting motion as well as the polarisation of sunlight, in addition to receiving light. Insects, especially honeybees, have the ability to tell the time of day using the polarisation of sunlight detected by their compound eyes. There are a few types of compound eyes, known as apposition, superposition, parabolic suspension, and some other kinds. In apposition eyes, the information about the images formed through the ommatidia is taken to the brain, where the whole image is combined in order to recognize the object. Superposition eyes form the image by reflecting or refracting the received light via mirrors or lenses, and then the image data are transferred to the brain to recognize the object. Parabolic suspension eyes use the principles of both apposition and superposition eyes. Most annelids, arthropods, and molluscs have compound eyes, and they can see colours as well.

What is the difference between Simple Eyes and Compound Eyes?

• Compound eyes are made up of clusters of ommatidia, whereas simple eyes consist of a single unit.

• Compound eyes are found in most arthropods, annelids, and molluscs, while simple eyes are found in many types of organisms, including most of the higher vertebrates.

• Compound eyes can cover a wider angle of view than simple eyes.

• The types of simple eyes are more diversified than those of compound eyes.

• The polarisation of sunlight can be detected by compound eyes, but not by simple eyes.


See Change: 2 Eyes, 1 Picture

Introduction
Is catching, juggling or heading a ball challenging for you? If you've ever tried threading a needle, did it end in frustration? Have you ever thought of blaming your eyes? Two eyes that work together help you estimate how far a ball is or where the thread is with respect to the needle. This "working together" of the eyes actually happens in the brain. The brain receives two images (one from each eye), processes them together with the other information received and returns one image, resulting in what we "see". Are you curious about how depth perception enters the picture? "See" for yourself with this activity!

Background
Humans have two eyes, but we only see one image. We use our eyes in synergy (together) to gather information about our surroundings. Binocular (or two-eyed) vision has several advantages, one of which is the ability to see the world in three dimensions. We can see depth and distance because our eyes are located at two different points (about 7.5 centimeters apart) on our heads. Each eye looks at an item from a slightly different angle and registers a slightly different image on its retina (the back of the eye). The two images are sent to the brain where the information is processed. In a fraction of a second our brain brings one three-dimensional image to our awareness. The three-dimensional aspect of the image allows us to perceive width, length, depth and distance between objects. Scientists refer to this as binocular stereopsis.

Artists use binocular stereopsis to create 3-D films and images. They show each eye a slightly different image. The two images show the objects as seen from slightly different angles, as they would be seen in real life. For some people it is easy to fuse two slightly different images presented to each eye; others find it harder. Their depth perception might rely more on other clues. They might find less pleasure in 3-D pictures, movies or games, and certain tasks, such as threading a needle or playing ball, might be more difficult.

Materials

  • Three different-colored markers or highlighters that can easily stand vertically
  • Ruler
  • Table
  • Camera
  • Flat work space, such as a tabletop

Preparation

  • Place the first marker standing vertically, 30 centimeters from the edge of the table.
  • Place the next marker 30 centimeters behind it (60 centimeters from the edge) and place the last one 30 centimeters from the second marker (90 centimeters from the edge). (If your table is not long enough, you can place your three markers at 15, 30, and 45 centimeters from the edge of the table.)
  • Position yourself at the edge of the table and bend your knees so your eye level is at the level of the markers.
  • Close or cover your right eye and look only with the left eye. Shift your head so the three markers line up, one right behind the other. Is it possible to hide the second and third markers behind the first one?
  • Keep the position of your head the same, but now close or cover the left eye and look only with the right eye. What do you see? In your image, are the second and third markers still hidden behind the first one? Why do you think this happens?
  • Keeping your left eye covered, reposition yourself so the second and third markers are hidden by the first one. Again switch which eye you are looking with. Did it happen again? This time observe some details. In your image, is the second marker to the right or the left of your first marker? What about the last marker? How far apart are the markers in your image? Do you see space between the first and the second markers? Do you see as much space between the second and the third markers (those that are farther away from you)? Are some markers still partially overlapping?
  • Open or uncover your right eye and look with both eyes. What do you see? Are any markers hidden by closer markers? Try to reposition your face so, in your image, the closer marker hides the more distant markers. Is it easy? Is it even possible?
  • Use a camera to study this in more detail. Position the camera so the first marker hides the second and third marker. The tops of the markers can stick out. Take a picture.
  • Shift your camera about 7.5 centimeters horizontally to the side, and take a second picture. Remember whether you shifted to the right or to the left. If you shift right, the first picture represents what the left eye sees. If you shift left, the first picture represents what your right eye sees.
  • Look at the pictures. These images reflect what your right and left eye register. (Human eyes are about 7.5 centimeters apart.) Are both pictures identical? In what way do they differ?
  • The brain uses the different location of objects in the images received by the right and the left eyes to create depth perception. Can you find some rules the brain uses? Which marker do you think shifts most with respect to a distant point or with respect to the last marker: the closer one or the one that is farther away?
  • In the first picture the three markers are lined up; in the second they are not. Measure how much the second marker is shifted with respect to the last marker. Now measure how much the first marker, which was positioned closer to the camera, is shifted with respect to the last marker. Does the shift increase or decrease when objects are placed farther away from the observer? (A worked sketch of this geometry appears after this list.)
  • Look at your second picture. Is your second marker shifted to the right or the left with respect to the last marker? What about the first marker? Is this direction identical to the direction in which you shifted your camera?
  • Can you imagine how the picture would look if you shifted the camera by about 7.5 centimeters to the other side? You can repeat part of the procedure where you take the pictures but now shift your camera to the other side to find out.
  • Extra: Study other parameters that might influence the shift. Do the markers shift more or less with respect to one another if you (as observer) position yourself farther away from the set of markers? What happens if you gaze at a point far in the background? (That is, compare the shift with respect to a point in the background.) Pictures can help you perform a more detailed analysis. A row of equally spaced trees, light poles or other objects along a straight street can also help you perform a more elaborate investigation.
  • Extra: Imagine what would happen if our eyes were separated by a longer horizontal distance. Do you think the horizontal shift would be larger or smaller? What do you think would happen if our eyes were shifted vertically instead of horizontally? Take pictures where you position the camera at slightly different locations in space to find out. Can you find some advantages and disadvantages to having eyes that are separated as they currently are in humans?
  • Extra: Adequate depth perception facilitates tasks such as playing ball, threading a needle and driving. To experience how difficult playing ball and threading a needle are with monocular (or one-eyed) vision, cover one eye and perform the task. Be careful, though: this is difficult! Start by throwing a ball softly. Do not perform any dangerous tasks with one eye covered.
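For those who want numbers to compare against their photos, here is a minimal worked sketch of the geometry behind the activity. The values are assumptions taken from the setup above (a 7.5-centimeter sideways shift and markers at 30, 60, and 90 centimeters), and the shifts are computed as viewing angles in a simple pinhole-camera model, not as centimeters on a printed photo.

```python
import math

CAMERA_SHIFT_M = 0.075          # the 7.5 cm sideways shift from the activity
MARKERS_M = [0.30, 0.60, 0.90]  # marker distances from the camera

def apparent_direction_deg(distance_m: float) -> float:
    """Viewing angle to a marker after shifting the camera sideways."""
    return math.degrees(math.atan(CAMERA_SHIFT_M / distance_m))

# Apparent shift of each marker relative to the farthest marker:
reference = apparent_direction_deg(MARKERS_M[-1])
for d in MARKERS_M:
    shift = apparent_direction_deg(d) - reference
    print(f"marker at {d:.2f} m shifts {shift:5.2f} degrees vs. the last marker")
# The nearest marker shifts the most: apparent shift decreases with
# distance, which is the rule the brain exploits for depth.
```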

Observations and results
Did you see how your right eye registers the world differently from your left eye? Did you see how using both eyes created yet a different picture?

When you lined up the markers so your left eye could only see the first one, they were no longer lined up when you looked with the right eye only. Something similar happened when you lined up the markers for your right eye and you switched to a left-eye-only view. This time the markers were shifted to the right in your image. This happened because each eye looked at the row of markers from a slightly different angle.

With both eyes open it was probably very hard or impossible to position yourself so you only could see the first marker. Most people have a hard time fusing the images created by each eye in this particular setup. You might have experienced that you switched between images or had double vision.

The pictures you took with the camera allowed you to compare how much a closer marker shifted with respect to a more distant one. If you performed more tests, you might have discovered that the shift depends on the distance between the objects, the distance between you and the objects, and the point you are gazing at (also called the point of focus).

More to explore
Perception Lecture Notes: Depth, Size and Shape, from Professor David Heeger, Department of Psychology, New York University
Starry Science: Measure Astronomical Distances Using Parallax, from Scientific American.
Sight (Vision), from University of Washington

This activity brought to you in partnership with Science Buddies


Binocular Vision

The term binocular comes from two Latin roots, bini for double, and oculus for eye. [12]

Some animals - usually, but not always, prey animals - have their two eyes positioned on opposite sides of their heads to give the widest possible field of view. Examples include rabbits, buffaloes, and antelopes. In such animals, the eyes often move independently to increase the field of view. Even without moving their eyes, some birds have a 360-degree field of view.

Some other animals - usually, but not always, predatory animals - have their two eyes positioned on the front of their heads, thereby allowing for binocular vision and reducing their field of view in favor of stereopsis. However, front-facing eyes are a highly evolved trait in vertebrates, and there are only three extant groups of vertebrates with truly forward-facing eyes: primates, carnivorous mammals, and birds of prey.

Some predator animals, particularly large ones such as sperm whales and killer whales, have their two eyes positioned on opposite sides of their heads, although it is possible they have some binocular visual field. [13] Other animals that are not necessarily predators, such as fruit bats and a number of primates, also have forward-facing eyes. These are usually animals that need fine depth discrimination or perception; for instance, binocular vision improves the ability to pick a chosen fruit or to find and grasp a particular branch.

The direction of a point relative to the head (the angle between the straight ahead position and the apparent position of the point, from the egocenter) is called visual direction, or version. The angle between the line of sight of the two eyes when fixating a point is called the absolute disparity, binocular parallax, or vergence demand (usually just vergence). The relation between the position of the two eyes, version and vergence is described by Hering's law of visual direction.
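In one common formalization (an assumption here; the passage above gives no symbols), if θL and θR are the directions of the two eyes' lines of sight measured from straight ahead, version is their average and vergence their difference. A minimal sketch:

```python
import math

IPD_M = 0.065  # assumed interpupillary distance

def eye_angles_deg(target_x_m: float, target_z_m: float) -> tuple:
    """Direction of each eye's line of sight (degrees, positive toward the
    right) when fixating a target at lateral offset x and distance z."""
    left = math.degrees(math.atan2(target_x_m + IPD_M / 2, target_z_m))
    right = math.degrees(math.atan2(target_x_m - IPD_M / 2, target_z_m))
    return left, right

theta_l, theta_r = eye_angles_deg(0.1, 1.0)  # target 10 cm right, 1 m ahead
version = (theta_l + theta_r) / 2            # shared direction of gaze
vergence = theta_l - theta_r                 # convergence on the target
print(f"version {version:.2f} deg, vergence {vergence:.2f} deg")
```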

In animals with forward-facing eyes, the eyes usually move together.

Eye movements are either conjunctive (in the same direction), called version eye movements, which are usually described by their type (saccades or smooth pursuit, and also nystagmus and the vestibulo-ocular reflex), or disjunctive (in opposite directions), called vergence eye movements. The relation between version and vergence eye movements in humans (and most animals) is described by Hering's law of equal innervation.

Some animals use both of the above strategies. A starling, for example, has laterally placed eyes to cover a wide field of view, but can also move them together to point to the front so their fields overlap giving stereopsis. A remarkable example is the chameleon, whose eyes appear as if mounted on turrets, each moving independently of the other, up or down, left or right. Nevertheless, the chameleon can bring both of its eyes to bear on a single object when it is hunting, showing vergence and stereopsis.

Binocular summation is the process by which the detection threshold for a stimulus is lower with two eyes than with one. [14] Several outcomes are possible when comparing binocular performance to monocular performance. [14] Neural binocular summation occurs when the binocular response is greater than predicted by probability summation. Probability summation assumes complete independence between the eyes and predicts an improvement ranging between 9-25%. Binocular inhibition occurs when binocular performance is worse than monocular performance; this suggests that a weak eye can interfere with a good eye and degrade combined vision. [14] Binocular summation is maximal when the monocular sensitivities are equal; unequal monocular sensitivities, as in vision disorders such as unilateral cataract and amblyopia, decrease binocular summation. [14] Other factors that can affect binocular summation include spatial frequency, the retinal points stimulated, and temporal separation. [14]
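Here is a quick sketch of the probability-summation baseline mentioned above. It is illustrative only: p is a hypothetical per-eye detection probability, whereas real summation experiments compare thresholds, and the 9-25% figure above refers to measured sensitivity ratios rather than this toy quantity.

```python
def probability_summation(p_one_eye: float) -> float:
    """Two independent eyes: the stimulus is detected unless both miss it."""
    return 1 - (1 - p_one_eye) ** 2

p = 0.5
print(f"one eye: {p:.2f}, two independent eyes: {probability_summation(p):.2f}")
# Binocular performance above this independence baseline is evidence of
# neural summation; performance below monocular levels is inhibition.
```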

Apart from binocular summation, the two eyes can influence each other in at least three ways.

  • Pupil diameter. Light falling in one eye affects the diameter of the pupils in both eyes. One can easily see this by looking at a friend's eye while he or she closes the other: when the other eye is open, the pupil of the first eye is small; when the other eye is closed, the pupil of the first eye is large.
  • Accommodation and vergence. Accommodation is the state of focus of the eye. If one eye is open and the other closed, and one focuses on something close, the accommodation of the closed eye will become the same as that of the open eye. Moreover, the closed eye will tend to converge to point at the object. Accommodation and convergence are linked by a reflex, so that one evokes the other.
  • Interocular transfer. The state of adaptation of one eye can have a small effect on the state of light adaptation of the other. Aftereffects induced through one eye can be measured through the other.

Once the fields of view overlap, there is a potential for confusion between the left and right eye's image of the same object. This can be dealt with in two ways: one image can be suppressed, so that only the other is seen, or the two images can be fused. If two images of a single object are seen, this is known as double vision or diplopia.

Fusion of images (commonly referred to as 'binocular fusion') occurs only in a small volume of visual space around where the eyes are fixating. Running through the fixation point in the horizontal plane is a curved line; objects on this line fall on corresponding retinal points in the two eyes. This line is called the empirical horizontal horopter. There is also an empirical vertical horopter, which is effectively tilted away from the eyes above the fixation point and towards the eyes below the fixation point. The horizontal and vertical horopters mark the centre of the volume of singleness of vision. Within this thin, curved volume, objects slightly nearer and farther than the horopters are still seen as single. The volume is known as Panum's fusional area (presumably called an area because it was measured by Panum only in the horizontal plane). Outside of Panum's fusional area (volume), double vision occurs.

When each eye has its own image of objects, it becomes impossible to align images outside of Panum's fusional area with an image inside the area. [15] This happens when one has to point to a distant object with one's finger. When one looks at one's fingertip, it is single but there are two images of the distant object. When one looks at the distant object it is single but there are two images of one's fingertip. To point successfully, one of the double images has to take precedence and one be ignored or suppressed (termed "eye dominance"). The eye that can both move faster to the object and stay fixated on it is more likely to be termed as the dominant eye. [15]

The overlapping of vision occurs due to the position of the eyes on the head (eyes are located on the front of the head, not on the sides). This overlap allows each eye to view objects with a slightly different viewpoint, and as a result binocular vision provides depth. [16] Stereopsis (from stereo-, meaning "solid" or "three-dimensional", and opsis, meaning "appearance" or "sight") is the impression of depth that is perceived when a scene is viewed with both eyes by someone with normal binocular vision. [16] Binocular viewing of a scene creates two slightly different images of the scene in the two eyes due to the eyes' different positions on the head. These differences, referred to as binocular disparity, provide information that the brain can use to calculate depth in the visual scene, providing a major means of depth perception. [16] There are two aspects of stereopsis: the nature of the stimulus information specifying stereopsis, and the nature of the brain processes responsible for registering that information. [16] The distance between an adult's two eyes is almost always about 6.5 cm, so each eye views a scene from a point shifted by that amount. [16] Retinal disparity is the separation between objects as seen by the left eye and the right eye, and it helps to provide depth perception. [16] Retinal disparity provides relative depth between two objects, but not exact or absolute depth. The closer two objects are to each other in depth, the smaller the retinal disparity; the farther apart they are in depth, the larger the retinal disparity. When objects are at equal distances, the two eyes view them identically and there is zero disparity. [16]
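A compact numeric sketch of the disparity geometry just described (using the 6.5-centimeter eye separation stated above; objects are assumed to lie straight ahead, and the values are purely illustrative):

```python
import math

EYE_SEPARATION_M = 0.065  # ~6.5 cm, as stated in the text

def absolute_disparity_deg(distance_m: float) -> float:
    """Angle subtended at the two eyes by a point straight ahead."""
    return math.degrees(2 * math.atan((EYE_SEPARATION_M / 2) / distance_m))

def relative_disparity_deg(near_m: float, far_m: float) -> float:
    """Retinal disparity between two objects at different depths."""
    return absolute_disparity_deg(near_m) - absolute_disparity_deg(far_m)

print(relative_disparity_deg(0.5, 0.6))  # ~1.2 deg: close together in depth
print(relative_disparity_deg(0.5, 2.0))  # ~5.6 deg: far apart in depth
# Two objects at exactly the same distance give zero relative disparity.
```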

Because the eyes are in different positions on the head, any object away from fixation and off the plane of the horopter has a different visual direction in each eye. Yet when the two monocular images of the object are fused, creating a Cyclopean image, the object has a new visual direction, essentially the average of the two monocular visual directions. This is called allelotropia. [7] The origin of the new visual direction is a point approximately between the two eyes, the so-called cyclopean eye. The position of the cyclopean eye is not usually exactly centered between the eyes, but tends to be closer to the dominant eye.

When very different images are shown to the same retinal regions of the two eyes, perception settles on one for a few moments, then the other, then the first, and so on, for as long as one cares to look. This alternation of perception between the images of the two eyes is called binocular rivalry. [17] Binocular rivalry occurs because humans have a limited capacity to process an image fully at one time. Several factors can influence how long the gaze stays on one of the two images, including context, increased contrast, motion, spatial frequency, and inverted images. [17] Recent studies have even shown that facial expressions can cause longer attention to a particular image: when an emotional facial expression is presented to one eye and a neutral expression to the other, the emotional face dominates the neutral face and can even cause the neutral face not to be seen. [17]

To maintain stereopsis and singleness of vision, the eyes need to be pointed accurately. The position of each eye in its orbit is controlled by six extraocular muscles. Slight differences in the length, insertion position, or strength of the same muscles in the two eyes can lead to a tendency for one eye to drift to a different position in its orbit from the other, especially when one is tired. This is known as phoria. One way to reveal it is with the cover-uncover test. To do this test, look at a cooperative person's eyes. Cover one eye of that person with a card. Have the person look at your fingertip. Move the finger around; this breaks the reflex that normally holds a covered eye in the correct vergence position. Hold your finger steady and then uncover the person's eye. Look at the uncovered eye: you may see it flick quickly from being wall-eyed or cross-eyed to its correct position. If the uncovered eye moved from out to in, the person has esophoria. If it moved from in to out, the person has exophoria. If the eye did not move at all, the person has orthophoria. Most people have some amount of exophoria or esophoria; it is quite normal. If the uncovered eye also moved vertically, the person has hyperphoria (if the eye moved from down to up) or hypophoria (if the eye moved from up to down). Such vertical phorias are quite rare. It is also possible for the covered eye to rotate in its orbit, a condition known as cyclophoria; cyclophorias are rarer than vertical phorias. The cover test may also be used to determine the direction of deviation in cyclophorias. [18]

The cover-uncover test can also be used for more problematic disorders of binocular vision, the tropias. In the cover part of the test, the examiner looks at the first eye as he or she covers the second. If the eye moves from in to out, the person has exotropia. If it moved from out to in, the person has esotropia. People with exotropia or esotropia are wall-eyed or cross-eyed respectively. These are forms of strabismus that can be accompanied by amblyopia. There are numerous definitions of amblyopia. [14] A definition that incorporates all of them defines amblyopia as a unilateral condition in which vision is worse than 20/20 in the absence of any obvious structural or pathologic anomalies, but with one or more of the following conditions occurring before the age of six: amblyogenic anisometropia, constant unilateral esotropia or exotropia, amblyogenic bilateral isometropia, amblyogenic unilateral or bilateral astigmatism, or image degradation. [14] When the covered eye is the non-amblyopic eye, the amblyopic eye suddenly becomes the person's only means of seeing. The strabismus is revealed by the movement of that eye to fixate on the examiner's finger. There are also vertical tropias (hypertropia and hypotropia) and cyclotropias.

Binocular vision anomalies include: diplopia (double vision), visual confusion (the perception of two different images superimposed onto the same space), suppression (where the brain ignores all or part of one eye's visual field), horror fusionis (an active avoidance of fusion by eye misalignment), and anomalous retinal correspondence (where the brain associates the fovea of one eye with an extrafoveal area of the other eye).

Binocular vision anomalies are among the most common visual disorders. They are usually associated with symptoms such as headaches, asthenopia, eye pain, blurred vision, and occasional diplopia. [19] About 20% of patients who come to optometry clinics have binocular vision anomalies. [19] The most effective way to diagnose these anomalies is with the near point of convergence (NPC) test. [19] During the NPC test, a target, such as a finger, is brought towards the face until the examiner notices that one eye has turned outward or the person experiences diplopia (double vision). [19]

Up to a certain extent, binocular disparities can be compensated for by adjustments of the visual system. If, however, defects of binocular vision are too great – for example if they would require the visual system to adapt to overly large horizontal, vertical, torsional or aniseikonic deviations – the eyes tend to avoid binocular vision, ultimately causing or worsening a condition of strabismus.


Defining the Embryonic Stage

After a blastocyst implants in the uterus around the end of the first week after fertilization, its internal cell mass, which was called the embryoblast, is now known as the embryo. The embryonic stage lasts through the eighth week following fertilization, after which the embryo is called a fetus. The embryonic stage is short, lasting only about seven weeks in total, but the developments that occur during this stage bring about enormous changes in the embryo. During the embryonic stage, the embryo becomes not only bigger but also much more complex. Figure 2 shows an eight- to nine-week-old embryo; its fingers, toes, head, eyes, and other structures are visible. It is no exaggeration to say that the embryonic stage lays the necessary groundwork for all of the remaining stages of life.

Figure 2: An eight- to nine-week-old embryo


Taste and Smell

Taste and smell are both abilities to sense chemicals, so both taste and olfactory (odor) receptors are chemoreceptors . Both types of chemoreceptors send nerve impulses to the brain along sensory nerves, and the brain “tells” us what we are tasting or smelling.

Taste receptors are found in tiny bumps on the tongue called taste buds. You can see a diagram of a taste receptor cell and related structures in Figure 8.7.10. Taste receptor cells make contact with chemicals in food through tiny openings called taste pores. When certain chemicals bind with taste receptor cells, they generate nerve impulses that travel through afferent nerves to the CNS. There are separate taste receptors for sweet, salty, sour, bitter, and meaty tastes. The meaty, or savory, taste is called umami.

Figure 8.7.10 Taste receptor cells are located in taste buds on the tongue. Basal cells are not involved in tasting, but differentiate into taste receptor cells. Taste receptor cells are replaced about every nine to ten days.

Figure 8.7.11 The yellow structures inside this drawing of the nasal passages are an olfactory nerve with many nerve endings. The nerve endings sense chemicals in the air as it passes through the nasal cavities.

Lenses, Diffraction and Aberrations

Lenses and Accommodation

What prevents the optics of our eye from focusing the image perfectly? To answer this question we should consider why a lens is useful in bringing objects to focus at all.

Figure 2.16: Snell's law. The solid lines indicate surface normals and the dashed lines indicate the light ray. (a) When a light ray passes from one medium to another, the ray can be refracted so that the angle of incidence (φ) does not equal the angle of refraction (φ′). The angle of refraction depends on the refractive indices of the two media (n and n′), a relationship called Snell's law, defined in Equation 14 (after Jenkins and White, figures 1H, page 15, and 2H, page 30). (b) A prism causes two refractions of the light ray and can reverse the ray's direction from upward to downward. (c) A lens combines the effect of many prisms in order to converge the rays diverging from a point source (after Jenkins and White, figure 1F, page 12).

As a ray of light is reflected from an object, it travels along a straight line until it reaches a new material boundary. At that point, the ray may be absorbed by the new medium, reflected, or refracted. The latter two possibilities are illustrated in part (a) of Figure 2.16. We call the angle between the incident ray of light and the perpendicular to the surface the angle of incidence. The angle between the reflected ray and the perpendicular to the surface is called the angle of reflection, and it equals the angle of incidence. Of course, reflected light is not useful for image formation at all.

The useful rays for imaging must pass from the first medium into the second. As they pass between the two media, the rays are refracted. The angle between the refracted ray and the perpendicular to the surface is called the angle of refraction.

The relationship between the angle of incidence and the angle of refraction was first discovered by a Dutch astronomer and mathematician, Willebrord Snell, in 1621. He observed that if φ is the angle of incidence and φ′ is the angle of refraction, then

n sin φ = n′ sin φ′    (14)

The terms n and n′ in Equation 14 are the refractive indices of the two media. The refractive index of an optical medium is the ratio of the speed of light in a vacuum to the speed of light in the optical medium. The refractive index of glass is about 1.5, for water the refractive index is about 1.33, and for air it is nearly 1.0. The refractive index of the human cornea, about 1.38, is quite similar to that of water, which is the main content of our eyes.
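A small numeric check of Equation 14 (an illustrative sketch; the refraction angle is solved directly from Snell's law, using the approximate indices for air and water given above):

```python
import math

def refraction_angle_deg(incidence_deg: float, n1: float, n2: float) -> float:
    """Solve Snell's law, n1 * sin(phi) = n2 * sin(phi'), for phi'."""
    sin_refracted = n1 * math.sin(math.radians(incidence_deg)) / n2
    return math.degrees(math.asin(sin_refracted))

# A ray entering water (n' ~ 1.33) from air (n ~ 1.0) at 30 degrees of
# incidence bends toward the surface normal:
print(refraction_angle_deg(30.0, 1.0, 1.33))  # ~22.1 degrees
```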

Now, consider the consequence of applying Snell's law twice in a row as light passes into and then out of a prism, as illustrated in part (b) of Figure 2.16. We can draw the path of the ray as it enters the prism using Snell's law. The symmetry of the prism and the reversibility of the light path make it easy to draw the exit path. Passage through the prism bends the ray's path downward. The prism causes the light to deviate significantly from a straight path; the amount of the deviation depends upon the angle of incidence and the angle between the two sides of the prism.

We can build a lens by smoothly combining many infinitesimally small prisms to form a convex lens, as illustrated in part (c) of Figure 2.16. In constructing such a lens, any deviations from the smooth shape, or imperfections in the material used to build the lens, will cause the individual rays to be brought to focus at slightly different points in the image plane. These small deviations of shape or materials are a source of the imperfections in the image.

Objects at different depths are focused at different distances behind the lens. The lensmaker’s equation relates the distance between the source and the lens to the distance between the image and the lens; the relationship depends on the focal length of the lens. Call the distance from the center of the lens to the source s, the distance from the lens to the image d, and the focal length of the lens f. Then the lensmaker’s equation is

1/s + 1/d = 1/f    (15)

From this equation, notice that we can measure the focal length of a convex thin lens by using it to image a very distant object. In that case, the term 1/s is essentially zero, so the image distance d equals the focal length f. When I first moved to California, I spent a lot of time measuring the focal length of the lenses in my laboratory by going outside and imaging the sun on a piece of paper behind the lens; the sun was a convenient source at optical infinity. It had been a less reliable source for me in my previous home.
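As a quick numerical check of Equation 15, the following sketch (the function name and the focal length are illustrative) computes the image distance and shows it converging to the focal length as the source recedes:

```python
# A small sketch of the lensmaker's equation (Equation 15): 1/s + 1/d = 1/f.
def image_distance(source_dist_m, focal_length_m):
    """Solve 1/s + 1/d = 1/f for the image distance d (meters)."""
    return 1.0 / (1.0 / focal_length_m - 1.0 / source_dist_m)

# As the source distance grows, 1/s -> 0 and the image distance approaches f:
for s in (0.5, 1.0, 10.0, 1e9):          # the last value stands in for the sun
    print(f"s = {s:>12} m  ->  d = {image_distance(s, 0.05):.4f} m")
# With f = 0.05 m the image distance converges to 0.05 m, which is why
# imaging a very distant object measures the focal length.
```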

Figure 2.17: Depth of Field in the Human Eye. Image distance is shown as a function of source distance. The bar on the vertical axis shows the distance of the retina from the lens center. A lens power of 60 diopters brings distant objects into focus, but not nearby objects; to bring nearby objects into focus, the power of the lens must increase. The depth of field, namely the distance over which objects will continue to be in reasonable focus, can be estimated from the slope of the curve.

The optical power of a lens is a measure of how strongly the lens bends the incoming rays. Since a short focal length lens bends an incident ray more than a long focal length lens, optical power is inversely related to focal length. The optical power is defined as the reciprocal of the focal length measured in meters and is specified in units of diopters. When we view faraway objects, the distance from the middle of the cornea and the flexible lens to the retina is 0.017 m. Hence, the optical power of the human eye is 1/0.017, or roughly 60 diopters.

From the optical power of the eye (roughly 60 diopters) and the lensmaker’s equation, we can calculate the image distance of a source at any distance. For example, the top curve in Figure 2.17 shows the relationship between image distance and source distance for a 60 diopter lens. Sources beyond 1.0 m are imaged at essentially the same distance behind the optics. Sources closer than 1.0 m are imaged at a longer distance, so that the retinal image is blurred.
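The top curve in Figure 2.17 can be reproduced from Equation 15 alone. This short sketch (assuming a fixed 60-diopter lens and illustrative source distances) computes the image distance in millimeters:

```python
# A sketch of the top curve in Figure 2.17: image distance behind a fixed
# 60-diopter lens as the source distance varies (power in diopters = 1/f in meters).
def image_distance_mm(source_dist_m, power_diopters):
    f = 1.0 / power_diopters                      # focal length in meters
    d = 1.0 / (1.0 / f - 1.0 / source_dist_m)     # lensmaker's equation
    return d * 1000.0

for s in (10.0, 1.0, 0.5, 0.2):
    print(f"source at {s:>5} m -> image at {image_distance_mm(s, 60.0):.2f} mm")
# Sources beyond about 1 m all image near 16.7 mm (the retina); closer
# sources image farther back, so they are blurred on the retina.
```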

To bring nearby sources into focus on the retina, muscles attached to the lens change its shape and thus change the power of the lens. The bottom two curves in Figure 2.17 illustrate that sources closer than 1.0 m can be focused onto the retina by increasing the power of the lens. The process of adjusting the focal length of the lens is called accommodation. You can see the effect of accommodation by first focusing on your finger placed near your nose and noticing that objects in the distance appear blurred. Then, while leaving your finger in place, focus on the distant objects. You will notice that your finger now appears blurred.
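A rough way to quantify accommodation, under the simplifying assumption of a single thin lens 17 mm in front of the retina, is to solve Equation 15 for the power needed at each source distance. The sketch below (names and distances are illustrative) does just that:

```python
# A rough sketch of accommodation: the power needed to image a source at
# distance s onto a retina 17 mm behind the lens is P = 1/0.017 + 1/s diopters.
RETINA_M = 0.017

def required_power(source_dist_m):
    return 1.0 / RETINA_M + 1.0 / source_dist_m

for s in (1e9, 1.0, 0.25, 0.1):
    print(f"source at {s:>8} m -> {required_power(s):.1f} diopters")
# Distant sources need ~59 D; a source 10 cm away needs ~69 D. The lens
# must add roughly 10 D of power, matching the extra curves in Figure 2.17.
```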

Pinhole Optics and Diffraction

The only way to remove lens imperfections completely is to remove the lens. It is possible to focus images without any lens at all by using pinhole optics, as illustrated in Figure 2.18.

Figure 2.18: Pinhole Optics. Using ray-tracing, we see that only a small pencil of rays passes through a pinhole. (a) If we widen the pinhole, light from the source spreads across the image, making it blurry. (b) If we narrow the pinhole, only a small amount of light is let in. The image is sharp; the sharpness is limited only by diffraction.

A pinhole serves as a useful focusing element because only the rays passing within a narrow angle are used to form the image. As the pinhole is made smaller, the angular deviation is reduced. Reducing the size of the pinhole serves to reduce the amount of blur due to the deviation amongst the rays. Another advantage of using pinhole optics is that no matter how distant the source point is from the pinhole, the source is rendered in sharp focus. Since the focusing is due to selecting out a thin pencil of rays, the distance of the point from the pinhole is irrelevant and accommodation is unnecessary.

But the pinhole design has two disadvantages. First, as the pinhole aperture is reduced, less and less of the light emitted from the source is used to form the image. The reduction of signal has many disadvantages for sensitivity and acuity.

A second fundamental limit to the pinhole design is a physical phenomenon. When light passes through a small aperture, or near the edge of an aperture, the rays do not travel in a single straight line. Instead, the light from a single ray is scattered into many directions and produces a blurry image. The dispersion of light rays that pass by an edge or narrow aperture is called diffraction. Diffraction scatters the rays coming from a small source across the retinal image and therefore serves to defocus the image. The effect of diffraction when we take an image using pinhole optics is shown in Figure 2.19.

Figure 2.19: Diffraction limits the quality of pinhole optics. The three images show a bulb filament imaged through pinholes of decreasing size. (a) When the pinhole is relatively large, the image rays are not properly converged and the image is blurred. (b) Reducing the pinhole improves the focus. (c) Reducing the pinhole further worsens the focus because of diffraction.
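The trade-off visible in Figure 2.19 can be illustrated numerically. The sketch below uses a textbook approximation that is not from this chapter: geometric blur roughly equal to the aperture diameter, plus an Airy-disk diffraction blur of about 2.44λL/a for an image plane at distance L behind the pinhole. The additive blur model and the 2.44 factor are standard approximations, and the numbers are illustrative:

```python
# A hedged illustration of the pinhole trade-off: geometric blur grows with
# the aperture diameter a, while diffraction blur shrinks roughly as
# 2.44 * wavelength * L / a. Their sum has a minimum at an "optimal" pinhole.
import math

def total_blur_mm(aperture_mm, wavelength_nm=550.0, image_dist_mm=100.0):
    geometric = aperture_mm
    diffraction = 2.44 * (wavelength_nm * 1e-6) * image_dist_mm / aperture_mm
    return geometric + diffraction

best = min((a / 1000.0 for a in range(50, 2000)), key=total_blur_mm)
print(f"optimal pinhole ~ {best:.2f} mm")   # ~0.37 mm for these numbers
```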

Diffraction can be explained in two different ways. First, diffraction can be explained by thinking of light as a wave phenomenon. A wave exiting from a small aperture expands in all directions; a pair of coherent waves from adjacent apertures creates an interference pattern. Second, diffraction can be understood in terms of quantum mechanics; indeed, the explanation of diffraction is one of the important achievements of quantum mechanics. Quantum mechanics supposes that there are limits to how well we may know both the position and direction of travel of a photon of light. The more we know about a photon’s position, the less we can know about its direction. If we know that a photon has passed through a small aperture, then we know something about the photon’s position, and we must pay a price in terms of our uncertainty concerning its direction of travel. As the aperture becomes smaller, our certainty concerning the position of the photon becomes greater; the added uncertainty takes the form of a scattering of the directions of travel of the photons as they pass through the aperture. For very small apertures, for which our position certainty is high, the spread in the photons’ directions of travel is very broad, producing a very blurry image.

There is a close relationship between the uncertainty in the direction of travel and the shape of the aperture (see Figure 2.20). In all cases, however, when the aperture is relatively large, our knowledge of the spatial position of the photons is insignificant and diffraction does not contribute to defocus. As the pupil size decreases, and we know more about the position of the photons, the diffraction pattern becomes broader and spoils the focus.

Figure 2.20 Diffraction: Diffraction pattern caused by a circular aperture. (a) The image of a diffraction pattern measured through a circular aperture. (b) A graph of the cross-sectional intensity of the diffraction pattern. (After Goodman, 1968).

In the human eye, diffraction occurs because light must pass through the circular aperture defined by the pupil. When the ambient light intensity is high, the pupil may become as small as 2 mm in diameter. For a pupil opening this small, the optical blurring in the human eye is due only to the small region of the cornea and lens near the center of our visual field. With this small an opening of the pupil, the quality of the cornea and lens is rather good and the main source of image blur is diffraction. At low light intensities, the pupil diameter is as large as 8 mm. When the pupil is open quite wide, the distortion due to cornea and lens imperfections is large compared to the defocus due to diffraction.

One way to evaluate the quality of the optics is to compare the blurring of the eye to the blurring from diffraction alone. The dashed lines in Figure 2.12 plot the blurring expected from diffraction for different pupil widths. Notice that when the pupil is 2.4 mm, the observed linespread is about equal to the amount expected from diffraction alone; the lens causes no further distortion. As the pupil opens, the observed linespread is worse than the blurring expected from diffraction alone. For these pupil sizes the defocus is due mainly to imperfections in the optics.⁴

⁴ Helmholtz calculated that this was so long before any precise measurements of the optical quality of the eye were possible. He wrote, “The limit of the visual capacity of the eye as imposed by diffraction, as far as it can be calculated, is attained by the visual acuity of the normal eye with a pupil of the size corresponding to a good illumination.” (Helmholtz, 1909, p. 442)
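Helmholtz’s point can be checked with a back-of-envelope calculation. Under the standard diffraction-limited approximation (not derived in this chapter), a circular aperture of diameter D passes angular frequencies up to about D/λ cycles per radian; the function name and wavelength below are illustrative:

```python
# A back-of-envelope sketch of diffraction limits in the eye: a diffraction-
# limited circular aperture of diameter D passes angular frequencies up to
# D / wavelength cycles per radian, i.e. (D / wavelength) * (pi / 180)
# cycles per degree.
import math

def cutoff_cpd(pupil_mm, wavelength_nm=555.0):
    return (pupil_mm * 1e-3) / (wavelength_nm * 1e-9) * math.pi / 180.0

for d in (2.0, 2.4, 8.0):
    print(f"{d} mm pupil -> ~{cutoff_cpd(d):.0f} cycles/degree")
# A 2 mm pupil cuts off near 60 cycles/degree, close to normal human acuity;
# an 8 mm pupil would allow far more, but lens imperfections dominate there.
```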

The Pointspread Function and Astigmatism

Most images, of course, are not composed of weighted sums of lines. The images that can be formed from sums of lines sharing a single orientation are all one-dimensional patterns. To create more complex images, we must either use lines with different orientations or use a different fundamental stimulus: the point.

Any two-dimensional image can be described as the sum of a set of points. If the system we are studying is linear and shift-invariant, we can use the response to a point and the principle of superposition to predict the response of the system to any two-dimensional image. The measured response to a point input is called the pointspread function. A pointspread function and the superposition of two nearby pointspreads are illustrated in Figure 2.21.

Figure 2.21 Pointspread Function: (a) A pointspread function and (b) the sum of two pointspreads. The pointspread function is the image created by a source consisting of a small point of light. When the optics are shift-invariant, the image of any stimulus can be predicted from the pointspread function.

Since lines can be formed by adding together many different points, we can compute a system’s linespread function from its pointspread function. In general, we cannot deduce the pointspread function from the linespread because there is no way to add a set of lines, all oriented in the same direction, to form a point. If it is known that the pointspread function is circularly symmetric, however, a unique pointspread function can be deduced from the linespread function. The calculation is described in the beginning of Goodman (1968) and in Yellott, Wandell and Cornsweet (1981).
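The forward computation, from pointspread to linespread, is easy to carry out numerically. This sketch assumes a circularly symmetric Gaussian pointspread (an assumption made for illustration; the eye’s actual pointspread differs):

```python
# A numerical sketch of the point above: summing a pointspread along one
# axis yields the linespread, because a line is a sum of points.
import numpy as np

x = np.linspace(-1.0, 1.0, 201)
xx, yy = np.meshgrid(x, x)
sigma = 0.2
pointspread = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
pointspread /= pointspread.sum()          # normalize total intensity to 1

# Summing over y: each column collects all the points along a vertical line.
linespread = pointspread.sum(axis=0)
print(linespread.max())   # the 1-D profile a thin-line stimulus would produce
```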

Figure 2.22 Astigmatism: Astigmatism implies an asymmetric pointspread function. The pointspread shown here is narrow in one direction and wide in another. The spatial resolution of an astigmatic system is better in the narrow direction than the wide direction.

When the pointspread function is not circularly symmetric, measurements of the linespread function will vary with the orientation of the test line. It may be possible to adjust the accommodation of this type of system so that any single orientation is in good focus, but it will be impossible to bring all orientations into good focus at the same time. For the human eye, astigmatism can usually be modeled by describing the defocus as derived from the contributions of two one-dimensional systems at right angles to one another. The defocus at intermediate angles can be predicted from the defocus of these two systems.

Chromatic Aberration

Figure 2.23 Chromatic Aberration: Chromatic aberration of the human eye. (a) The data points are from Wald and Griffin (1947) and Bedford and Wyszecki (1957). The smooth curve plots the formula used by Thibos et al. (1992), D(λ) = p − q/(λ − c), where λ is the wavelength in micrometers, D(λ) is the defocus in diopters, p = 1.7312, q = 0.63346, and c = 0.21410. This formula implies an in-focus wavelength of 578 nm. (b) The power of a thin lens is the reciprocal of its focal length, which is the image distance for a source at infinity. (After Marimont and Wandell, 1993).

The light incident at the eye is usually a mixture of different wavelengths. When we measure the system response, there is no guarantee that the linespread or pointspread function will be the same at different wavelengths. Indeed, for most biological eyes the pointspread function differs considerably when measured using different wavelengths of light. When the pointspread functions at different wavelengths differ markedly, the lens is said to exhibit chromatic aberration.

When the incident light is a mixture of many different wavelengths, say white light, we can see a chromatic fringe at edges. The fringe occurs because the different wavelength components of the white light are focused more or less sharply. Figure 2.23a plots one measure of the chromatic aberration: the lens power, measured in diopters, needed to bring each wavelength into focus along with a 578 nm light.

Figure 2.23 shows the optical power of a lens necessary to correct for the chromatic aberration of the eye. When the various wavelengths pass through the correcting lens, the optics will have the same power as the eye’s optics at 578nm. The two sets of measurements agree well with one another and are similar to what would be expected if the eye were simply a bowl of water. The smooth curve through the data is a curve used by Thibos et al. (1992) to predict the data.
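The Thibos et al. (1992) formula quoted in the caption of Figure 2.23 is simple to evaluate. The sketch below plugs in the published constants and shows the defocus passing through zero near the in-focus wavelength (the function name and the sample wavelengths are ours):

```python
# The defocus formula from the Figure 2.23 caption (Thibos et al., 1992):
# D(lam) = p - q / (lam - c), with lam in micrometers and D in diopters.
p, q, c = 1.7312, 0.63346, 0.21410

def defocus_diopters(wavelength_um):
    return p - q / (wavelength_um - c)

for nm in (420, 480, 578, 650):
    print(f"{nm} nm -> {defocus_diopters(nm / 1000.0):+.2f} D")
# The defocus is essentially zero near 578 nm and exceeds a diopter of
# error at short wavelengths, consistent with the data in Figure 2.23a.
```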

An alternative method of representing the axial chromatic aberration of the eye is to plot the modulation transfer function at different wavelengths. The two surface plots in Figure 2.24 show the modulation transfer function at a series of wavelengths. The plots show the same data, seen from two different points of view so that you can see around the hill. The calculation in the figure is based on an eye with a pupil diameter of 3.0 mm, the same chromatic aberration as the human eye, and perfect focus, apart from diffraction, at 580 nm.

Figure 2.24 OTF of Chromatic Aberration: Two views of the modulation transfer function of a model eye at various wavelengths. The model eye has the same chromatic aberration as the human eye (see Figure 2.23) and a 3.0 mm pupil diameter. The eye is in focus at 580 nm; the curve at 580 nm is diffraction-limited. The retinal image has no contrast beyond four cycles per degree at short wavelengths. (From Marimont and Wandell, 1993).

The retinal image contains very poor spatial information at wavelengths that are far from the best plane of focus. By accommodation, the human eye can place any wavelength into good focus, but it is impossible to focus all wavelengths simultaneously.⁵

⁵ A possible method of improving the spatial resolution of the eye at different wavelengths of light is to place the different classes of photoreceptors in slightly different image planes. Ahnelt et al. (1987) and Curcio et al. (1991) have observed that the short-wavelength photoreceptors have a slightly different shape and length from the middle- and long-wavelength photoreceptors. In principle, this difference could help compensate for the chromatic aberration of the eye. But the difference is very small, and it is unlikely that it plays any significant role in correcting for axial chromatic aberration.

