Camera and the eye (filtering information)

How does a camera recreate a very similar perceptual stimulus (through the photo) for us to see compared to the person directly viewing the object of interest with their eye?

My question stems from what I've already learned about the visual system:

1) The retina receives information from the external world, but not 100% of the photons (the form of this information).

2) This information is then passed to the superior colliculus where further filtering, and changes to this information occurs.

3) It is next passed to the thalamus, which relays it to the occipital lobe; interpretation there (another filtering process) causes us to perceive the external world in a particular way.

Is it that the camera captures the light information on the film that would otherwise have made contact with our retina? And then upon seeing the photo, that same information (or similar since the camera would not exactly capture it, but enough for us to notice that we are seeing what we expected to see) finally makes contact with the retina and then the process occurs as described in the points above?

If so, how does the camera do this!? Because I thought the camera would also be selective in which light it uses to recreate these images. And I'm incredibly doubtful that a camera simulates the filtering processes of the human brain. Moreover, it doesn't make sense that we would perceive the expected image in the photo if it was already based on the filtered information.

I am very curious and very confused. :)


Unique camera enables researchers to see the world the way birds do

Using a specially designed camera, researchers at Lund University in Sweden have succeeded for the first time in recreating how birds see colours in their surroundings. The study reveals that birds see a very different reality compared to what we see.

Human colour vision is based on three primary colours: red, green and blue. The colour vision of birds is based on the same three colours -- but also ultraviolet. Biologists at Lund have now shown that the fourth primary colour of birds, ultraviolet, means that they see the world in a completely different way. Among other things, birds see contrasts in dense forest foliage, whereas people only see a wall of green.

"What appears to be a green mess to humans are clearly distinguishable leaves for birds. No one knew about this until this study," says Dan-Eric Nilsson, professor at the Department of Biology at Lund University.

For birds, the upper sides of leaves appear much lighter in ultraviolet. From below, the leaves are very dark. In this way the three-dimensional structure of dense foliage is obvious to birds. This in turn makes it easy for them to move, find food and navigate. People, on the other hand, do not perceive ultraviolet, and see the foliage in green, the primary colour where contrast is the worst.

Dan-Eric Nilsson founded the world-leading Lund Vision Group at Lund University. The study in question is a collaboration with Cynthia Tedore and was conducted during her time as a postdoc in Lund. She is now working at the University of Hamburg.

It is the first time that researchers have succeeded in imitating bird colour vision with a high degree of precision. This was achieved with the help of a unique camera and advanced calculations. The camera was designed within the Lund Vision Group and equipped with rotating filter wheels and specially manufactured filters, which make it possible to show clearly what different animals see. In this case, the camera imitates with a high degree of accuracy the colour sensitivity of the four different types of cones in bird retinas.

"We have discovered something that is probably very important for birds, and we continue to reveal how reality appears also to other animals," says Dan-Eric Nilsson, continuing:

"We may have the notion that what we see is the reality, but it's a highly human reality. Other animals live in other realities, and we can now see through their eyes and reveal many secrets. Reality is in the eye of the beholder," he concludes.

Evolution of the eye

They appeared in an evolutionary blink and changed the rules of life for ever. Before eyes, life was gentler and tamer, dominated by sluggish soft-bodied creatures lolling around in the sea. The invention of the eye ushered in a more brutal and competitive world. Vision made it possible for animals to become active hunters, and sparked an evolutionary arms race that transformed the planet.

When did eyes evolve?

The first eyes appeared about 541 million years ago – at the very beginning of the Cambrian period when complex multicellular life really took off – in a group of now extinct animals called trilobites which looked a bit like large marine woodlice. Their eyes were compound, similar to those of modern insects. And their appearance in the fossil record is strikingly sudden. Trilobite ancestors from 544 million years ago don’t have eyes. So what happened in that magic million years? Surely eyes, with their interconnected assemblage of retina, lens, pupil and optic nerve, are just too complex to appear all of a sudden?

How did eyes evolve?


The complexity of the eye has long been an evolutionary battleground. Ever since William Paley came up with the watchmaker analogy in 1802 – which claimed that something as complex as a watch must have a maker – creationists have used it to make the “argument from design”. Eyes are so intricate, they say, that it stretches credibility to suggest they evolved through the selection and accumulation of random mutations.

Charles Darwin was well aware of the argument. In On the Origin of Species he admitted that eyes were so complex that their evolution seemed “absurd to the highest degree”. But he went on to convincingly argue that it only seemed absurd. Complex eyes could have evolved from very simple ones by natural selection as long as each gradation was useful. The key to the puzzle, Darwin said, was to find eyes of intermediate complexity in the animal kingdom that would demonstrate a possible path from simple to sophisticated.

Those intermediate forms have now been found. According to evolutionary biologists, it would have taken less than half a million years for the most rudimentary eye to evolve into a complex “camera” eye like ours.

The first step is to evolve light-sensitive cells. This appears to be a trivial matter. Many single-celled organisms have eyespots made of light-sensitive pigments. Some can even swim towards or away from light. Such rudimentary light-sensing abilities confer an obvious survival advantage.

The next step was for multicellular organisms to concentrate their light-sensitive cells into a single location. Patches of photosensitive cells were probably common long before the Cambrian, allowing early animals to detect light and sense what direction it was coming from. Such rudimentary visual organs are still used by jellyfish and flatworms and other primitive groups, and are clearly better than nothing.

The first eyes in nature

The simplest organisms with photosensitive patches are hydras – freshwater creatures related to jellyfish. They have no eyes but will contract into a ball when exposed to bright light. Hydras are interesting from an evolutionary perspective because their basic light-sensing equipment is very similar to that seen in other evolutionary lineages, including mammals. It is based on two types of protein: opsins, which change shape when light strikes them, and ion channels, which respond to the shape-shifting by generating an electrical signal. Genetic research suggests that all opsin/ion channel systems evolved from a common ancestor similar to hydras, pointing to a single evolutionary origin of all visual systems.

The next step is to evolve a small depression containing the light-sensitive cells. This makes it easier to discriminate the direction the light is coming from and hence sense movement. The deeper the pit, the sharper the discrimination.

Further improvement can then be made by narrowing the opening of the pit so that light enters through a small aperture, like a pinhole camera. With this sort of equipment it becomes possible for the retina to resolve images – a vast improvement on previous models. Pinhole camera eyes, lacking a lens and cornea, are found in the nautilus today.

The final big change is to evolve a lens. This probably started out as a protective layer of skin that grew over the opening. But it evolved into an optical instrument capable of focusing light on to the retina. Once that happened, the effectiveness of the eye as an imaging system went through the roof, from about 1 per cent to 100 per cent.

Eyes of this kind are still found in cubozoans, highly mobile and venomous marine predators similar to jellyfish. They have 24 eyes arranged in four clusters: 16 are simply light-sensitive pits, but one pair in each cluster is complex, with a sophisticated lens, retina, iris and cornea.

Trilobites went down a slightly different route, evolving compound eyes with multiple lenses. But the basic sequence of events was the same.

Trilobites weren’t the only animals to stumble across this invention, although they were the first. Biologists believe that eyes evolved independently on many, possibly hundreds, of occasions.

And what a difference it made. In the sightless world of the Early Cambrian, vision was tantamount to a superpower. Trilobites became the first active predators, able to seek out and chase down prey like no animal before. Unsurprisingly, their victims counter-evolved. Just a few million years later, eyes were everywhere and animals were more active and bristling with armour. This burst of evolutionary innovation is what we now know as the Cambrian Explosion.

However, sight is not universal. Of 37 phyla of multicellular animals, only six have evolved it. But these six – including our own phylum, chordates, plus arthropods and molluscs – are the most abundant, widespread and successful animals on the planet.

How Do We See?

Detailed diagram of the eye and its parts.

Take a look around you. What do you see? You might see a computer or phone with a shining, colorful screen. A piece of paper may be under your left hand and a sharpened pencil in your right hand. While you look at these objects with your eyes, your brain is what is recognizing the objects. Many people take sight for granted, but how are you able to see and register objects?

You probably already know that your body has five senses that help you experience the world around you. These senses are touch, taste, hearing, smell, and sight. Although all of your senses are important, many people think that sight would be the most difficult one to live without.

If you could not see, how would you watch TV, cook food and not burn yourself, or walk across the street without being hit by a car? Many people do all kinds of activities without being able to see. Let's learn a bit more about how vision works.

A comparison of a camera and an eye.

The information that some animals receive through their eyes is called “visual information” or “vision.” For now, let's think of the eye as a sort of camera.

Researchers solve mystery of deep-sea fish with tubular eyes and transparent head

The barreleye (Macropinna microstoma) has extremely light-sensitive eyes that can rotate within a transparent, fluid-filled shield on its head. The fish’s tubular eyes are capped by bright green lenses. The eyes point upward (as shown here) when the fish is looking for food overhead. They point forward when the fish is feeding. The two spots above the fish’s mouth are olfactory organs called nares, which are analogous to human nostrils. Image: © 2004 MBARI

Researchers at the Monterey Bay Aquarium Research Institute recently solved the half-century-old mystery of a fish with tubular eyes and a transparent head. Ever since the “barreleye” fish Macropinna microstoma was first described in 1939, marine biologists have known that its tubular eyes are very good at collecting light. However, the eyes were believed to be fixed in place and seemed to provide only a “tunnel-vision” view of whatever was directly above the fish’s head. A new paper by Bruce Robison and Kim Reisenbichler shows that these unusual eyes can rotate within a transparent shield that covers the fish’s head. This allows the barreleye to peer up at potential prey or focus forward to see what it is eating.

Deep-sea fish have adapted to their pitch-black environment in a variety of amazing ways. Several species of deep-water fishes in the family Opisthoproctidae are called “barreleyes” because their eyes are tubular in shape. Barreleyes typically live near the depth where sunlight from the surface fades to complete blackness. They use their ultra-sensitive tubular eyes to search for the faint silhouettes of prey overhead.

In this image, you can see that, although the barreleye is facing downward, its eyes are still looking straight up. This close-up “frame grab” from video shows a barreleye that is about 140 mm (six inches) long. Image: © 2004 MBARI

Although such tubular eyes are very good at collecting light, they have a very narrow field of view. Furthermore, until now, most marine biologists believed that the barreleye’s eyes were fixed in its head, which would allow it only to look upward. This would make it impossible for the fishes to see what was directly in front of them, and very difficult for them to capture prey with their small, pointed mouths.

Robison and Reisenbichler used video from MBARI’s remotely operated vehicles (ROVs) to study barreleyes in the deep waters just offshore of Central California. At depths of 600 to 800 meters (2,000 to 2,600 feet) below the surface, the ROV cameras typically showed these fish hanging motionless in the water, their eyes glowing a vivid green in the ROV’s bright lights. The ROV video also revealed a previously undescribed feature of these fish: their eyes are surrounded by a transparent, fluid-filled shield that covers the top of the head.

This face-on view of a barreleye shows its transparent shield lit up by the lights of MBARI’s remotely operated vehicle Tiburon. As in the other photos, the two spots above the fish’s mouth are olfactory organs called nares, which are analogous to human nostrils. Image: © 2006 MBARI

Most existing descriptions and illustrations of this fish do not show its fluid-filled shield, probably because this fragile structure was destroyed when the fish were brought up from the deep in nets. However, Robison and Reisenbichler were extremely fortunate–they were able to bring a net-caught barreleye to the surface alive, where it survived for several hours in a ship-board aquarium. Within this controlled environment, the researchers were able to confirm what they had seen in the ROV video–the fish rotated its tubular eyes as it turned its body from a horizontal to a vertical position.

In addition to their amazing “headgear,” barreleyes have a variety of other interesting adaptations to deep-sea life. Their large, flat fins allow them to remain nearly motionless in the water, and to maneuver very precisely (much like MBARI’s ROVs). Their small mouths suggest that they can be very precise and selective in capturing small prey. On the other hand, their digestive systems are very large, which suggests that they can eat a variety of small drifting animals as well as jellies. In fact, the stomachs of the two net-caught fish contained fragments of jellies.

After documenting and studying the barreleye’s unique adaptations, Robison and Reisenbichler developed a working hypothesis about how this animal makes a living. Most of the time, the fish hangs motionless in the water, with its body in a horizontal position and its eyes looking upward. The green pigments in its eyes may filter out sunlight coming directly from the sea surface, helping the barreleye spot the bioluminescent glow of jellies or other animals directly overhead. When it spots prey (such as a drifting jelly), the fish rotates its eyes forward and swims upward, in feeding mode.

MBARI researchers speculate that Macropinna microstoma may eat animals that have been captured in the tentacles of jellies, such as this siphonophore in the genus Apolemia . The “head” of the siphonophore (at right) pulls the animal through the water, its stinging tentacles streaming out like a living drift net. Image: © 2001 MBARI

Barreleyes share their deep-sea environment with many different types of jellies. Some of the most common are siphonophores (colonial jellies) in the genus Apolemia. These siphonophores grow to over 10 meters (33 feet) long. Like living drift nets, they trail thousands of stinging tentacles, which capture copepods and other small animals. The researchers speculate that barreleyes may maneuver carefully among the siphonophore’s tentacles, picking off the captured organisms. The fish’s eyes would rotate to help the fish keep its “eyes on the prize,” while its transparent shield would protect the fish’s eyes from the siphonophore’s stinging cells.

Robison and Reisenbichler hope to do further research to find out if their discoveries about Macropinna microstoma also apply to other deep-sea fish with tubular eyes. The bizarre physiological adaptations of the barreleyes have puzzled oceanographers for generations. It is only with the advent of modern underwater robots that scientists have been able to observe such animals in their native environment, and thus to fully understand how these physical adaptations help them survive.

Video of barreleye narrated by Bruce Robison:

B. H. Robison and K. R. Reisenbichler. Macropinna microstoma and the paradox of its tubular eyes. Copeia. 2008, No. 4, December 18, 2008.

Color vision

The distance between the peaks in one cycle of an electromagnetic wave is its wavelength (symbol λ), measured in nanometers (billionths of a meter). The number of wave peaks within a standard distance is the wavenumber, the reciprocal of wavelength (1/λ), which must be multiplied by 10 million to yield waves per centimeter. Thus, a wavelength of 500 nm equals a wavenumber of (1/500) × 10⁷, or 20,000 waves per centimeter.
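The conversion described above can be sketched in a couple of lines of Python, reproducing the 500 nm example from the text:

```python
# Wavenumber is the reciprocal of wavelength. With the wavelength in
# nanometers, multiply by 1e7 (10 million) to get waves per centimeter,
# since 1 cm = 1e7 nm.

def wavenumber_per_cm(wavelength_nm: float) -> float:
    """Wavenumber in waves per centimeter for a wavelength given in nm."""
    return 1e7 / wavelength_nm

print(wavenumber_per_cm(500))  # 20000.0, matching the 500 nm example
```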

Light waves increase in frequency (number of cycles per second) as the radiation increases in energy: "short" wavelength, high-frequency light has roughly twice the energy of "long" wavelength, low-frequency light.

Frequency is a constant property of light at a given energy. When light passes through a transmitting (translucent or transparent) material, the speed of light and the corresponding wavelength of light are reduced somewhat, although the frequency of the light remains unchanged. This produces the characteristic refraction or "bending" of light as the light waves cross the boundary between different media, such as air and water or air and glass.

The ratio between the speed of light in air and its speed through a transmitting medium — which determines the amount of bending produced in the light beam — is the refractive index of the medium. The baseline wavelength and speed of light are usually measured in air at the earth's surface.
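As a minimal numerical sketch of the point above (using the familiar index n ≈ 1.33 for water; the speed value is approximate), frequency is held fixed while speed and wavelength are both divided by the refractive index:

```python
# When light enters a denser medium, frequency is unchanged; speed and
# wavelength are both divided by the refractive index n of the medium.

C_AIR = 2.998e8  # approximate speed of light in air, m/s

def in_medium(wavelength_nm: float, n: float) -> tuple[float, float]:
    """Return (wavelength in nm, speed in m/s) of the light inside the medium."""
    return wavelength_nm / n, C_AIR / n

wl, speed = in_medium(500.0, 1.33)  # "green" light passing into water
print(round(wl, 1))  # 375.9 nm inside the water; the frequency is unchanged
```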

Describing Light & Color. Light is the electromagnetic radiation that stimulates the eye. This stimulation depends on both energy (frequency, expressed as wavelength) and quantity of light (number of photons).

the visible electromagnetic spectrum

spectrum colors as produced by a diffraction grating (IR = infrared, UV = ultraviolet); for a discussion of spectral color reproduction on a computer monitor, see the Rendering Spectra page by Andrew Young

The figure shows the visible spectrum on a wavelength scale, roughly as it appears in sunlight reflected from a diffraction grating (such as a compact disc), which produces an equal spacing of light wavelengths. (A rainbow or glass prism produces an equal spacing of wavenumbers, which compresses the "blue" end of the spectrum.) Outside the visible range, electromagnetic radiation at higher energies (wavelengths shorter than 380 nanometers) is called ultraviolet; at still higher energies are X-rays and gamma rays. Lower energy radiation (at wavelengths longer than about 800 nm) is called infrared or heat; at still lower frequencies (longer wavelengths) are microwaves, television waves and radio waves.

Notice the very gradual falloff in luminosity at the near infrared (IR) end of the spectrum, and the relatively sharper falloff toward ultraviolet (UV). At the earth's surface, the absorbing effects of the ozone layer and lower atmosphere significantly filter short wavelength radiation below 450 nm and block all radiation below 320 nm. In addition, most wavelengths below 500 nm are blocked from reaching the retina by the transparent yellow tint in the adult lens and a protective yellow pigment layer on the retina. But in noon daylight there is as much energy in long wavelength (heat) radiation as there is in light, so the gradual falloff in perceptible "red" light is due to weaker visual sensitivity in longer wavelengths.

Thus, the range of light wavelengths is somewhat arbitrary. Photometric standards for the visible wavelengths at daylight levels of illumination run from 360 nm at the near ultraviolet end to 830 nm at the near infrared. However, under normal viewing conditions, the effective visual limits are between 400 nm and 700 nm, as shown in most diagrams on this site. Yet it is possible to see wavelengths down to 380 nm or up to 900 nm if the light is bright enough or viewed in near darkness.

Within the spectrum, the spectral hues do not have clear boundaries, but appear to shade continuously from one hue to the next across color bands of unequal width. It is easier to locate the center of these color categories than the edges; the approximate wavelength location of basic color categories (including cyan or blue green) is shown in the figure (above). Note the fairly sharp transitions from "blue" to "cyan" and from "green" to "yellow," and the narrow span of "cyan" and "yellow" (which can appear white in a rainbow).

I use quotation marks to refer to spectral colors because light itself has no color. Color is fundamentally a complex judgment experienced as a sensation. It is not an objective feature of the physical world — but it is not an illusion, either. A single wavelength of the spectrum, or monochromatic light, seen as an isolated, bright light in a dark surround, creates the perception of a recognizable hue, but the same light wavelength can change color if it is viewed in a different context. For example, long wavelength or "red" light can, in the right setting, appear red, scarlet, crimson, pink, maroon, brown, gray or even black! Similarly, in all the diagrams or illustrations of color vision (including the chromaticity diagram), spectrum colors are only symbols for the different wavelengths of light.

Despite all that, the abstract wavelength numbers are conveniently made more interpretable by the use of standard hue categories. I've summarized below the hue terminology adopted throughout this site. It uses the six primary hue categories: red, orange, yellow, green, blue and violet, with blends indicated by compound names in which the first hue is a tint or bias in the second hue: blue violet indicates a violet leaning toward blue.

Note: Hue boundaries rounded to the nearest 10 nm. Spectral hue boundaries are arbitrary, due to the gradual blending of one hue into the next, the shifts in hue boundaries produced by luminance changes, individual differences in color perception, and language variation in the number and meaning of hue categories. "c" means "complement of" [wavelength] for extraspectral hues (mixtures of "orange red" and "blue violet" light).
Sources: complementary hues from Wyszecki & Stiles (1982); hue boundaries from my own CIECAM spectral hue scaling of watercolor pigments, Munsell hue categories and spectral wavelengths.

These labels are only guidelines. In addition to the context factors mentioned above, the hue boundaries will appear to shift as the luminance (brightness) of the spectrum increases; individual differences in color perception create substantial disagreement in the location of color boundaries, or in the location of "pure" colors such as the unique hues (red, yellow, green and blue); and the location of boundaries will change with the number of hue categories used and the meaning assigned to them (especially across different languages or cultures).

Variations in Natural Light. Radiation from the sun that does get through the atmosphere and is visible to our eyes can be described in three ways:

• Sunlight is light coming directly from the sun — the image of the sun's disk or a shaft of sunlight into a darkened room. The solar color changes significantly across the day and depends on the angle of the sun above the horizon, the altitude of the viewer above sea level, the season, the geographical location and the amount of water vapor, dust and smoke in the air. The sun itself is so brilliant that it overwhelms color vision, making color judgments unreliable, but if the noon sun were dimmed sufficiently, its color would appear a pale greenish yellow (not the deep yellow of schoolroom paintings). This color appears in the positive afterimage of sunlight reflected off a car windshield.

• Skylight refers to the blue light of the sky as viewed from a location in complete shade, for example the light entering through a north facing window. It results from the scattering of short wavelength light by air molecules. This scattering is slightly stronger from the northern sky, opposite the generally southern origin of sunlight. The illuminance contribution of skylight is significant: though much dimmer than the sun's disk, the visible area of the sky is approximately 100,000 times larger, which is why daylight shadows are clearly illuminated and we can read a summer novel in deep shade.

• Daylight is the combined light of sun and sky, for example as reflected from an unshaded sheet of white paper illuminated outdoors. Significant color shifts occur in daylight, depending on geography, season and time of day, but it is unchanged by scattered clouds or overcast: these only dim the light and diffuse it.

The most accurate way to describe these color changes in natural light is by means of a spectral power distribution (SPD). This is a measurement of the radiance or radiant power (energy per second) of light within a small interval of the spectrum (such as 570-575 nm for wavelengths). Usually the power within each wavelength or wavenumber interval is shown as a proportion of a standard wavelength or maximum power, given an arbitrary value of 100, which creates a relative SPD.
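The normalization that turns a measured SPD into a relative SPD is a simple rescaling. A minimal sketch in Python, using invented sample values (not a real illuminant) and a 560 nm anchor:

```python
# A toy spectral power distribution: radiant power sampled at a few
# wavelengths (nm). The values are invented for illustration only.
spd = {450: 41.0, 500: 45.5, 560: 50.0, 600: 44.0, 650: 40.0}

def relative_spd(spd: dict[int, float], anchor_nm: int = 560) -> dict[int, float]:
    """Rescale an SPD so the power at anchor_nm equals 100 (a relative SPD)."""
    scale = 100.0 / spd[anchor_nm]
    return {wl: power * scale for wl, power in spd.items()}

rel = relative_spd(spd)
print(rel[560])  # 100.0 by construction; every other value is scaled to match
```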

Many relative SPDs have been published as standard illuminants, which are spectral templates used to model the characteristics of natural light and to describe artificial light sources and light filters. Two of these, the standard noon daylight illuminant (D65) and noon sunlight illuminant (D55), are shown below, along with the SPDs for north skylight and sunset light. (The numbers associated with the illuminants indicate the closest matching correlated color temperature of the light.)

spectral variations in natural light

standardized relative spectral power distributions for daylight phases across the visible spectrum, normalized to equal power at 560 nm, with the correlated color temperature (CCT) of each profile (Wyszecki & Stiles, 1982)

The most interesting of these templates is D65, the noon daylight illuminant. This SPD is useful in color vision research because it is perceived as a balanced "white" illumination across a wide range of illumination levels — rain or shine, we perceive daylight as "white" light, provided the sun is not near the horizon. This is the first of many facts that confirm our color vision is not an abstract and impartial color sensor but is a living system that anticipates the range of natural surface colors as they appear under natural illumination.

The noon north skylight illuminant is in contrast strongly skewed toward the "blue" wavelengths, and this "in the shade" illumination appears distinctly blue to the eye. Noon sunlight (D55) has a nearly flat distribution and appears to be a yellowish or pinkish white when the eye is adapted to noon daylight.

When the sun is lower in the sky at sunrise or sunset, sunlight must pass sideways through a much longer and denser section of the earth's atmosphere, which scatters most of the "blue" and "green" wavelengths to produce a distinctly yellow or red hue. (Sunlight is also reddened by dust storms, ash from volcanic eruptions or the smoke from large fires.) This lends morning or late afternoon light a strong yellow or red bias, climaxing in the deep orange of sunrise or sunset. Morning light has a softer, rosier color, in part because the cooler night air has a higher relative humidity that produces long wavelength filtering morning fogs or mists, and in part because the drop in temperature abates daytime winds and convection currents, allowing dust and smoke to settle out of the atmosphere.

Thus, the illumination that reveals our world is not constant but varies across a broad swath of tints from cool blues to warm yellows and reds. The eye is adapted to minimize the distorting effect that these color changes in the light have on the color appearance of objects.

Finally, the eye is adapted to function across illumination levels from 0.001 lux (starry night) to more than 100,000 lux (noon daylight), which makes us functional day or night. Moonlight has the same spectral power distribution as daylight, though much reduced in intensity, so the D65 illuminant stands for the daylight and nighttime extremes of natural light experience. However, our color experience of light and objects changes dramatically within that illumination range, as the eye changes from trichromatic photopic vision to monochromatic scotopic vision.
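The span just quoted, 0.001 lux to 100,000 lux, works out to eight orders of magnitude, which a one-line calculation confirms (values taken directly from the text):

```python
import math

# Operating range of the eye, using the figures quoted in the text:
# 0.001 lux (starry night) up to 100,000 lux (noon daylight).
low_lux, high_lux = 1e-3, 1e5
decades = math.log10(high_lux / low_lux)
print(decades)  # 8.0 orders of magnitude
```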

Design of the eye

The eye is a marvel of biological adaptation to a specific function. In large part this adaptation is successful because it separates visual tasks into four levels of structure: the optical eye, the retina, the photoreceptor cells, and the photopigment molecules.

"red" light has a longer wavelength (lower frequency) than "blue" light

The Optical Eye. At the largest scale, the eye is essentially a camera (diagram at right, top), equipped with a lens to focus light onto a photosensitive surface in its dark interior, in the same way a camera focuses light onto film.

The camera analogy primarily applies to the eye's front end. The lens and its transparent covering, the cornea, act as a compound lens to focus the image. The cornea's optical shape is maintained by gentle internal pressure from the aqueous humor between the cornea and lens. The cornea does most of the work in focusing light, as we discover when we swim underwater without a face mask. (Light is refracted or focused by the change in density between air and the liquid-filled cornea; immersing the eye in water eliminates this light-bending difference.) Because it is essential to vision, the cornea is protected by the bony brow, nose and cheek ridges nearby, and by the eye's extreme sensitivity to touch.

The lens is a flexible, transparent body, with a naturally rounded shape that is stretched along its front surface, like a trampoline canvas in its frame, by the constant tension of zonule fibers around its circumference. This tension flattens the lens for midrange and distance vision. To focus on nearby objects (closer than 5 meters), the ciliary muscles encircling the lens contract to close the opening spanned by the zonule fibers, which slackens the tension around the lens and allows it to resume its rounded shape, shortening the focal length. As people grow older, the lens hardens and does not return to its rounded shape when the ciliary muscles contract, producing the age related farsightedness called presbyopia.

The aperture into the eye, or pupil, is fringed by the light-sensitive iris, spread over the front of the lens, which acts as a diaphragm to adjust the pupil from a minimum diameter of 2mm up to a maximum of 5mm (in the elderly) to 8mm (in young adults). This produces a change in pupil area from about 3.5 mm² to between 20 and 35 mm², which provides an 87% to 95% reduction in the amount of light entering the eye. However, this represents a tiny fraction of the total range of illumination the eye can handle. Additional changes in luminance adaptation occur in the retina and brain across a span of several minutes; the iris makes prompt, momentary adjustments to changes in light intensity within the same light environment.
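As a rough arithmetic check on these figures, the area change and light reduction follow directly from the circular-aperture formula. This is a sketch using the quoted diameters, not measured pupil data:

```python
import math

def pupil_area_mm2(diameter_mm: float) -> float:
    """Area of a circular pupil, in square millimeters."""
    return math.pi * (diameter_mm / 2.0) ** 2

a_min = pupil_area_mm2(2.0)   # minimum pupil, ~3.1 mm^2
a_max = pupil_area_mm2(8.0)   # young-adult maximum, ~50 mm^2

# Fraction of light blocked when the pupil constricts from widest to narrowest:
reduction = 1.0 - a_min / a_max   # ~0.94, within the quoted 87% to 95% range
```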

The pupillary reflex is controlled by intrinsically photosensitive retinal ganglion cells (ipRGCs) that contain an invertebrate photopigment (melanopsin). These cells respond to light sluggishly, are less sensitive to variations in light (although they do adapt to light), and connect directly to thalamic and brainstem visual centers. They do not contribute to the visual image but regulate important light reflexes — the circadian rhythm, contraction of the iris, and melatonin suppression.

The rest of the eye is wrapped in a tough external coating of translucent white sclera, which attaches by tendons to the muscles that rotate the eye within the recessed bony eye socket. The rounded shape is maintained by internal pressure from the transparent, jellylike vitreous humor. (Excessive intraocular pressure can damage the optic nerve, causing glaucoma.) The inner surfaces of the iris and sclera are covered with a pigmented black membrane (the choroid) that (1) prevents unfocused light from shining through the sides of the eye, and (2) prevents light that enters the pupil from reflecting inside the eye. This pigment is lacking in albinos, causing light tinged the red of retinal blood vessels to reflect back out through the pupil.

three levels of structure in the eye

Prereceptoral Filtering. Several parts of the eye act as filters to block short wavelength "violet" light from reaching the retina. The most important of these are the cornea, lens, and macular pigment (right).

The cornea is nearly transparent to light except at very short wavelengths, where it filters up to 40% of incident light.

The lens is the principal source of prereceptoral filtering. Colorless at birth, it gradually yellows and darkens with age: the lens of an 80 year old filters out approximately twice as much short wavelength light as the lens of a 20 year old. In an adult the lens blocks at least 25% of incoming light at wavelengths below 450 nm and 50% or more at wavelengths below 430 nm. Removal of the lens in a cataract operation causes a significant increase in light sensitivity below 400 nm, called aphakic vision.

Finally, the fovea is veiled by a small patch of yellow macular pigment, which appears as a slight darkening of healthy retinal tissue. The macular pigment filters out 25% or more of light between 430 nm and 500 nm.

In the average adult, the combined ocular media screen out half or more of incident light at wavelengths below 490 nm and nearly all light below 400 nm. However, prereceptoral filtering varies significantly across age or ethnic groups and across individuals within any group. It has the largest effect under intense light (small pupil size and high photopigment bleaching).
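Because the cornea, lens and macular pigment sit in series, their transmittances multiply. The sketch below uses illustrative transmittance fractions loosely based on the percentages quoted above; they are assumptions, not measured data:

```python
# Illustrative transmittance fractions at roughly 450 nm (assumed values,
# loosely based on the percentages quoted in the text):
cornea = 0.95            # the cornea passes nearly all visible light
lens = 0.75              # adult lens blocks ~25% of light below 450 nm
macular = 0.75           # macular pigment blocks ~25% between 430 and 500 nm

# Filters in series multiply: each layer passes a fraction of what it receives.
combined = cornea * lens * macular
blocked = 1.0 - combined   # roughly half the incident light is screened out
```

The multiplicative rule is why the combined ocular media can screen out half the short wavelength light even though no single layer blocks more than a quarter or so.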

Prereceptoral filtering is important because it reduces chromatic aberration in the foveal image, and shields the delicate retinal cells from the extremely damaging effects of near ultraviolet light — the same wavelengths that sunburn skin and fade inks or paints.

Retina . The optics of the eye serve one purpose: to focus an image on the light sensitive retina, a paper thin layer of nerve tissue covering most of the inner surface of the eye (middle diagram at right, above). With a surface area the size of a silver dollar, and a total volume no larger than a pea, the retina is nothing like photographic film. It is actually an incredibly compact and powerful computer, a dense network of about 200 million layered and highly specialized nerve cells. About half of these are photoreceptor cells that capture the light information; the other half are secondary cells that integrate and recode the photoreceptor outputs before sending them on to the brain. The retina is also woven throughout with tiny blood vessels that nourish the continuously active retinal tissue and give it the red color we sometimes see staring back at us in flash photographs.

prereceptoral filtering in an adult eye

adapted from Packer & Williams (2003)

The visual center of the eye is marked by a slight depression in the retina less than 2mm wide, the fovea centralis (the Latin fovea means "small pit"). The fovea specializes in the perception of contrasty edges at an extremely high level of detail within a visual field approximately 1.5° wide — the width of three full moons or of a US quarter held 34 inches from the eye. The fovea is slightly displaced from the visual axis (the focal point behind the lens) away from the nasal side of each eye, which locates the fovea under the image of relatively close, centrally fixated objects.

The foveal pit is created by thinning and spreading apart the synaptic bodies, secondary cells and retinal support cells (nerves and blood vessels) that form a blanket of tissue over the photoreceptors. This enhances image clarity and partly shields the fovea from light scattered inside the eye.

Neighboring photoreceptors throughout the retina, but especially in the fovea, interact with each other to interpret color and contrast in the optical image, via secondary cells with "colorful" names such as the midget and parasol ganglion cells. These connector cells group cones and rods into center/surround receptive fields that sharpen edges and contrast based on the relative proportions of stimulation received by all the cells in a group. Finally, the secondary cells transform outputs from the three classes of cone into contrasting opponent channels of color and luminance information. These processing steps occur in the retina, rather than in the brain, because the transformed opponent signals can be transmitted through the optic nerve with much less "noise" or error than the individual cone outputs.

Signals from the secondary cells are transmitted through individual nerve tracts that are bundled together as the optic nerve, which exits the eye (along with internal blood vessels) through a hole in the retina and sclera. This creates the optic disc or blind spot, a point in the visual field where there are no photoreceptors.

recognizing the blind spot

cover your left eye with one hand, then move your head toward and away from your computer screen, at a distance of about 12 to 14 inches, while staring at the "X" at left; the blind spot will cause the black dot at right to disappear from view

The blind spot occurs in the peripheral visual field, where visual acuity is very poor. It is not normally noticeable because the mind fills in and completes forms we glimpse in peripheral vision, and in this process fills in the vacant area: outside the foveal field, vision is more of a cognitive construction than an optical report.

Photoreceptor Cells . Vision begins at the third level of scale, the photoreceptor cells. These light receptors are easily the most complex sensory cells we have. There are two basic types (illustrated above): the roughly 100 million rods adapted for dim light and night vision, and the 6 million or so cones that perceive daylight luminance, contrast and color. The cones in turn come in three types or spectral classes, discussed below.

The names rod and cone originated in early anatomical studies, when it was noted that the outer segments of the cell types tended to have a cylindrical or conical shape. In fact most photoreceptors are difficult to classify by appearance alone and their form changes depending on their location in the retina.

Both cell types have essentially the same structure. The cell body contains the nucleus and metabolic functions, which supports an outer segment containing around 1000 separate layers of fat molecules (formed as separate disks in the rods or as folds of a single membrane in the cones); embedded in each layer are up to 10,000 light sensitive photopigment molecules. An inner segment between the cell body and outer segment continually produces new photopigment molecules and passes them into newly formed layers of the outer segment. The layers slowly migrate from the junction with the inner segment to the tip of the outer segment: the photoreceptors grow continuously, like hair. At the opposite end of the cell, electrical impulses created when light strikes the photopigments are sent from the synaptic body to the retina's neural network.

All photoreceptor cells are arranged with the tips of the light sensitive outer segments resting against a thin pigment epithelium covering the inner surface of the choroid. The choroid's dense black pigment prevents light from reflecting back into the photoreceptors, and its smooth surface allows even alignment of the receptor tips where an image can be precisely focused. The cells of the pigment epithelium also break down the oldest layers of the outer segment as these grow into it.

To preserve the alignment against the eye wall, the irregular and relatively thick layers of photoreceptor cell bodies, secondary cells, nerve connections and retinal blood vessels are layered over the cones rather than underneath them. We don't see the shadows cast by this tissue layer because it never moves, and constant stimuli are completely filtered from images by the brain. (The reason we can see motes in our eyes is because these float in the vitreous humor.)

ophthalmologist's view of the back of the eye (fundus)

The distribution of photoreceptor cells varies considerably from the center to edge of the retina and is different for cones and rods (diagram at right). The central fovea consists of the foveola, roughly 0.35° visual angle (0.1 millimeters) wide, containing a central bouquet of just two classes of cones (L and M, described below), shown as a red spike in the diagram; roughly half of all L and M cones in the retina are packed into the foveal area.

Compared to cones in the peripheral retina, the foveal cones are extremely thin and elongated — they might better be called "straws" than cones (diagram right). The thinness makes possible an extremely dense, approximately hexagonal packing of cones (side by side), and eases somewhat the need for precise focusing in depth (front to back). There are anywhere from 160,000 to 250,000 cones per square millimeter in the central fovea — an average cell spacing of about 2.5 to 2 microns, which equals the eye's maximum possible optical resolution. This density rapidly thins to about 50,000 cones per square millimeter at the foveal border (roughly 1° visual angle or 0.3 mm from the center), then declines steadily to about 5,000 cones per square millimeter in the retinal periphery.
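The density and spacing figures above are consistent with ideal hexagonal packing, which this sketch checks (an idealization; real cone mosaics are somewhat irregular):

```python
import math

def hex_spacing_microns(density_per_mm2: float) -> float:
    """Center-to-center spacing for ideally hexagonally packed cells,
    given an areal density in cells per square millimeter.
    For hexagonal packing, density = 2 / (sqrt(3) * spacing^2)."""
    spacing_mm = math.sqrt(2.0 / (math.sqrt(3.0) * density_per_mm2))
    return spacing_mm * 1000.0

central = hex_spacing_microns(200_000)  # ~2.4 microns, within the quoted 2-2.5
border = hex_spacing_microns(50_000)    # ~4.8 microns at the foveal border
```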

The cylindrical length of the foveal cone outer segment is aligned exactly parallel to the direction of incoming light from the center of the lens, which increases the probability that an incoming photon will strike one of the 10 million or so photopigment molecules each photoreceptor contains. As a result the foveal photoreceptors are over four times more sensitive to light arriving from the center of the pupil opening than obliquely from the outer edge (the Stiles Crawford effect). This mutes the effect of random light scattering inside the eye or between cones, and somewhat reduces the sideways "blue" chromatic aberration in the image. In addition, the photoreceptor cell walls appear to act as a waveguide, channeling light waves along the cell length.

The third class of daylight photoreceptor, the S cones, is completely absent from the central fovea but rapidly increases toward the foveal border, until it reaches a peak density of over 2000 cones per square millimeter at about 1° visual angle (0.3 mm) from the foveal center. Then the density of S cones declines to about 500 per square millimeter in the peripheral retina.

All the peripheral cones become fatter and less densely packed, and the spacing between them increases, along the sides of the eye. This reduces optical resolution and light sensitivity where the curved surface of the retina makes it impossible for the lens to provide a crisply focused image. The cells respond instead to movement and to luminance or color contrast; our mind supplies the appearance of peripheral shapes in space.

The dim light sensitive rods are excluded from the fovea (out to about 1° visual angle from the center), which causes roughly a 2° wide central blind spot under very low light levels. (This can be seen if you try to read a newspaper headline under moonlight.) However the rods are closely packed between the extrafoveal cones, and this packing reaches a peak density of up to 150,000 rods per square millimeter in a ring at about 5 mm (17° visual angle) from the center of the fovea. (This can be seen at night by looking to one side of a very dim star or nebula; the object becomes significantly brighter.) The density of rods then declines steadily to about 80,000 per square millimeter in the extreme periphery. The density of both rods and cones is slightly higher on the nasal side of the retina, to provide enhanced peripheral vision. The diagram also shows the location of the blind spot where both rods and cones are absent.
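The conversions between visual angle and retinal distance used in this section follow from simple eye geometry. This sketch assumes a posterior nodal distance of about 17 mm, a common textbook approximation:

```python
import math

NODAL_DISTANCE_MM = 17.0  # assumed posterior nodal distance of the eye

def visual_angle_to_retinal_mm(degrees: float) -> float:
    """Distance on the retina subtended by a given visual angle."""
    return NODAL_DISTANCE_MM * math.tan(math.radians(degrees))

foveola = visual_angle_to_retinal_mm(0.35)   # ~0.1 mm, as quoted earlier
rod_ring = visual_angle_to_retinal_mm(17.0)  # ~5 mm, the peak rod density ring
```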

Photopigment Molecules . In the outer segment of the photoreceptor cells is the fourth and smallest level of scale, the photopigment molecules. These are the actual transducers of light, the mechanism that translates light energy into biological response.

Each photoreceptor generates a baseline signal through the continuous transport of sodium ions (Na+) out of the inner segment of the cell and the import of potassium ions (K+) from outside. At the same time, sodium ions can enter the outer segment via small pores. The resulting ion imbalance produces a small, steady electrical potential of about –40 millivolts across the cell body when it is not exposed to light. As a result, the rods and cones produce a continuous signal (the dark current) even when they are not stimulated. To create a nerve impulse, light reduces this baseline photoreceptor current.

distribution and size of
photoreceptors in the retina

left eye shown; (top) cone and rod densities by perimetric angle from the optical axis; (bottom) relative dimensions of cones and rods
(from Wyszecki & Stiles, 1982)

Each photopigment molecule consists of two parts: a short chromophore (light sensitive) molecule derived from vitamin A (retinal) and a gangly protein backbone (opsin) consisting of seven helical structures joined as a long string. The helices are folded into a single fatty layer of the outer segment, forming a gap; the chromophore is able to fit inside and attach to this opening because it assumes a specific molecular shape (11-cis retinal). This complex of retinal and opsin molecules is generically called rhodopsin (right).

When the chromophore is struck by a photon of the right quantum energy or wavelength, it instantly changes shape (a photoisomerization to all-trans retinal) which detaches it from the backbone. As a result the opsin molecule also changes shape and in its new form acts as a catalyst to other enzymes in the outer segment, which briefly close the nearby sodium ion (Na+) pores of the cone outer membrane. This changes the baseline electrical current across the cell body, which alters the concentration of a neurotransmitter (glutamate) between the cone synaptic body and the retinal secondary cells. As the number of absorbed photons increases, the secretion of neurotransmitter decreases.

Rhodopsins strongly absorb light, which gives them a characteristic dark, opaque color called visual purple. The photoisomerization causes a bleaching from purple to transparent yellow, which allows light to pass deeper into the outer segment and strike photopigment molecules below. After photoisomerization, the pigment is regenerated or reassembled from the bleached components. Less than 1% of the total amount of bleached photopigment in a cone is regenerated each second yet even during daylight, only about 50% of the photopigment is bleached at any time.

This is the transduction process by which the retina translates light energy into nerve impulses: light strikes the chromophore, which detaches from the photopigment, which transforms the backbone, which causes an enzyme chemical cascade, which changes the ion permeability of the cell walls, which alters the electric charge of the cell, which alters its baseline synaptic activity, which changes the pattern of activity among secondary cells in the retina. The entire sequence, from photon absorption to nerve output, is completed within 50 microseconds (millionths of a second).

At the fovea, human vision performs near the theoretical limits for an optical system. A person with normal eyesight under daylight illumination can distinguish the separate lines in a grating of black and white lines 1mm wide viewed from a distance of 7 meters. The finest camera of similar aperture and focal length cannot do any better.
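The grating example corresponds to the classic one-arcminute resolution limit, which a quick calculation confirms:

```python
import math

line_width_m = 0.001   # one 1 mm black or white line
distance_m = 7.0       # viewing distance quoted above

# Visual angle subtended by a single line, in minutes of arc:
line_arcmin = math.degrees(math.atan(line_width_m / distance_m)) * 60.0

# One full black+white cycle therefore subtends about one arcminute,
# the conventional resolution limit of the human eye.
cycle_arcmin = 2.0 * line_arcmin
```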

Outside the fovea, the eye gets by with crude optics and a yellowed, chromatically aberrated and badly focused image. These shortcomings are compensated for by the complex structure of the visual field, by continual, exploratory eye movements, and by extensive image enhancement and cognitive interpretation (including memory) applied to the flow of images from the eye. Vision requires two eyes and two thirds of our brain to function as effectively as it does.

This section relies in particular on the chapter "Light, the Retinal Image, and Photoreceptors" by Orrin Packer and David Williams in The Science of Color (2nd ed.) edited by Steven Shevell (Optical Society of America, 2003).

Three plus one light receptors

Now let's examine the unique properties of the three classes of daylight color photoreceptors, the cones, and the dim light photoreceptors, the rods.

Four Types of Photopigment . Each class of photoreceptor contains a specific type of rhodopsin, which determines how the photoreceptor responds to light. In humans (and many vertebrates) the chromophore molecule is retinal, so all photopigment differences arise in the opsin backbone. This backbone determines the ability of the photopigment, and the photoreceptor cell that contains it, to respond to light.

All animal photopigments evolved from a common opsin ancestor and are chemically very similar. However different species, and different individuals within a species, carry genes for different amino acid sequences within the basic opsin backbone. These amino acid substitutions change the sensitivity of the photopigment to light.

three types of human opsin molecules

colored circles show the amino acid differences in molecular structure from previous molecule (S is compared to rhodopsin [not shown], M to S, and L to M)

The amino acid differences in the common opsin backbone of human photopigments are shown above. They identify four spectral classes of human photoreceptors:

three dimensional model of rhodopsin

after Pogozheva, Lomize & Mosberg (1997) and Saji (2003)

• scotopic or dim light adapted rods (denoted by V' and containing the photopigment rhodopsin), most sensitive to "green" wavelengths at around 505 nm

• short wavelength or S cones, containing cyanolabe and most sensitive to "blue violet" wavelengths at around 445 nm

• medium wavelength or M cones, containing chlorolabe and most sensitive to "green" wavelengths at around 540 nm

• long wavelength or L cones, containing the photopigment erythrolabe and most sensitive to "greenish yellow" wavelengths at around 565 nm

As the figure shows, there is a large number of differences between rhodopsin (taken as baseline) and the S photopigment, and a similarly large number of differences between the S and M photopigments. In contrast, the M and L photopigments are nearly identical.

Photopigments do not catch light particles the way a bucket catches rain. Even if a photon strikes a photopigment molecule, the probability that the visual pigment will photoisomerize depends on the wavelength (energy) of the light. Each photopigment is most likely to react to light at its wavelength of peak sensitivity. Other wavelengths (or frequencies) also cause the photopigment to react, but this is less likely to happen and so requires on average a larger number of photons (greater light intensity) to occur.
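The wavelength dependence described above can be sketched with a toy sensitivity curve. The Gaussian shape and 60 nm width below are illustrative stand-ins; real photopigment curves are broader and asymmetric:

```python
import math

def relative_sensitivity(wavelength_nm: float, peak_nm: float,
                         width_nm: float = 60.0) -> float:
    """Toy Gaussian stand-in for a photopigment sensitivity curve
    (an assumption for illustration; real curves are asymmetric)."""
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

# Using the M-cone peak quoted above (~540 nm). The number of photons
# needed for an equal response scales inversely with sensitivity:
s_peak = relative_sensitivity(540.0, 540.0)   # 1.0 at the peak
s_off = relative_sensitivity(480.0, 540.0)    # weaker response 60 nm away
photons_needed_ratio = s_peak / s_off          # >1: off-peak needs more light
```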

Measuring Photoreceptor Light Sensitivity . The relationship between photopigment chemistry and light sensitivity was anticipated by 19th century visual researchers, and was demonstrated when the rod photopigment (then called visual purple) was extracted from dissected retinas and shown to bleach readily in light.

Over a century later, methods were developed to measure the bleaching of cone and rod photopigments to specific light wavelengths, which produces a relative sensitivity curve across the spectrum for each type of photopigment or cone. As the curve gets higher, the probability increases that the photopigment will be bleached (and the photoreceptor cell will respond) at that wavelength.

The photopigment absorption curves shown below were measured in about 150 intact cones from surgically removed human eyes, held in a tiny glass tube and illuminated from the side by a thin beam of monochromatic (single wavelength) light (a technique called microspectrophotometry). They closely resemble the recordings of single cones in monkey retinas and the absorption curves of genetically manufactured photopigment molecules. These curves have been normalized — the sensitivity at each wavelength is expressed as a proportion of the maximum sensitivity, which is assigned a value of 1.0.

human photopigment absorption curves

curves normalized to equal peak absorptance (1.0) on a linear vertical scale; wavelength of peak absorptance in italics; number of photoreceptors measured for each curve at base of curve; data from Dartnall, Bowmaker & Mollon (1983)

Four kinds of spectra were obtained with four distinct absorptance peaks at 420, 495, 530 and 560 nm. As expected from the photopigment molecular structure, the L and M photopigments have a similar peak and span within the spectrum, and both differ significantly from the location of the S cone curve. The fourth (rod) photopigment, rhodopsin, fits in between.
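The normalization applied to these curves is a simple rescaling, sketched here with made-up absorptance readings:

```python
def normalize_to_peak(curve):
    """Express each value as a proportion of the curve's maximum,
    so the peak equals exactly 1.0."""
    peak = max(curve)
    return [v / peak for v in curve]

raw = [0.02, 0.35, 0.80, 0.55, 0.10]   # made-up absorptance samples
norm = normalize_to_peak(raw)           # the 0.80 peak becomes 1.0
```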

The photoisomerization curves should describe color vision if each type of cone contains only one kind of photopigment and if the intensity of the photoreceptor response is proportional to the quantity of photoisomerized pigment. In fact, these curves do not correspond to human color matching responses, especially for the L cones. So we have to shift attention to the cones as they respond in an intact (living) retina.

It is impractical to measure cone responses in live human retinas, so methods have been contrived since the late 19th century to measure them by indirect methods. Within the past few decades, genetic identification of the specific opsin types expressed in individual photopigments has produced an increasingly accurate picture.

The most reliable data on cone sensitivity curves or cone fundamentals actually comes from a different experimental method, also first used in the 19th century: color matching experiments. In this approach, viewers match the color of a test wavelength by a mixture of three "primary" lights. These matches are performed by normal trichromats and by carefully screened dichromats ("colorblind" subjects) who lack one of the L, M or S photopigments entirely or carry L and M photopigments that are very similar. Differences between the dichromats allow measurement of either the L or M light response without any contribution from the other type of cone. These measurements are then used to transform the color matching functions of normal subjects into cone fundamentals that match the separate curves found in dichromats. Adjustments must also be made to compensate for the prereceptoral filtering of short wavelength light by the lens and macular pigment. Alternate techniques, including measurement of nerve signals from individual human cones or rhesus monkey cones, have been used to confirm and clarify the color matching data.

Because the cones are unequally distributed across the retina, it matters how large (visual angle) and where (centrally or peripherally) the color stimulus appears in the visual field — different presentations of color stimuli will produce different color matching functions. The standard alternatives are a 2° (foveal) or 10° (wide field) presentation of color areas centered in the field of view. Compared to 2° curves, the 10° L and M curves are elevated by 10% to 40% in the "green" to "violet" wavelengths. For reasons explained here, the 10° curves are used throughout this site.

The cone fundamentals seem to imply that cone sensitivities are fixed, like the speed of a photographic film. They are not: cone sensitivity depends on light intensity, and the curves below describe the average response under moderate levels of retinal illuminance. The absolute level of all sensitivities changes as part of light adaptation, and the relative sensitivities change during chromatic adaptation.

Five Views of the Cone Fundamentals . Let's now look at five types of presentation using recent estimates of 10 degree, quantal cone fundamentals by Andrew Stockman and Lindsay Sharpe (2000).

1. Linear Normalized Cone Fundamentals. This is the most common textbook presentation of cone response sensitivities. The response at each wavelength is shown as a proportion of the peak response, which is set equal to 1.0 on a linear vertical scale. This produces the three similar (but not identical) curves shown below.

normalized cone sensitivity functions

the Stockman & Sharpe (2000) 10° quantal cone fundamentals, normalized to equal peak values of 1.0 on a linear vertical scale

This presentation is in some respects misleading, because it distorts the functional relationships between light wavelength (energy), cone sensitivity and color perception. However, comparison with the photopigment absorption curves above identifies three obvious differences between the shape and peak sensitivity of the photopigment and cone fundamentals:

• The L cone has a noticeably broader curve than the S and M cones; the S cone has a narrower response profile than either the M or L cones.

• Compared to the photopigments, the cone peak sensitivities have been shifted toward long wavelengths, by 5 nm (L cone) to 25 nm (S cone).

• The short wavelength "tails" of the photopigment curves have been lowered so that the response below 500 nm falls toward zero. As the effects of prereceptoral filtering have been removed, this implies that increased S outputs are opposed to the L and M outputs: at short wavelengths, the S cone suppresses the L and M cone sensitivity.

Overall, human spectral sensitivity is split into two parts: a peaked short wavelength sensitivity centered on "blue violet" (445 nm), and a broad long wavelength sensitivity centered around "yellow green" (560 nm), with a trough of minimum sensitivity in "middle blue" (475 to 485 nm).

2. Log Normalized Cone Fundamentals. A problem with the linear normalized cone fundamentals is that they emphasize the overall "peak" shape of the curves; as a result they do not adequately display the "tails" or extreme low values. The solution is to present the normalized curves on a log vertical scale, as shown below. Each unit of the log sensitivity scale is 10 times smaller than the unit before, which "zooms in" on the very low sensitivities. (Cone fundamentals are most often tabulated in log normalized form, as it is easy to convert these curves into any other format.)

log normalized cone sensitivity functions

the Stockman & Sharpe (2000) 10° quantal cone fundamentals, normalized to equal peak values of 1.0 on a log vertical scale

These curves provide three additional insights:

• Each type of cone responds to a wide range of light wavelengths; in fact, the measurable sensitivity of the L and M cones extends over the entire visible spectrum, although the sensitivity of the M cone is very low in the near infrared.

• The L and M response curves largely overlap one another — and this overlap significantly limits the maximum saturation of hues in the "yellow" through "green" wavelengths.

• The S cone responds to only half the spectrum, from "yellow green" to "violet"; perception of monochromatic "yellow green" to "red" hues depends entirely on the balance between L and M outputs.
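The log presentation described above is just a base-10 transform of the linear normalized values, as this minimal sketch shows:

```python
import math

def to_log_units(normalized):
    """Convert peak-normalized sensitivities to log10 units:
    0 at the peak, and -1 for every factor-of-ten drop below it."""
    return [math.log10(v) for v in normalized]

norm = [1.0, 0.1, 0.001]    # peak, 10x lower, 1000x lower
logs = to_log_units(norm)    # approximately [0.0, -1.0, -3.0]
```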

3. Log Population Weighted Cone Fundamentals. The previous two formats imply that the different photopigments or cone spectral classes are represented in the retina in equal numbers. This is not true, which suggests the cone fundamentals should be weighted (shifted up or down in relation to each other) to more accurately represent their proportional contribution to color vision.

The peak values of each cone are set equal to the proportion of that cone in the total number of cones in the retina. The probability that a cone will respond is weighted by the probability that a photon will strike that cone type. The population proportions used here are approximately those of the 10° retinal anatomy: 63% L cones, 31% M cones, and 6% S cones. (There is reliable evidence that these proportions differ significantly from one person to the next.)

population weighted log cone sensitivity functions

the Stockman & Sharpe (2000) 10° quantal cone sensitivity functions on a log vertical scale (1.0 = total cumulative response by all three cones); the area of 50% or more optical density of the macular pigment and adult lens is shown in yellow; from Stockman, Sharpe & Fach (1999)

Taking into account both the individual response sensitivities of the three cone classes and their proportional numbers in the retina, we see that a random photon of equal energy or "white" light is most likely to produce a response in an L cone at any wavelength above 445 nm. In contrast, the M cones have only 40% of the L cone response probability across all wavelengths, and the S cones only one tenth of that. Thus, a single photon is roughly 25 times more likely to produce a response in an L than an S cone.
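Population weighting amounts to rescaling each normalized curve by that cone class's share of the cone population. This sketch uses the proportions quoted above and a made-up curve:

```python
# Cone proportions quoted above for the 10-degree retinal anatomy:
proportions = {"L": 0.63, "M": 0.31, "S": 0.06}

def population_weight(normalized_curve, share):
    """Scale a peak-normalized curve so its peak equals that cone
    class's share of the total cone population."""
    return [v * share for v in normalized_curve]

toy_curve = [0.2, 1.0, 0.6]   # made-up peak-normalized sensitivities
L_weighted = population_weight(toy_curve, proportions["L"])  # peak -> 0.63

# From the raw counts alone, a photon is about ten times more likely
# to land on an L cone than an S cone; the text's factor of ~25 also
# reflects prereceptoral filtering and sensitivity differences.
l_vs_s = proportions["L"] / proportions["S"]
```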

4. Linear Population Weighted Cone Fundamentals. The log scale is useful to show the very low values of cone fundamentals, in the "tails" of the curve, but it gives an unfamiliar view of the overall shape of the population weighted curves. Represented on a linear scale, the curves reveal the response probabilities more directly.

population weighted linear cone sensitivity functions

the Stockman & Sharpe (2000) 10° quantal cone fundamentals on a linear vertical scale, scaled to reflect L, M and S cone proportions in the retina (1.0 = total cumulative response by all three cones)

This is probably the most accurate picture of the proportional response probabilities of the three cone classes in relation to each other, to different wavelengths of light, and to our overall visual acuity. We glean a few more insights:

• The L and M cones produce nearly all the information acquired by the retina; the L cones account for most of the retinal signal at nearly all wavelengths. When we recall that the fovea contains half the total number of L and M cones in the retina, the curves indicate the dominant importance of foveal responses in color vision.

• The linear scale emphasizes that each cone responds primarily to light near its wavelength of maximum sensitivity (the three white lines in the spectrum band). A single photon at a cone's peak sensitivity has a visual impact equivalent to 10,000 or more photons at the "tails" or low sensitivity ends of the curve.

• As a result, our eyes are most sensitive to "yellow green" wavelengths around the middle of the spectrum: yellows and greens are the most luminous colors in a prism spectrum or rainbow. In fact, most of our light sensitivity lies between 500 nm and 620 nm — roughly from "blue green" to "scarlet".

• The sensitivities of the L and M cones are well contrasted through the "red" to "yellow green" parts of the spectrum, but become very similar through the "blue green" to "blue violet" parts of the spectrum. The S cones break the tie in the short wavelength part of the spectrum.

5. Equal Area Cone Fundamentals. The population weighted curves are a physiological representation of visual sensitivity: the cones are weighted by their proportional numbers in the retina. But individual cone outputs have unequal importance or "voting strength" in determining color sensations because they flow into common pathways or channels of color information. These channels have a weight or importance of their own, which defines the perceptual importance of the cone classes in brightness and color perception.

A plausible assumption adopted in colorimetry is that each type of cone contributes equally in the perception of a pure white or achromatic color. This means that the cone fundamentals do not represent individual photoreceptors, but classes or types of cones as a group. Each class is given equal perceptual weight in the visual system.

In this type of display, the peak of each sensitivity curve is scaled up or down so that the area under each curve (equivalent to the total response sensitivity of each type of cone, pooled across all cones) is the same; these curves are usually presented on a linear vertical scale.

equal area cone sensitivity functions

the Stockman & Sharpe (2000) 10° cone fundamentals rescaled so that the area under each curve equals 10 on a linear scale

These equal area cone sensitivity curves have an important and specific technical role in colorimetry: to model the changes in cone sensitivities that occur in each class of cone during chromatic adaptation.

This equal area presentation of cone fundamentals should not be confused with the most commonly used model of visual sensitivity based on equal area weighting: the curves of standard color matching functions. These are superficially similar to cone fundamentals, and can be derived from the cone fundamentals by a mathematical transformation, but they do not represent specific photoreceptors or color channels in the visual system.

The large differences in peak elevations (especially when compared to the population weighted cone fundamentals) imply that the S cone outputs must be heavily weighted in the visual system, far out of proportion to their numbers in the retina. This turns out to be true. In addition, the proportionally small overlap between the S cone curve and the L and M curves implies that short wavelength ("violet") light is handled as a separate chromatic channel and is perceptually the most chromatic or saturated.

We might also suspect that the L and M cones have a different functional role in color vision from the S cones, because they have very similar response profiles across the spectrum and lower response weights than the S cones. And this also is true: the L and M cones are responsible for brightness/lightness perception, provide our sharpest visual acuity, and respond more quickly to temporal or spatial changes in the image; the S cones contribute primarily to chromatic (color) perceptions.

Photopic & scotopic vision

The separate L, M and S cone fundamentals do not directly answer a more basic question: what is the eye's overall sensitivity to light? How much radiant power must a light emit before we can see it? The answer still depends on the wavelength of the light, but it also depends on the total intensity or energy of the illumination — the difference between daylight and darkness.

Daylight (Photopic) Sensitivity. At illumination levels above 10 lux or so, corresponding to daylight levels of illuminance from twilight to noon daylight, the cones primarily define the luminance or brightness of a light or surface. This is photopic vision, and it is functioning whenever we see two or more different hues.

Photopic sensitivity was one of the first visual attributes that 19th century psychophysicists attempted to measure. A plausible early approach was heterochromatic brightness matching, in which viewers adjust the brightness (radiance) of a monochromatic (single wavelength) light until it visually matched the brightness of a "white" light standard; after this was done across the entire spectrum, it yielded a curve of overall light sensitivity. Unfortunately, colored light is not perceived the same as "white" light: hue purity increases the sensation of brightness, making saturated lights appear brighter than a "white" light of equal luminance.

Various methods have been tried to get around the problem. The most reliable is flicker photometry, which cancels the chromatic component in a monochromatic light by flickering it on and off so rapidly that it appears to fuse into a steady, desaturated, half bright stimulus, which is then adjusted to the "white" light standard. The diagram shows results from six different measurement techniques, listed in the key in descending order of reliability, to show the extent of the problems.

a passel of photopic sensitivity functions

luminous efficiency measured using six different techniques (from Wyszecki & Stiles, 1982)

The underlying problem, it turns out, is not in the measurement but in the theory: the diagram is not a picture of measurement problems but of visual adaptability. The brightness sensation is a dynamic response by different types of photoreceptors adjusting to different visual contexts. Overall light sensitivity varies across different luminance levels, light mixtures and dominant colors; it varies depending on how it is measured. So it cannot be pinned down as a single curve, as can be done with the four types of photoreceptors.

Even so, the curve is useful in many practical applications. So the international standards body for color measurement methods, the Commission Internationale de l'Eclairage (or CIE), cut the Gordian knot and adopted a photopic sensitivity function [denoted V(λ), which means "a curve of the luminous efficiency value V at each wavelength λ"]. This curve was based on early (up to 1924) sets of diverse 2° color matching functions, weighted to reproduce, at the corresponding wavelengths, the apparent brightness of the three rgb primary lights used in the color matching studies. This standard curve locates the main light sensitivity in the "green" center of the spectrum between 500 nm and 610 nm, and places the peak photopic sensitivity at 555 nm — though as you see above, this peak is more of a plateau.

The V(λ) luminosity function equivalently represents the relative luminous efficacy of radiant energy, the light stimulating power of equal watts at each wavelength. In this guise it is the basis of modern photometry as deployed in photographic light meters and digital camera image sensors. An electronic sensor measures the radiance within small sections of the visible spectrum, then weights each section by its luminous efficacy; the total across the spectrum matches to a good approximation the light's luminance or apparent brightness to the human eye when the light is viewed in isolation. Here is the curve on a log vertical scale, with its partner the scotopic sensitivity function [denoted V'(λ) and discussed in the next section].

photopic & scotopic sensitivity functions

CIE 1951 scotopic luminous efficiency and CIE 1964 wide field (10°) photopic luminous efficiency, relative to peak photopic sensitivity on a log vertical scale; relative peak sensitivities from Kaiser & Boynton (1996)
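The photometric weighting described above (spectral radiance weighted by luminous efficacy) can be sketched in a few lines. The V(λ) samples below are rounded values of the CIE photopic curve on a coarse 50 nm grid, for illustration only:

```python
# Luminance as luminous-efficacy-weighted radiance: Y = 683 * sum(P(wl) * V(wl)).
# The V values are rounded approximations of the CIE photopic curve.
V = {450: 0.038, 500: 0.323, 550: 0.995, 600: 0.631, 650: 0.107}

def luminance(spectrum_w, efficiency=V):
    """spectrum_w: dict of wavelength (nm) -> radiant power (W) in that band."""
    return 683.0 * sum(p * efficiency.get(wl, 0.0) for wl, p in spectrum_w.items())

# One watt at 550 nm appears far brighter than one watt at 650 nm:
print(luminance({550: 1.0}))   # about 680 lm
print(luminance({650: 1.0}))   # about 73 lm
```

This is only the skeleton of what a light meter does; a real instrument integrates over a much finer wavelength grid.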

Unfortunately, photopic sensitivity has remained a moving target. All measurements include the prereceptoral filtering in brightness judgments, which complicates measurement in the "blue" and "violet" wavelengths. Subsequent research also showed that the CIE 1924 curve underestimated photopic sensitivity in the "blue" wavelengths, so the curve has been twice corrected — by Judd (1951) and Vos (1978). This modified Judd-Vos luminosity function is usually denoted VM(λ) — M for modified.

Meanwhile a consensus emerged that the S cones do not contribute substantially to brightness perception. This means the photopic luminosity function is more accurately defined as a weighted combination of the L and M cone sensitivity curves. Paradoxically, when more realistic corrections for lens and macular density are added, these newer estimates increase even further the estimated photopic sensitivity to short wavelength light, despite exclusion of the S cone from the curve.

The diagram below presents the updated 2° luminosity function [denoted V*(λ)] and the companion 10° luminosity function by Stockman & Sharpe, normalized on a linear scale. The modified 1978 Judd-Vos curve [VM(λ)] and the 1951 scotopic curve [V'(λ)] are included for comparison. The new curves are based on the Stockman & Sharpe L and M cone fundamentals that best fit flicker photometric data for 40 viewers; the L cones have a 50% greater weight than the M cones, and the S cones are given zero weight. These curves put the peak photopic sensitivity at 545 nm, but with a flattened peak due to the averaged values of variant L photopigments.
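As a sketch of this weighting scheme (the 1.5 : 1 L-to-M weight and the zero S weight are from the text; the 500 nm cone values are the normalized fundamentals quoted later on this page, and the result is left unnormalized):

```python
# Photopic luminosity as a weighted sum of cone responses: L cones weighted
# 50% more than M cones, S cones given zero weight, per the Stockman & Sharpe
# fit described above.
def luminosity(L, M, S, l_weight=1.5, m_weight=1.0):
    return l_weight * L + m_weight * M + 0.0 * S

# Normalized cone fundamentals at 500 nm (values quoted later on this page):
v_500 = luminosity(L=0.44, M=0.64, S=0.09)
print(v_500)  # roughly 1.3 (unnormalized; the published curve is rescaled to peak at 1.0)
```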

photopic & scotopic luminosity functions

photopic functions based on the 2° and 10° quantal cone fundamentals of Sharpe, Stockman, Jagla & Jägle (2005), shown with the CIE 1951 scotopic function and the Judd-Vos 1978 photopic function; all curves normalized to equal peak sensitivity on a linear vertical scale

These curves show that short wavelength or "blue" light has a greater luminous efficiency under scotopic than photopic vision. They show that differences among the photopic curves are confined almost entirely to the short wavelengths, and that there is a greater "blue" response in wide field than foveal color perception. Finally, they show that long wavelength or "red" light strongly stimulates the photopic function but the scotopic very little. This is why submariners and astronomers dark adapt under red light: it keeps the foveal cones functional (for detail vision) while dark adapting the rods. Both cones and rods respond near the maximum to "green" wavelengths, regardless of luminance. This is why emergency response vehicles are now painted a light yellow green instead of the traditional red — the yellow green is much easier to see, especially in dim light.

Dim Light (Scotopic) Sensitivity. The diagrams above also show the CIE 1951 scotopic sensitivity function [denoted V'(λ)], which describes human light sensitivity under dark adaptation at illuminances below 0.1 lux. This is approximately the amount of light available under a half moon at night.

At these very low illuminance levels the cones cannot respond to light and the rods completely define our visual experience. There are seven peculiarities of rod based or scotopic vision:

• All rods contain the same photopigment, rhodopsin, so rods lack the photopigment variety necessary for color vision. We become functionally colorblind except for isolated points of higher luminance, such as distant traffic lights or the planet Mars, that are bright enough to stimulate the cones.

• Rod signals do not transmit within separate nerve pathways: they feed into the same channels used by the cones, which are organized into opposing red/green and yellow/blue contrasts. As a result, rods can produce faint color sensations — at very low chroma and lightness contrast.

• Peak sensitivity is around 505 nm or "blue green". As natural light around sunset is shifted toward red, the rods start to dark adapt while still exposed to long wavelength ("red") light.

• However, rod sensations of "white" usually appear faint blue (matching about 480 nm) and not blue green.

• There are roughly 100 million rods in the human retina, yet they are completely absent from the fovea at the center of the visual field where daylight visual resolution is highest. We cannot read even large print text under scotopic vision, or recognize very small objects, because the fovea shuts down.

• There are about 16 rods for every cone in the eye, but there are only about 1 million separate nerve pathways from each eye to the brain. This means that the average pathway must carry information from 6 cones and 100 rods! This pooling of so many rod outputs in a single signal considerably reduces scotopic visual resolution and means, despite their huge numbers, that rod visual acuity is only about 1/20th that of the cones.

• Like the cones, the rods are more widely spaced and larger in diameter toward the edges of the retina, but they also form a densely packed ring at about a 20° visual angle around the fovea. This is why we can see very faint stars or lights at night if we look to one side, rather than directly at, their location.
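The convergence figures in the list above can be checked with back-of-envelope arithmetic:

```python
# Convergence of photoreceptors onto optic nerve fibers, using the round
# figures quoted above.
rods = 100_000_000
cones = rods / 16          # about 16 rods for every cone
pathways = 1_000_000       # separate nerve pathways per eye

rods_per_pathway = rods / pathways    # 100 rods per pathway
cones_per_pathway = cones / pathways  # about 6 cones per pathway

print(rods_per_pathway, round(cones_per_pathway))  # 100.0 6
```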

Mesopic Light Sensitivity. The rods strongly affect color perception under moderately low illumination by mixing with or tinting the color responses of the still active cones. This mesopic vision typically appears in illuminances from 0.1 to 10 lux, for example during the 45 minutes or so after sunset (for a viewer outdoors and shielded from artificial light).

Because the rod and cone outputs are pooled in shared nerve pathways in mesopic vision, the photopic luminosity curve shifts toward blue and long wavelength hues become darker. Bright yellow, ochre and umber fuse into a single grayish tan, greens and blues appear as a single "grue" (green blue) color, and reds become a warm, dark gray. This Purkinje shift (named for the Bohemian scientist who described it in 1825) is easiest to see in large areas of color extending outside the visual field of the rodless fovea. It is quite noticeable if you look at a familiar, brightly colored art print, hawaiian shirt or flower bed in fading twilight as your eyes become adapted to the dark.

In daylight illumination the rod signals are near maximum, which causes a ceiling effect that makes the rods insensitive to light contrast. (This is due to a response compression of the rod outputs, and not to photopigment depletion by light.) The rod signals therefore disappear in the same way cone outputs do in a ganzfeld effect.

Even so, rods remain active in daylight, even under very bright levels of illuminance. They can affect color appearance by a process called rod intrusion, which desaturates colors, especially at long ("red") wavelengths, in large field or peripheral vision, and under moderate to low levels of illumination.

The Color & Vision Research Laboratories at UC San Diego provide a comprehensive online library of data relating to photopigments, cone fundamentals, colorimetry and visual responses.

Trichromatic mixtures

The premise that millions of distinct colors can arise from the stimulation of three different color receptors is called the three color or trichromatic theory of color vision, first proposed in the 18th century. It is the foundation of modern colorimetry, the prediction of perceived color matches from the physical measurement of lights or surfaces.

The cone sensitivity curves show the probability that individual L, M or S cones will respond to light at different wavelengths. But they do not offer a very clear picture of how the cones work together or how the mind "triangulates" from the separate cone outputs to identify specific colors.

We obtain this picture by charting the proportion of cone outputs in the perception of a specific color. This results in a literal triangle, the trilinear mixing triangle, that contains all possible colors of light.

Any diagram that shows how the L, M and S cones combine to create color perceptions must also define a specific geometry of color. This geometry changes, depending on how the cone signals are combined. The relationships among cone outputs, the method for calculating the outputs that produce a specific color sensation, and the "shape" of color in the mind, are all aspects of the same problem.

Principle of Univariance. A key feature of photoreceptor signals is that they represent light as a contrast or change to a continuous baseline signal of about –40 millivolts, maintained even in darkness (hence the name dark current). A change in photoreceptor excitation is transmitted as a "more" or "less" change in this baseline signal. This single type of photoreceptor response to any and all light stimulation is termed the principle of univariance.

Now, the rate of photoisomerization in the photopigment depends on two completely separate dimensions of the light stimulus: (1) the quantity of light incident on the retina and (2) the relative sensitivity of the photopigment to the light wavelength(s).

As a result, a change in the photoreceptor signal can be caused by two very different changes in the light. The cone or rod output decreases as the light gets brighter or as the light wavelength gets closer to the wavelength of its peak sensitivity, and the output increases as the light becomes dimmer or moves farther from its peak sensitivity.

the principle of univariance

a single L cone responds with the same "more" or "less" signals to changes in light wavelength or light intensity

Thus there are two kinds of ambiguity in the response of individual photoreceptors to light (diagram, above):

• A single type of cone cannot distinguish changes in wavelength (hue) from changes in radiance (intensity). Alternating between equally bright "green" and "red" wavelengths (B), or modulating a single "green" wavelength of light between bright and dim (C), will produce an identical change in the output of a solitary L cone.

• Some changes in light produce no cone response. Alternating between equally bright "blue" and "orange" wavelengths (A), or between a dim "green" and proportionately bright "red" light (D), would not change the output of that solitary L cone.

However, what the cones cannot do individually they can achieve as a team. The principle of univariance means that color must be defined by the combined response of all three cone types.
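A minimal numeric sketch of univariance, assuming a cone's output is simply intensity × sensitivity (the sensitivity values for this hypothetical L cone are invented for illustration):

```python
# Univariance sketch: a photoreceptor's response depends only on the product
# of light intensity and its sensitivity at that wavelength, so different
# (wavelength, intensity) pairs can be indistinguishable to a single cone.
l_sensitivity = {530: 0.8, 570: 1.0, 610: 0.5}  # hypothetical L cone values

def response(wavelength, intensity):
    return intensity * l_sensitivity[wavelength]

# A dim light at peak sensitivity and a brighter light off-peak produce
# identical outputs -- the cone alone cannot tell them apart:
a = response(570, 0.5)   # "yellow green" at half intensity
b = response(610, 1.0)   # "orange red" at full intensity
print(a == b)            # True
```

Only by comparing this cone's output with the outputs of the other two cone classes can the visual system separate wavelength from intensity.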

The Cone Excitation Space . To visualize the color creating relationships among the separate L, M and S cones, they are used to define a three dimensional space. In this cone excitation space each dimension represents the separate and independent excitation or outputs produced in each type of cone.

The standard method to illustrate cone behavior is to combine the three cone responses produced by monochromatic lights from short (390 nm) to long (750 nm) wavelengths. This is done by plotting the cone fundamentals at each wavelength as points in the cone excitation space.

For example (diagram, right), at a wavelength of 500 nm, the L cone sensitivity is 0.44, M is 0.64 and S is 0.09. Those three numbers locate the combined cone excitation to monochromatic light at 500 nm. When similar points are plotted for all visible wavelengths, they define a curved path of cone excitations to monochromatic (maximally saturated) lights called the spectrum locus.

a normalized cone excitation space

the spectrum locus (red dots) plotted in three dimensions defined by the normalized cone fundamentals L, M and S; V is the photopic luminous efficiency function

We still can recognize in this curve the basic features of the normalized L, M and S cone fundamentals. All points at wavelengths below about 400 nm or above about 700 nm lie at or near the origin (0 on all dimensions), which means those wavelengths are effectively invisible — they produce no cone excitations. The L cone reaches its maximum response at around 565 nm, the M cone at around 540 nm, and the S cone at around 445 nm. But now we see them in dynamic combination. The V* luminosity function (green line in diagram) is the sum L+M of the normalized L and M outputs, so it forms a diagonal from the origin in the L,M plane. The contrast between the L and M outputs (L–M) forms the opposing diagonal.

We can also find the location of any complex color, if we know the cone excitations it produces. The white point (wp) produced by an equal energy illuminant is found as the total area under each cone fundamental divided by the sum of all three fundamental areas: L = 0.44, M = 0.37, and S = 0.19. The extraspectral mixtures of "red" and "violet" extend as a line between the "red" and "violet" lights used to mix them — in the diagram, between 620 nm and 445 nm.

reading cone responses to a 500 nm monochromatic light

If we turn this diagram to look at it sideways, we see that the boundary of color space is geometrically irregular. No simple geometrical shape can describe the spectrum locus. It forms a roughly elliptical outline when viewed from one side (diagram, right) but a double lobed or pinched shape when viewed along the luminance diagonal (diagram, above). This double lobed shape reappears in the spectrum locus of color models, such as CIELAB or CIECAM, that contrast colors with a white surface.

This double lobed shape occurs because the spectrum locus is bent at a 90° angle along a line from the origin to approximately 525 nm ("middle green", purple line in diagrams above and right). The vertical half, comprising all wavelengths below 525 nm that produce a significant S cone response, sticks upward like a shark fin. The rest of the spectrum locus — wavelengths above 525 nm where the S cone response is effectively zero — lies completely flat on the L,M plane.

As a result of this bend, the color space is inherently curved. For example, mixing the wavelengths 575 nm and 475 nm in the right proportions will produce a "white" light. Therefore the mixing line between them must pass through the achromatic white point: but this can only be done with a curve (diagram, right). Moreover, the shape of this curve changes as we mix different pairs of complementary wavelengths. The curvature of the chromaticity plane stretched inside the closed spectrum locus is geometrically irregular, too.

The diagram demonstrates how the L and M cones operate in tandem to define luminance. For lights below 525 nm, luminance is defined by the projection of the spectrum locus into the L,M plane (dotted line in diagram, right). But if we set aside luminance perception defined by the L+M diagonal, then color perception is divided into two parts:

• at wavelengths above 525 nm, changes in the relative excitation of the L and M cones define the color response; the S cones are silent.

• at wavelengths below 525 nm, the relative L,M excitations are approximately the same as they are at 525 nm (the dotted line and purple line are equivalent) so it is the relative excitation of the independent S cone that defines the color response.

This two part geometry is the photoreceptor foundation for the opponent geometry of color appearance.

A final observation is that the white point is not located on the luminosity function. This simply demonstrates that white is not the same as bright. The perception of white is a form of color sensation, whereas the perception of bright is a unique intensity sensation. The cone excitation space implies that a "bright" stimulus produces more than two times the cone excitation of a "white" surface, and therefore visual "white" always has a lower luminosity than visual "bright" under the same viewing conditions.

The Chromaticity Plane . Reweighting the cone fundamentals, for example by doubling the M cone response or by using equal area or population weighted cone fundamentals, changes the relative length of the dimensions but does not alter the fundamentally curved and irregular geometry of the cone excitation space.  

However, we get a radically different color space through a different approach. By removing variations in the brightness of different wavelengths, we flatten the curvature of the three dimensional spectrum locus. This is done by normalizing on the total cone excitation, or dividing the excitation in each cone by the stimulation produced in all cones:

brightness (B) = L + M + S

If we divide the excitation produced in each cone type (Lc, Mc and Sc) by this amount, we get the relative proportion of the color sensation that is separately contributed by each of the three cone types. This is the chromaticity of the color:

chromaticity (C) = L/B, M/B, S/B.

The previous example only described a single wavelength of light, so we simply sum the cone excitations at that single "blue green" wavelength:

brightness (λ=500) = 0.44 + 0.64 + 0.09
= 1.17
chromaticity (λ=500) = 0.44/1.17, 0.64/1.17, 0.09/1.17
≅ 0.38, 0.55, 0.08

Note that the standard of brightness here is panchromatic, because it includes all three cones. We do not divide by the luminosity function (V), which is a weighted sum of L and M without the S cones.
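The brightness and chromaticity calculations above reduce to a few lines of code; the cone values are those of the 500 nm worked example:

```python
# Chromaticity as each cone's share of the total cone excitation,
# following the definitions brightness B = L + M + S and
# chromaticity C = (L/B, M/B, S/B).
def chromaticity(L, M, S):
    B = L + M + S                  # panchromatic "brightness" normalizer
    return (L / B, M / B, S / B)

# Normalized cone fundamentals at 500 nm, from the worked example:
l, m, s = chromaticity(L=0.44, M=0.64, S=0.09)
print(round(l, 2), round(m, 2), round(s, 2))  # 0.38 0.55 0.08
```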

side view of the cone excitation space

This procedure radically transforms the color space in two ways (diagram, right): (1) it uncouples the "red" and "violet" ends of the spectrum locus, which were previously joined at the origin (zero values) because they are both very "dark" hues; (2) it projects the spectrum locus onto the plane surface of an equilateral triangle (blue) whose corners are located at the three maximum values for the three cones. The three dimensional spectrum locus has been flattened into two dimensions, and the original banana shaped color space has been transformed into something resembling a right triangle. If we define the L, M and S dimensions as the response of each cone relative to its maximum response, then a line from the origin to the white point (wp) forms an achromatic gray scale.

It is customary to display this transformed spectrum locus so that the triangle plane is perpendicular to view. In this orientation it forms a trilinear mixing triangle or a Maxwell triangle, after the 19th century Scottish physicist James Clerk Maxwell, who first used it.

The trilinear mixing triangle does not represent differences in brightness or lightness between colors, only differences in chromaticity (hue and hue purity). Chromaticity is the "color" in color, separate from its lightness or brightness. For this reason, the area within the mixing triangle that is enclosed by the bowed spectrum locus (and a line connecting the extreme short and long wavelengths) is called a chromaticity diagram.

This figure offers many fundamental insights into the geometry of color and is worth patient study.

a trilinear mixing triangle and chromaticity diagram

any possible combination of three cone outputs can be represented as a unique point within the triangle; the range of physically possible colors is contained inside the spectrum locus

• Each corner of the triangle represents the color that would be perceived by the maximum excitation of a single L, M or S cone without any excitation in the other two cones. The sides of the triangle represent shared excitations between just two cone types, with no contribution from the third. Any location inside the triangle represents a color that results from the stimulation of all three types of cone.

• The mixture proportions for each cone are shown along the triangle sides. By definition every trilinear mixture must sum to 100%, so a mixture of 50% S and 38% L (color a) must contain 12% M (by subtraction: 100 − 50 − 38 = 12). Therefore just two chromaticity values uniquely specify every color in a chromaticity diagram.

• The point where all three primaries contribute in proportions equal to their perceptual weight is the white point. The white point changes its location within the chromaticity diagram depending on how the three cone outputs are weighted; the diagram shows them weighted proportional to the areas under the normalized cone fundamentals (44%, 37%, 19%).

• The chromaticities of monochromatic (single wavelength) lights define the spectrum locus, the trace of the most intense colors physically possible. The line of "red" and "blue" mixtures between 400 nm and 700 nm, which includes magenta and purple, is called the purple line.

• As explained above, spectral hues at wavelengths above 525 nm are defined only by the relative proportion of L and M outputs; they lie on the straight L/M base of the equilateral triangle. Hues below 525 nm are produced by a roughly constant ratio of L and M outputs, so are distinguished only by the relative percentage of S cone excitation.

• All mixtures of two real colors of light (such as colors a and b) define a straight mixing line across the chromaticity space. All colors produced by the mixture of those two colors of light must lie along the mixing line.

• The hue purity or chroma of a color is defined as the length of the mixing line between the color and the white point. It is obvious that monochromatic hues do not have equal hue purity: spectral "yellow" appears rather pale or whitish because it is close to the white point, and spectral "violet" has the highest chroma because it is far away.

• All mixtures outside the spectrum locus and purple line are cone proportions that cannot be produced by any physical light or surface. They are physically impossible or unrealizable colors. This gray area shows that about half of the unique combinations of cone outputs cannot be produced by any physical stimulus. It also shows that these colors — especially the "green" primary — would appear more saturated than any spectral light.

This has nothing to do with the purity of light: it is due to the overlap in cone fundamentals across the spectrum (especially between the L and M cones), and to the random, side by side mixture of L, M and S cones within the retina. As a result any light that stimulates one cone also stimulates one or both remaining types of cones. We are physiologically prevented from seeing a "pure" cone output, and therefore we never see a pure primary color.

• Very large color differences can be produced by very small differences in cone outputs. For example, the change from "white" to pure "yellow" occurs with a change in L and S cone outputs of less than 20%; all green mixtures are produced by changes of less than 30% in the L and M cone outputs.

• The chromaticity distance between unique "green" and unique "blue" (near 450 nm) is extremely large and represents almost the entire range of S cone outputs. For this reason the appearance and measurement of blue hues are sensitive to the location of the white point and are variable across different color models.

color space defined as cone excitations as a proportion of total cone excitations (brightness)

• A color appearance does not reveal the cone proportions that created it. In every green color the M cone outputs are less than 60% of the total; all possible colors contain at least 10% of the L outputs; most of the possible cone output combinations result in various flavors of blue.

• Finally, chromaticity diagrams are highly sensitive to assumptions made about how cones combine or are weighted in perception, or how the dimensions are rotated to present the chromaticity diagram to view. The two examples at right show the CIE Yxy chromaticity diagram (top), which has long been a standard in colorimetry but does not describe color differences accurately (for example, it makes "green" the most saturated spectral hue and gives it the largest perceptual area), and the CIE 1976 UCS chromaticity diagram (bottom), in which the measurement dimensions have been manipulated to represent, as accurately as feasible in a two dimensional diagram, the relative saturation of spectral hues (as their distance from the white point) and the perceptual difference between two similar colors (as the chromaticity distance between them in the diagram).
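Two geometric facts from the list above (trilinear coordinates sum to 100%, and mixtures of two lights lie on a straight mixing line) can be sketched numerically. The second light's chromaticity below is invented for illustration, and the brightness weighting of the two lights is ignored:

```python
# Trilinear chromaticity: the three proportions always sum to 1, so two
# values fix the third.
def third_coordinate(s, l):
    return 1.0 - s - l

m = third_coordinate(s=0.50, l=0.38)
print(round(m, 2))  # 0.12 -> the 12% M share of color "a" above

# A mixture of two lights lies on the straight line (convex combination)
# between their chromaticities; the result still sums to 1.
def mix(c1, c2, t):
    """Chromaticity of a mixture: t parts of c1, (1 - t) parts of c2."""
    return tuple(t * x + (1.0 - t) * y for x, y in zip(c1, c2))

midpoint = mix((0.50, 0.38, 0.12), (0.10, 0.60, 0.30), 0.5)
print(midpoint)
```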

10° (wide field) and 2° (foveal) chromaticity diagrams

The diagram above shows the change in the chromaticity diagram that occurs if cone fundamentals are weighted so that the area under each curve represents the cone populations for a 10° retinal area (which includes 6% or more S cones) or a 2° retinal area (which includes less than 1% S cones). The foveal weighting produces a substantial reduction in the "blue" response range and shifts the white point almost to the L–M border.

When using a chromaticity diagram, keep in mind that it has mathematically and rather mechanically removed the perceptual effect of brightness. Because the luminance of a "red" or "violet" monochromatic or single wavelength light appears quite dim in comparison to a "green" light, when all lights are the same radiant intensity, the "red" and "violet" lights would have to be substantially increased in power to produce a perceptual match to the chromaticity diagram. A chromaticity diagram therefore does not accurately describe, for example, the perception of relative color intensity in a solar spectrum, where the visible wavelengths have roughly similar radiance.
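The normalization a chromaticity diagram performs (each cone excitation divided by the total, so that brightness drops out) can be sketched as follows; the cone values used are illustrative only, not measured data:

```python
def chromaticity(L, M, S):
    """Express cone excitations as proportions of total excitation.

    Returns (l, m, s) with l + m + s == 1. The total (brightness)
    is divided out, which is exactly what a chromaticity diagram
    does mathematically and rather mechanically.
    """
    total = L + M + S
    if total == 0:
        raise ValueError("no cone excitation")
    return (L / total, M / total, S / total)

# Illustrative values only: a "greenish" stimulus
l, m, s = chromaticity(0.45, 0.50, 0.05)
print(round(l, 2), round(m, 2), round(s, 2))  # proportions sum to 1
```

Note that two stimuli differing only in overall intensity map to the same chromaticity point, which is why the diagram cannot describe relative color intensity.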

The interactive tutorial on color perception hosted by the Brown University Computer Science Department includes a java applet that models the additive mixture of three "primary" colors.

Constraints on color vision

To conclude this page, let's consider how our visual capabilities are adapted to perception of radiant energy in the world. By comparing our visual capabilities to those of other animals, we can understand how we benefit from three cones, rather than two or four, and why our cones are tuned to specific wavelengths and not others.

variety in chromaticity diagrams

(top) CIE Yxy diagram, with the xyz "primaries" rotated to match a maxwell triangle (bottom) CIE UCS diagram, with a primary triangle imputed

Standard Assumptions . An assumption made in most studies of animal vision is that photopigment absorption curves conform to a common shape, for example Dartnall's standard shape (right), which is plotted around the pigment's peak value on a wavenumber scale. This common shape arises from the backbone opsin molecule structure.

Variations in the opsin amino acid sequence only shift the wavenumber of peak sensitivity up or down the spectrum; this does not change the basic shape of the curve, though it becomes slightly broader at longer wavelengths. (We have already seen this template similarity in the human photopigment curves.) It is also assumed that each class of photoreceptor contains only one kind of photopigment, and that the principle of univariance describes the photopigment's response to light.

These three assumptions allow a basic understanding of animal visual systems without painstaking measurement of photopigment or cone response curves. Dartnall's standard shape does not adequately describe human color vision, especially the L photopigment absorptance, but idealized curves are adequate to illustrate the important constraints on color vision.
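As a rough illustration of these assumptions, the idealized template below uses a Gaussian on a wavenumber scale as a stand-in for Dartnall's empirical standard shape; the width parameter is an arbitrary assumption, not a fitted value:

```python
import math

def pigment_template(wavelength_nm, peak_nm, width_cm1=3500.0):
    """Idealized photopigment absorbance curve.

    The common template shape is defined on a wavenumber
    (1/wavelength) scale and is simply shifted so its peak sits at
    peak_nm; a Gaussian in wavenumber stands in here for Dartnall's
    empirical standard shape.
    """
    wn = 1e7 / wavelength_nm       # wavenumber in cm^-1
    wn_peak = 1e7 / peak_nm
    return math.exp(-((wn - wn_peak) / width_cm1) ** 2)

# Shifting the peak leaves the shape unchanged in wavenumber:
for peak in (420, 530, 560):       # roughly S, M, L peak positions
    print(peak, round(pigment_template(peak, peak), 3))  # always 1.0 at peak
```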

Monochromatic Vision . The most rudimentary form of vision requires a single receptor cell (V). For maximum sensitivity this receptor should respond to wavelengths somewhere in the 300 nm to 1000 nm region of the spectrum where solar radiance at the earth's surface is most intense.

This kind of visual system, which is common in lower vertebrates, codes light along a single luminosity dimension that can only distinguish light from dark. Monochromatic vision can include a mechanism for light adaptation that allows the eye to function across large changes in overall illumination, and it can detect movement, shapes, surface textures and depth. But it cannot easily distinguish between the emitted brightness of lights and the reflected lightness of surfaces, or separate objects from nearby or similarly reflecting backgrounds. It also cannot perceive color: changes in hue — from "green" toward either "red" or "blue" — will appear as luminance changes from light to dark. Nevertheless, all vertebrates have at least this basic visual capability, which suggests that luminance variations are the dominant visual information available from the environment.

Humans experience monochromatic vision at night, under scotopic or dark adapted vision when only a single type of photoreceptor (the rods) is active, and under monochromatic illumination such as the red light lamps used by astronomers.

a single cone visual system

response curve of human rods with maximum sensitivity at 505 nm

As only one receptor is involved, the key constraint has to do with the receptor sensitivity peak and breadth within the span of solar radiation. For purely technical reasons the peak solar radiation seems to shift in a range between roughly 500 nm and 900 nm, depending on whether the radiation is summed within wavelength or frequency intervals and is measured as energy or photon counts; and the noon sunlight curve is rather flat throughout this range. So the solar peak is a poor criterion for comparison. Instead we can consider a "window" of atmospheric transparency or minimum light filtering as measured at the earth's surface, which provides a stable frame of reference.

expressed on a wavelength scale at a peak value of 505 nm

The major causes of light absorption or scattering in our atmosphere are air molecules (including the ozone layer), dust or smoke, and water vapor. As the diagrams at right show, there is an especially close correspondence between the human visual span and the wavelengths of minimum water absorptance, including liquid and water vapor — and the large bead of mostly water, the vitreous humor, that inflates the eye and sits between the pupil and retina. Human light sensitivity is located on the "uphill" side of this lowest point, away from UV radiation and toward the infrared side of the light window. All vertebrates have inherited visual pigments that evolved in fishes, which may explain why our pigments are tuned to these wavelengths.

A second possible constraint is the range of chemical variation in photopigments, for example as expressed in all known animal photopigments. The figure below shows the wavelengths of maximum sensitivity for the four human photopigments in relation to animal photopigments with the lowest and highest peak sensitivities — from 350 nm (in some birds and insects) to 630 nm (in some fish). This puts the outer boundaries of animal light sensitivity between 300 nm to 800 nm. Human vision is in the middle of the range that other animals have found useful.

human visual pigments within span of known animal visual pigments

A third constraint has to do with the span of visual pigment sensitivity, because the sensitivity curves must overlap to create the "triangulation" of color. For Dartnall's standard shape at 50% absorptance, this implies a spacing (peak to peak) of roughly 100 nm. If we include the "tail" responses at either end of the spectrum, a three cone system could cover a wavelength span of about 400 nm.

The fourth and last constraint is more subtle but equally important: avoiding useless or harmful radiation.

• At wavelengths below 500 nm (violet and near UV), electromagnetic energy becomes potent enough to destroy photopigment molecules and, within a decade or so, to yellow the eye's lens. Many birds and insects have receptors sensitive to UV wavelengths, but these animals have relatively short life spans and die before UV damage becomes significant. Large mammals, in contrast, live longer and accumulate a greater exposure to UV radiation, so their eyes must adapt to filter out or compensate for the damaging effects of UV light. In humans these adaptations include the continual regeneration of receptor cells and the prereceptoral filtering of UV light by the lens and macular pigment.

• At the other extreme, wavelengths above 800 nm are heat, which is less informative about daylight object attributes: it is dimmer than shorter wavelengths, is heavily absorbed by liquid water or water vapor, and lacks the nuanced spectral variations that can be interpreted as color. In mammals, the visual system's heat sensitivity would have to be shielded from the animal's own body heat at wavelengths longer than 1400 nm, and the very long photopigment molecules (or artificial dyes) necessary to absorb radiation in wavelengths between 800 nm to 1400 nm are known to oxidize or decompose readily. These complications make long wavelength energy more trouble than it is worth.

On balance, then, it seems that animal vision is limited at the wavelength extremes as much as it is anchored by a radiance peak or an inherited range of photopigment possibilities.

Dichromatic Vision . How do animals utilize this limited span of light? Many mammals are equipped with a two cone photopic visual system: one cone shifted into the "yellow green" wavelengths, the other shifted toward the "blue" end of the spectrum, with substantial overlap between the two sensitivity curves in the "green" middle. These Y and B cones make up what John Mollon calls the old color system. They enable the eye to distinguish between light radiating in the long versus short wavelengths.

light within the absorptance spectrum of water

human luminous efficiency
and the transmission curve of pure water, by depth

A two cone system can distinguish differences in wavelength patterns from total luminance, which means hue can be perceived separate from lightness. The efficient way to do this is to combine the two cone responses to determine a brightness quantity, but to difference or subtract the cone outputs to define a hue contrast (diagram, right). That is, the sum Y+B creates a "supercone" that has the same univariant response to hue as the V cone, while the difference Y–B contrasts stimulation at opposite ends of the light spectrum.

The difference output is called an opponent coding of the separate cone outputs. It is difficult to overstate the importance of opponent responses in color vision, beginning with the opponent dimensions important to hue sensation but including many other contrast mechanisms discussed in a later page.
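A minimal sketch of the sum-and-difference coding just described, with made-up excitation values:

```python
def two_cone_code(Y, B):
    """Recode two cone outputs into a luminance sum and an opponent
    (difference) hue contrast. The inputs are illustrative
    excitation values, not calibrated quantities.
    """
    luminance = Y + B   # the "supercone": a univariant brightness signal
    opponent = Y - B    # y/b contrast: positive = long wavelengths,
                        # negative = short, zero = the "white" point
    return luminance, opponent

print(two_cone_code(0.8, 0.2))  # long-wavelength ("warm") light: opponent > 0
print(two_cone_code(0.5, 0.5))  # balanced stimulation: opponent = 0 ("white")
```

The same two outputs thus carry both how much light there is and which end of the spectrum it comes from, which is the essential economy of opponent coding.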

Primates have retained the backbone of this mammalian vision as the y/b opponent function (diagram, below). This opponent function is created from the outputs of two separate cone systems, which requires a bimodal shape in the overall visual response, with a sensitivity peak in the short and long wavelengths (diagram, right). The "white" point of this function, where the Y (L+M) and B outputs are equal, is located around 485 to 495 nm ("cyan"). Like the "yellow" point that marks equal outputs from L and M cone sensitivity curves, this cyan "white" marks the point of equal outputs from the Y and B color receptors. Again like "yellow", "cyan" light has a relatively low hue purity and tinting strength compared to the "blue" or "red" spectrum extremes. Unlike "yellow", however, "cyan" is visually dimmer than the "yellow green" response peak of the V cone because the S cones do not significantly contribute to luminance perception.

the human y/b opponent (contrast) function with peak sensitivities at 445 and 560 nm, a white point at 495 nm and macular masking of "blue" response

This contrast between short and long wavelength light persists in human color vision as the warm/cool color contrast. This is the most general chromatic contrast in color perception, and it appears to have a strong influence on the development and use of color terms in almost all languages.

What determines the placement of the two peak sensitivities? J. Lythgoe and J. Partridge demonstrated that a two cone visual system adapted to the green leaves, twigs and brown soil litter of forest habitats gets the greatest chromatic contrast when the peak sensitivities are located between 420 nm to 450 nm (B) and 510 nm to 580 nm (Y). These ranges include the peak sensitivities of the primate y/b opponent function shown above, and primates evolved in forest habitats.

Metamerism & Colorblindness. There are two important limitations to a visual system based on two partly overlapping sensitivity curves. The first is that very different distributions of light wavelengths will be perceived as the same color, and this occurs among both chromatic ("colored") and achromatic ("white") color sensations. This problem is called metamerism. Dissimilar spectral distributions that produce the same color sensation are called metamers. A two cone system is especially susceptible to metameric confusions.

Metamers occur whenever the two cones are stimulated in the same relative proportions. The most glaring examples include colors perceived as white, when the cone stimulations are 50:50. As the spectral reflectance curves below illustrate, this can occur in reflectance patterns that appear as dissimilar as gray, green or magenta in trichromatic vision.

metamers for white (or gray) in a two cone visual system

spectral reflectance curves for gray (top), magenta (middle) or green (bottom) would appear indistinguishable in a two cone visual system

Parallel problems occur whenever surface color differences produce similar proportional responses in the cones — for example, between greens and reds, or purples and blues — or when the illumination changes color without changing proportional cone responses. These greatly expand the possible metameric confusions.
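Metamerism is easy to demonstrate numerically. In this sketch the two cone sensitivity curves and the two reflectance spectra are invented 5-band vectors, chosen only so that two physically different spectra excite both cones identically:

```python
# Hypothetical two-cone system sampled in 5 spectral bands
# (short -> long wavelengths). All numbers are illustrative.

def cone_response(spectrum, sensitivity):
    """Univariant cone output: the sensitivity-weighted sum of light."""
    return sum(p * s for p, s in zip(spectrum, sensitivity))

Y = [0.0, 0.2, 0.6, 1.0, 0.5]   # long-wavelength cone
B = [0.5, 1.0, 0.6, 0.2, 0.0]   # short-wavelength cone

flat  = [0.5, 0.5, 0.5, 0.5, 0.5]      # "gray": flat reflectance
spiky = [0.92, 0.25, 0.5, 0.7, 0.2]    # very different band pattern

# Both spectra stimulate the two cones in identical amounts,
# so this visual system cannot tell them apart: they are metamers.
for cone in (Y, B):
    print(round(cone_response(flat, cone), 6),
          round(cone_response(spiky, cone), 6))
```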

These problems characterize human dichromatic vision or colorblindness in which typically either the L or M cones are absent. These folks see an unusually large number of material metamers in the everyday world, and large color differences often appear to them quite subtle. Dichromats are easily confused by yellows and browns, or by blue greens and purples, especially across surfaces of similar lightness. Yellow loses its characteristic lightness, and they commonly see an achromatic or "white" color in the spectrum located at the "cyan" balance point between 490 nm to 500 nm.

The second limitation in a two cone system is that perception of saturation or hue intensity cannot be easily disentangled from lightness. There are only two possible combinations of two cone outputs: adding them together to define brightness, or contrasting them to define hue. There is no third combination to uniquely define saturation.

Despite that, some studies show that human dichromats do see saturation differences, especially at the spectrum ends, but with only half the acuity of trichromats. To do this, the visual system probably uses lightness contrast to estimate chroma, by comparing the lightness of a surface to the lightness of the brightest surface in view. A process called lightness induction performs this contrast judgment in human trichromatic vision. This is how we can see the difference, in an achromatic surface, between a dark (gray) color and a dimly lit (white) color. In red to yellow green surface colors, where S cone response can be effectively zero, the same contrast causes surfaces to appear dark (brown) rather than dimly lit (orange). (This is explained further in the section on unsaturated color zones.)

separating luminance from hue responses in two cones

defined as the sum and difference
of the L and M outputs

bimodal human visual response

based on the Smith & Pokorny
normalized cone fundamentals

A two cone system seems optimally defined to provide a new function — chromatic adaptation to the shifts in daylight phases of natural light, from the slightly blue, cool light of noon to the ruddy, warm light of sunset. These changes in lighting significantly shift the apparent hue of surface colors: around sunset a white surface will appear yellow or orange. In human trichromats and dichromats alike, the separate cone sensitivities can be adjusted to increase the B response to compensate for the reduced "blue" light, and decrease the Y (trichromatic L+M) response to compensate for the increased "red" light (right), which should restore the white point to its accustomed location.
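The gain adjustment described above resembles a von Kries adaptation, which can be sketched for a two-channel system as follows (all values illustrative):

```python
def adapt_von_kries(cone_resp, illum_white, ref_white=(1.0, 1.0)):
    """Von Kries-style chromatic adaptation for a two-channel (Y, B)
    system: each channel's gain is rescaled so that the current
    illuminant's white once again produces equal channel outputs.
    Values are illustrative excitations, not calibrated quantities.
    """
    gains = [r / w for r, w in zip(ref_white, illum_white)]
    return tuple(g * c for g, c in zip(gains, cone_resp))

# Sunset light excites Y strongly and B weakly, so unadapted white
# paper would look yellow-orange. Adapting to the sunset white point
# restores it to (approximately) equal channel outputs:
sunset_white = (1.4, 0.6)
print(adapt_von_kries(sunset_white, sunset_white))  # ~ (1.0, 1.0)
```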

However, color perception in dichromats is significantly affected by luminance contrast — dichromats perceive colors of light to grow redder as they get dimmer. This goes in the opposite direction to a compensatory increased sensitivity of the Y receptors, and probably complicates the perception of warm surface colors across changes in the intensity or chromaticity of the illumination.

Trichromatic Vision . Finally, all primates — monkeys, apes and humans — acquired a second set of contrasting receptor cells: the L and M cones, which evolved from a genetic alteration in the mammalian Y cone.

There is only a small difference between the L and M cones in molecular structure and overlapping spectral absorptance curves, but it is enough to create what Mollon calls the new color system. This defines hue contrasts between middle wavelength ("green") light and long wavelength ("red") light. These cells are also linked in a contrast or opponent relationship that defines the r/g opponent function.

a second two cone visual system

the human r/g opponent (contrast) function with peak sensitivities at 530 and 610 nm, a white point at 575 nm and macular masking of "blue" response

The main benefit of trichromacy is that it creates a unique combination of cone responses for each spectral wavelength and unambiguous hue perception. This enhances object recognition when surfaces are similar in lightness or are randomly shadowed, as under foliage. It also substantially improves the ability to separate the color of light from the color of surfaces; because illuminant metamerism is also reduced, color constancy is greatly improved.

spectral contrast between
direct sunlight and indirect
(blue sky) light

(from Wyszecki & Stiles, 1982)

A second important trichromatic benefit is that it reduces metameric colors to various flavors of gray around the white point and into dull blues and purples (diagram at right). As a result the number of physical metameric matches is radically reduced, as anyone who has tried to match household paint colors has found out. In fact, excluding trivial variations, there are no possible metameric emittance profiles under an equal energy "white" light source for any color at moderate to high saturation. In effect, saturation is a kind of perceptual confidence that hue accurately symbolizes the spectral composition of a color.

Near grays, besides generating a very large number of metameric surface colors, are also most susceptible to color change by subtractive mixture with the light source color — change the emittance profile of the light, and the surface color changes as well. Metamers that appear identical under noon daylight often scatter into visually different colors under late afternoon light (or interior incandescent light) as the white point shifts from blue to yellow, and this chromaticity scatter is typically elongated in the red to green direction discriminated by the r/g contrast (diagram at right). This is a particular problem for automotive manufacturers, who must choose different plastic, fabric and paint materials to get a color match and identical color changes across different phases of natural light.

chromaticity of metameric colors in light mixtures

and the scatter of illuminant
shifted achromatic metamers (from Wyszecki & Stiles, 1982)

What explains the location of the r/g opponent balance or "white" point at around 575 nm ("yellow")? This is the approximate spectral direction of both the chromaticity shifts in natural daylight and the approximate hue of the prereceptoral filters. As a result, changes in the angle of sunlight from morning to late afternoon, and the gradual darkening of the lens with age, produce no perceptible change in the r/g contrast and hence no color change that cannot be handled by adaptation of the y/b balance. By the time sunlight acquires a golden or deep yellow appearance it has begun to shift off the y/b axis toward red: the r/g balance then registers a change and surface colors show the tint of the light.

Another intriguing explanation for this "yellow" balance point appears in the reflectance curves of 1270 color samples from the Munsell Book of Color. The curves at right show 10 colors of identical saturation and value, equally spaced around the hue circle. All the curves seem to inflect in a small region centered on 575 nm, which means comparative information about the reflectance curves is minimal at that point.

This r/g balance point is insensitive to chromaticity for the same reason that a lever is not imbalanced by weight placed over the fulcrum. Placing the "yellow" balance point at an area of minimal reflectance information makes color vision maximally sensitive to relative "green" and "red" changes in surface colors, and permits hue resolution into the "red" end of the spectrum, where the S cones provide no response, as the relative proportion of L and M response.

Why Not 4 or More Cones? The final query is: why don't we have four or more cones? Why stop with only three?

reflectance curves for standard Munsell hue samples at constant lightness and saturation

We can exclude the possibility that the obstacle is the evolution of new photopigments. Molecular genetics has identified 10 variations in the human L and M photopigments, which create two clusters of similar peak responses around 530 nm and 555 nm (right). Males are also split roughly 50/50 by a single amino acid polymorphism (serine for alanine) that shifts the peak sensitivity in 5 of these variants, including the normal male L photopigment, by about 4 nm. Finally, it is genetically possible for about 50% of females to express a fourth "red" photopigment; some individuals carry genes for only one type of L and M photopigment, while others carry multiple (different) versions. These many combinations can significantly affect trichromatic responses or cause colorblindness. However, it is still assumed that cones contain only one kind of photopigment, or that cones with chemically similar photopigments output to common nerve pathways. Thus, cones and nerve pathways are the fundamental units of trichromatic vision, not the photopigments.

There are twelve unique ways to sum or contrast three cone outputs to define hue; our vision uses six contrasts, plus a single luminance sum. This requires a unique nerve pathway for seven different signals; similar outputs in a four cone system would require at least 15 contrast and luminance pathways.

There are roughly one million nerve tracts from the eye to the brain, and each tract carries information from roughly six cones and 100 rods. This suggests nerve pathways are a resource that must be conserved. A four channel chromatic system would, at minimum, double this load, grossly decreasing the granularity in the retinal information or requiring an increase in neural processing in the retina and bandwidth in the optic nerve.
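The arithmetic implied by these figures is simple but worth making explicit:

```python
# Back-of-envelope receptor counts implied by the optic nerve figures
# quoted above (roughly one million tracts, each pooling ~6 cones
# and ~100 rods). These are order-of-magnitude estimates only.

tracts = 1_000_000
cones_per_tract = 6
rods_per_tract = 100

print(tracts * cones_per_tract)   # ~6 million cones feed the optic nerve
print(tracts * rods_per_tract)    # ~100 million rods share the same tracts
```

The heavy pooling, especially of rods, is what makes each nerve pathway a scarce resource: any extra chromatic channel must either coarsen this pooling or demand more retinal processing and optic nerve bandwidth.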

Evolution could arrive at a more complex visual system, but it would require modifying a visual cortex specialized to receive and interpret the three cone outputs; adding a fourth cone would mean reengineering the brain as well. These costs far outweigh any adaptive advantage that four cones could produce.

Why 3 Badly Spaced Cones? Evolutionary considerations lead to a more basic question: is color really what our visual system is adapted to perceive? From a design perspective, the most interesting question is not why we have three rather than four cones, but why the three cone fundamentals are so unevenly spaced along the spectrum and unequally represented (63% L, 31% M and 6% S) in the retina. Our acuity to differences in color (hue and saturation) would substantially improve, and our visible spectrum would significantly expand, if the cone sensitivity curves were more evenly spaced and the retinal cone proportions were better balanced.

multiple L and M photopigments identified in the human retina

curves from Backhaus, Kliegl & Werner (1998); peak wavelengths from Merbs & Nathans (1992); lines connect serine/alanine polymorphisms

The answer appears in an important optical problem that arises when a large eye is made sensitive to a wide span of the spectrum: chromatic aberration. When light passes through a lens, "blue" wavelengths are refracted (bent) more strongly than "red" wavelengths, causing the "blue" image to focus at a point in front of the "red" image (diagram at right). This causes overlapping, fuzzy colored fringes in a focused image, especially around the edges of intricate light and dark patterns, such as branches and leaves seen against the sky (right). Chromatic aberration seriously degrades visual acuity, as does a related optical problem, spherical aberration, caused by the somewhat round exterior of the cornea.

Manufactured optical instruments solve this problem with a sandwich of lenses, the simplest consisting of a convex and concave doublet, one cancelling the chromatic aberration of the other. Animal lenses are always convex (bulging), and an achromatic doublet requires a rather long focal length (proportionally much longer than the diameter of an eye), so the doublet solution is not feasible in a large eye. However, the "red" wavelengths require less optical bending to come into focus, which means "yellow" light requires less precise optics, especially in bright daylight, when the aperture of the eye is small relative to its focal length and the eye essentially becomes a "pinhole" camera.
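The wavelength-dependent focus can be sketched with a thin-lens model and a Cauchy dispersion formula; the coefficients below are invented, water-like values chosen only to show the trend, not measured optical constants of the eye:

```python
def refractive_index(wavelength_nm, A=1.320, B=4000.0):
    """Cauchy approximation n(lambda) = A + B/lambda^2 for a
    water-like medium. A and B are illustrative coefficients."""
    return A + B / wavelength_nm ** 2

def focal_length_mm(wavelength_nm, base_focal_mm=17.0, ref_nm=560.0):
    """Thin-lens sketch: focal length scales as 1/(n - 1), so the
    more strongly refracted "blue" light focuses in front of "red".
    base_focal_mm is an assumed reference focal length at ref_nm."""
    n_ref = refractive_index(ref_nm)
    n = refractive_index(wavelength_nm)
    return base_focal_mm * (n_ref - 1.0) / (n - 1.0)

for wl in (450, 560, 650):   # "blue", "yellow green", "red"
    print(wl, round(focal_length_mm(wl), 2))
```

Even with these made-up coefficients the ordering is the important point: the "blue" focal plane lies hundreds of micrometers in front of the "red" one, so no single retinal position can hold all wavelengths in focus at once.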

Evolution tackled chromatic aberration not with complex lenses but with several new adaptations, some of them unique, that substantially reduce the effects of "blue" and "violet" wavelengths in the fovea and the eye as a whole:

• A cornea that is more spherical at its center than around its circumference, reducing spherical aberration at the edges.

• A cornea, lens and eye diameter (focal length) that produce the most precise image in the "yellow" wavelengths, where optical demands are less extreme.

• The prereceptoral filtering in the lens and macular pigment, and in the yellow tint of bleached photopigment, which combine to filter out more than half the "blue" and "violet" light below 470 nm.

• A strong directional sensitivity to light incident on the fovea (the Stiles Crawford effect), created by the uniform alignment of photopigment in the outer segment discs; this causes the photopigment to react less strongly to light coming from the side, which is predominantly scattered "blue" wavelengths.

• An overwhelming population (roughly 94%) of L and M receptors, and a close spacing between their sensitivity peaks, which limits the requirement for precise focusing to the "yellow green" wavelengths.

• A sparse representation by S receptors in the eye (6% of total), which substantially reduces their contribution to spatial contrast, and the nearly complete elimination of S cones from the fovea, where sharp focus is critical.

• Separation of luminosity (contrast) information and color information into separate neural channels, to minimize the impact of color on contrast quality.

• Neural filtering of signals by the L and M cones within the fovea that suppresses color information in detailed, contrasty textures.

• Neural filtering higher in the visual system to eliminate chromatic aberration from conscious visual experience, and which can (after a period of adjustment) even eliminate blurring that is artificially induced by distorting prisms or eyeglasses.

These many adaptations make the fovea extremely effective at edge discrimination, even in strong contrasts of light and dark; they also enhance image clarity when the eyes are stereoscopically combined, greatly improving depth perception.

Minimizing chromatic aberration has profound benefits for modern humans, as it makes possible the crisp pattern recognition we require to read text, or the acute depth perception necessary to aim weapons or catch prey. But what about early primates? They were small bodied tree dwellers who had to "read" the outlines of tree limbs intertwined in space and judge how far to leap to "catch" the branch of escape or the bough of dangling food. You can see this capability in the amazing fearlessness with which all primates scramble and leap across large distances between tree limbs high above the ground, where a single misjudgment can cause crippling injury or death. Those are stakes that evolution can latch onto.

There is a minor downside to a strong selective pressure toward visual acuity and lack of selective pressure toward color discrimination: colorblindness. Because the genes for both the L and M photopigments are located next to each other on the X chromosome, the lack of duplicate opsin genes in XY males causes frequent variations in the L and M photopigments that make them chemically similar or identical — and can make the 25 nm separation between them disappear. The result is various forms of dichromacy that affect about 5% of the population, nearly all of them males.

This "red green colorblindness" is caused either by missing L cones (protanopia, in 2% of males) or M cones (deuteranopia, in 6% of males). (Lack of S cones or tritanopia occurs in less than 0.01% of the population.) These conditions can be diagnosed using very simple perceptual tests, such as the Ishihara color disks. Remarkably, many men do not discover (are not told) that they are colorblind until their teenage years, which strongly suggests yet again that hue discrimination is not essential for most life tasks. (For more on color vision deficiencies, see this page.)

chromatic aberration in a simple lens

"red" and "green" light are focused
far behind "blue" light

chromatic aberration and life among the trees

A currently popular evolutionary explanation for L–M discrimination is that it assists the detection of red fruit among green foliage, the "cherries in the leaves" hypothesis (photos, right). But it can also be interpreted as a chromatic contrast designed to minimize the effects of chromatic aberration around a yellow balance point, so that red and green darken equally around the yellow focus.

Edge detection and depth perception based on patterns of light and dark have taken evolutionary priority over problems of hue discrimination, and it is on these visual stimuli that culture, social consensus and communication really depend. A telling illustration comes in a map of the busiest areas of the human brain: the integrating connections between the visual and language areas.

a map of the busiest areas of the human brain

after Hagmann P, Cammoun L, Gigandet X, Meuli R, Honey CJ, et al. (2008)

We find this edge and pattern imperative carried into our art and documents — etchings, woodcuts, monochrome wash and charcoal or pen drawings, printed text, fabric patterns and vegetable weaves — all art that appeals entirely to the eye's monochrome perception of pattern and line. Nor is this a matter of simple drawings from simple tools. Even with all our printing and reproduction technologies, text and engineering drawings still exclude varied colors to increase the legibility and interpretability of the document.

As we say: "to see" means "to understand", and to understand means to see clearly, not colorfully.

The fundamental reference for all things luminous, chromatic and colorimetric is Color Science: Concepts and Methods, Quantitative Data and Formulæ (2nd edition) by Günter Wyszecki and W.S. Stiles (John Wiley: 1982), nearly encyclopedic but showing its age. The best overview of color vision that I have seen — compact, informative and up to date, though emphasizing basic perceptual processes and colorimetry — is The Science of Color (2nd edition) edited by Steven Shevell (Optical Society of America, 2003). I'm especially partial to the overview of experimental methods and evidence in Human Color Vision (2nd edition) by Peter Kaiser and Robert Boynton (Optical Society of America, 1994). Peter Kaiser also has authored a lucid web site on The Joys of Vision. Color Vision: Perspectives from Different Disciplines (de Gruyter, 1998), edited by Werner Backhaus, Reinhold Kliegl & John Werner, contains a variety of interesting chapters, including a study of Monet's aging eyes.

The premier review of color and color vision as it relates to printing, photography and analog video is The Reproduction of Colour (6th ed.) by R.W.G. Hunt (John Wiley: 2004). Introduction to Color Imaging Science by Hsien-Che Lee (Cambridge University Press: 2005) is actually an in depth discussion of color topics relevant to color imaging technologies, including digital imaging. A text with similar topical coverage as Hunt but less formal theory is Billmeyer and Saltzman's Principles of Color Technology (3rd ed.) by Roy S. Berns (Wiley Interscience: 2000).

Seeing the Light: Optics in Nature, Photography, Color, Vision and Holography by David Falk, Dieter Brill & David Stork (John Wiley: 1986) is an eclectic but very pragmatic and well illustrated traversal of almost every known color phenomenon relevant to modern imaging technologies. Color for Science, Art and Technology edited by Kurt Nassau (North Holland: 1997) is a miscellany of rather unusual chapters on color, such as "The Fifteen Causes of Color", "Color in Abstract Painting", "Organic and Inorganic Pigments", and "The Biological and Therapeutic Effects of Light".


Fully implanted

“It has potential as a neuroprosthetic that can be fully implanted,” Zaghloul told New Scientist. The chip could be embedded directly into the eye and connected to the nerves that carry signals to the brain’s visual cortex.

To make the chip, the team first created a model of how light-sensitive neurons and other nerve cells in the retina connect to process light. They made a silicon version using manufacturing techniques already employed in the computer chip industry.

Their chip measures 3.5 x 3.3 millimetres and contains 5760 silicon phototransistors, which take the place of light-sensitive neurons in a living retina. These are connected up to 3600 transistors, which mimic the nerve cells that process light information and pass it on to the brain for higher processing. There are 13 different types of transistor, each with slightly different performance, mimicking different types of actual nerve cells.

“It does a good job with some of the functions a real retina performs,” says Zaghloul. For example, the retina chip is able to automatically adjust to variations in light intensity and contrast. More impressively, says Patrick Degenaar, a neurobionics expert at Imperial College London, UK, it also deals with movement in the same way as a living retina.
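The chip's automatic adjustment to light level can be illustrated with a toy model. This is not the actual silicon circuit, just an illustrative sketch of retina-style local gain control: dividing each photoreceptor's output by the average intensity of its neighbourhood makes the response depend on local contrast rather than absolute brightness.

```python
import numpy as np

def local_adaptation(image, window=3, eps=1e-6):
    """Divide each pixel by the mean of its window x window neighbourhood."""
    padded = np.pad(image, window // 2, mode="edge")
    out = np.empty_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + window, j:j + window]
            out[i, j] = image[i, j] / (patch.mean() + eps)
    return out

# A dim scene and the same scene 10x brighter produce nearly the
# same adapted response: only the local contrast matters.
scene = np.array([[1.0, 2.0, 1.0],
                  [2.0, 8.0, 2.0],
                  [1.0, 2.0, 1.0]])
dim = local_adaptation(scene)
bright = local_adaptation(scene * 10)
print(np.allclose(dim, bright, atol=1e-3))  # True
```

Because a tenfold change in illumination scales numerator and denominator alike, the output barely changes, which is the essence of intensity adaptation.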

What Is The Resolution Of The Human Eye In Megapixels?

What is the resolution of the human eye in megapixels? originally appeared on Quora: the knowledge sharing network where compelling questions are answered by people with unique insights.

Answer by Dave Haynie, Engineer, Musician, Photo/Videographer, on Quora:

What is the resolution of the human eye in megapixels? Well, it wouldn't directly map onto a real-world camera, but read on.

On most digital cameras, you have orthogonal pixels: they're distributed uniformly across the sensor (in fact, on a nearly perfect grid), and there's a color filter (usually the "Bayer" filter, named after Bryce Bayer, the scientist who came up with the usual color array) that delivers red, green, and blue pixels.
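A minimal sketch of what a Bayer filter does, assuming the common RGGB tiling: each photosite records only one of red, green, or blue, and the camera later interpolates ("demosaics") the two missing colors at every pixel.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an H x W x 3 image through an RGGB Bayer pattern."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red at even rows, even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue at odd rows, odd cols
    return mosaic

rgb = np.random.rand(4, 4, 3)
mosaic = bayer_mosaic(rgb)  # one sample per photosite, half of them green
```

Note that half of all photosites are green, a deliberate bias toward luminance detail that loosely parallels the eye's own emphasis on brightness over color.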

So, for the eye, imagine a sensor with a huge number of pixels, about 120 million. There's a higher density of pixels in the center of the sensor, and only about 6 million of those sensors are filtered to enable color sensitivity. And of course, only about 100,000 sense blue! Oh, and by the way, this sensor isn't flat but semi-spherical, so a very simple lens can be used without the distortions that real camera lenses must correct when projecting onto a flat surface, a less natural shape given the spherical nature of a simple lens (in fact, better lenses usually contain a few aspherical elements).

This is about 22mm diagonal on the average, just a bit larger than a micro four-thirds sensor, but the spherical nature means the surface area is around 1100mm^2, a bit larger than a full-frame 35mm camera sensor. The highest pixel resolution on a 35mm sensor is on the Canon 5Ds, which stuffs 50.6Mpixels into about 860mm^2.

So that's the hardware. But that's not the limiting factor on effective resolution. The eye seems to see "continuously," but its sampling is actually cyclical: there's a kind of frame rate, and it's really fast, but that's not the important part. The eye is in constant motion from ocular microtremors that occur at around 70-110Hz. Your brain constantly integrates the output of your eye, as it moves around, into the image you actually perceive, and the result is that, unless something's moving too fast, you get an effective resolution boost from 120MP to something more like 480MP as the image is constructed from multiple samples.

Which makes perfect sense—our brains can do this kind of problem as a parallel processor with performance comparable to the fastest supercomputers we have today. When we perceive an image, there's this low-level image processing, plus specialized processes that work on higher level abstractions. For example, we humans are really good at recognizing horizontal and vertical lines, while our friendly frog neighbors have specialized processing in their relatively simple brains looking for a small object flying across the visual field: that fly he just ate. We also do constant pattern matching of what we see back to our memories of things. So we don't just see an object, we instantly recognize an object and call up a whole library of information on that thing we just saw.

Another interesting aspect of our in-brain image processing is that we don't demand any particular resolution. As our eyes age and we can't see as well, our effective resolution drops, and yet, we adapt. In a relatively short term, we adapt to what the eye can actually see, and you can experience this at home. If you're old enough to have spent lots of time in front of Standard Definition television, you have already experienced this. Your brain adapted to the fairly terrible quality of NTSC television (or the slightly less terrible but still bad quality of PAL television), and then perhaps jumped to VHS, which was even worse than what you could get via broadcast. When digital started, between VideoCD and early DVRs like the TiVo, the quality was really terrible, but if you watched lots of it, you stopped noticing the quality over time if you didn't dwell on it. An HDTV viewer of today, going back to those old media, will be really disappointed, and mostly because their brain moved on to the better video experience and dropped those bad-TV adaptations over time.

Back to the multi-sampled image for a second: cameras do this too. In low light, many cameras today can average several different photos on the fly, which boosts the signal and cuts down on noise; your brain does this as well in the dark. We're even doing the "microtremor" thing in cameras. The recent Olympus OM-D E-M5 Mark II has a "hi-res" mode that takes eight shots with 1/2-pixel adjustments, to deliver what's essentially two 16MP images in full RGB (because full-pixel steps ensure every pixel is sampled at R, G, B, G), one offset by 1/2 pixel from the other. Interpolating these interstitial images as a normal pixel grid delivers 64MP, but the effective resolution is more like 40MP, still a big jump up from 16MP. Hasselblad showed a similar thing in 2013 that delivered a 200MP capture, and Pentax is also releasing a camera with something like this built-in.
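The noise-reduction half of this is easy to demonstrate. As a rough sketch with synthetic data: averaging N independent noisy exposures of the same scene cuts the noise by about the square root of N, which is exactly what low-light multi-shot modes (and, loosely, dark-adapted vision) exploit.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((100, 100), 0.5)  # the "true" image: uniform gray
noise_sigma = 0.1

def noisy_frame():
    """One exposure: the scene plus independent Gaussian sensor noise."""
    return scene + rng.normal(0, noise_sigma, scene.shape)

one_shot = noisy_frame()
stack = np.mean([noisy_frame() for _ in range(16)], axis=0)

print(one_shot.std())  # ~0.10: single-frame noise
print(stack.std())     # ~0.025: 16 frames cut noise by ~sqrt(16) = 4x
```

The same statistics apply whether the "frames" come from a camera buffer or from the eye's microtremor-shifted samples: more looks at the same scene mean a cleaner estimate of it.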

We're doing simple versions of the higher-level brain functions, too, in our cameras. All kinds of current-model cameras can do face recognition and tracking, follow-focus, etc. They're nowhere near as good at it as our eye/brain combination, but they do ok for such weak hardware.

They're only a few hundred million years late.



When we see something, what we are seeing is actually reflected light. Light rays bounce off of objects and into our eyes.

Eyes are amazing and complex organs. In order for us to see, light enters our eyes through the black spot in the middle which is really a hole in the eye called the pupil. The pupil can change sizes with the help of the colored part around it, a muscle called the iris. By opening and closing the pupil, the iris can control the amount of light that enters the eye. If the light is too bright, the pupil will shrink to let in less light and protect the eye. If it's dark, the iris will open the pupil up so more light can get into the eye.

Once the light is in our eye it passes through fluids and lands on the retina at the back of the eye. The retina turns the light rays into signals that our brain can understand. The retina uses light-sensitive cells called rods and cones to see. The rods are extra sensitive to light and help us to see when it's dark. The cones help us to see color. There are three types of cones, each helping us to see a different color of light: red, green, and blue.

In order for the light to be focused on the retina, our eyes have a lens. The brain sends feedback signals to the muscles around the lens to tell it how to focus the light. Just like the way a camera or microscope works, when we adjust the lens we can bring the image into focus. When the lens and muscles can't quite focus the light just right, we end up needing glasses or contacts to help our eyes out.
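The focusing described above can be sketched with the thin-lens equation, 1/f = 1/d_o + 1/d_i. In the eye the image distance d_i (lens to retina) is fixed, so the lens must change its focal length f as the object distance d_o changes; a camera instead keeps f fixed and moves the lens. The numbers below are illustrative approximations, not precise physiology:

```python
def focal_length_needed(d_object_mm, d_image_mm):
    """Thin-lens equation: focal length that focuses an object at
    d_object onto a screen (retina/sensor) at d_image."""
    return 1.0 / (1.0 / d_object_mm + 1.0 / d_image_mm)

retina = 17.0  # rough lens-to-retina distance in mm (illustrative)
print(focal_length_needed(1e9, retina))    # distant object: f ~ 17 mm
print(focal_length_needed(250.0, retina))  # near object: f ~ 15.9 mm
```

That roughly 1 mm shift in effective focal length is what the ciliary muscles accomplish by squeezing the lens into a rounder shape, and what aging lenses gradually lose, which is why reading glasses become necessary.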

The rods and cones of the retina change light into electrical signals for our brain. The optic nerve takes these signals to the brain. The brain also helps to control the eye to help it focus and to control where you are looking. Both eyes move together with speed and precision to allow us to see with the help of the brain.

With two eyeballs our brain gets two slightly different pictures from different angles. Although we only "see" one image, the brain uses these two images to give us information on how far away something is. This is called depth perception.
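The geometry behind depth perception can be sketched with the stereo-disparity formula used in computer vision: for two viewpoints a baseline B apart with focal length f, a point that shifts by disparity d between the two images lies at distance Z = f * B / d. The values below are illustrative stand-ins, not measured physiology:

```python
def depth_from_disparity(f_mm, baseline_mm, disparity_mm):
    """Stereo depth: Z = f * B / d. Larger shift between the two
    images (disparity) means the object is closer."""
    return f_mm * baseline_mm / disparity_mm

f = 17.0         # rough "focal length" of the eye, mm (illustrative)
baseline = 63.0  # typical distance between human pupils, mm
print(depth_from_disparity(f, baseline, 2.0))  # 535.5 mm: big shift, near
print(depth_from_disparity(f, baseline, 0.5))  # 2142.0 mm: small shift, far
```

The inverse relationship also explains why stereo depth cues fade with distance: beyond a few meters the disparity between the two eyes' images becomes too small to measure reliably.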

