The Eye, and Camera Lenses


How do near sightedness, farsightedness, and normal sightedness work? If the eye is accustomed to one small focal point, how can it manage a wall of light? And also, how does it process the small pinpoint that it normally does?

I think I understand what your issue is here.

Often, diagrams of lens systems and eyes show only rays parallel to the optical axis. Although many optical systems are designed to be optimal for rays near this axis, off-axis rays behave in a fairly predictable way too: an off-axis point source focuses at a corresponding off-axis point on the other side of the lens.

So, there is "one focal point", as you say, for each position of the object relative to the optical axis. In optics we generally talk about a focal plane rather than a focal point; a single point applies only to idealized cases such as point sources and lasers.

Now, the aperture (pupil) is a completely different matter: it limits how well an optical system can focus. When the aperture is small, the system can focus really well, but not much light can get in. When it is large, the ability to focus is reduced, but more light is admitted, which allows one to see better in the dark (increasing the signal-to-noise ratio). Focusing requires all the rays from a source to end up at the same point on the retina or film; a small aperture "selects" only those rays that will end up focusing at exactly the right place (assuming the lens focuses to the right place, which it does not in the case of the near- or far-sighted). This selection cuts out a lot of the light, so it is often a good trade to give up some ability to focus in exchange for brightness. Without going into detail, a large aperture makes the focusing ability of the optical system strongly dependent on how far away the object is; this is the origin of depth of field.
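The trade-off can be sketched with the thin-lens equation, 1/f = 1/o + 1/i. The following is a toy model, not part of the original answer; the 17 mm focal length and the pupil diameters are rough illustrative values:

```python
# Toy model: blur-spot size on the retina for an out-of-focus object,
# using the thin-lens equation 1/f = 1/o + 1/i (distances in mm).

def image_distance(f, o):
    """Image distance i solving 1/f = 1/o + 1/i."""
    return 1.0 / (1.0 / f - 1.0 / o)

def blur_diameter(f, aperture, o_focused, o_object):
    """Approximate blur-spot diameter for an object at o_object when
    the system is focused at o_focused (similar-triangle estimate)."""
    i_sensor = image_distance(f, o_focused)  # the retina sits here
    i_object = image_distance(f, o_object)   # where the object focuses
    return aperture * abs(i_sensor - i_object) / i_object

f = 17.0                  # rough focal length of the relaxed eye, mm
for pupil in (2.0, 8.0):  # constricted vs. dilated pupil, mm
    b = blur_diameter(f, pupil, o_focused=1000.0, o_object=400.0)
    print(f"pupil {pupil:.0f} mm -> blur spot {b:.3f} mm")
```

The blur grows in direct proportion to the aperture, which is why a small pupil (or a pinhole camera) keeps both near and far objects acceptably sharp.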

Your question is rather open ended, but I hope this is what you were getting at.

Eye anatomy: A closer look at the parts of the eye

When surveyed about the five senses — sight, hearing, taste, smell and touch — people consistently report that their eyesight is the mode of perception they value (and fear losing) most.

Despite this, many people don't have a good understanding of the anatomy of the eye, how vision works, and health problems that can affect the eye.

Read on for a basic description and explanation of the structure (anatomy) of your eyes and how they work (function) to help you see clearly and interact with your world.

Structure and Function of the Eyes

The structures and functions of the eyes are complex. Each eye constantly adjusts the amount of light it lets in, focuses on objects near and far, and produces continuous images that are instantly transmitted to the brain.

The orbit is the bony cavity that contains the eyeball, muscles, nerves, and blood vessels, as well as the structures that produce and drain tears. Each orbit is a pear-shaped structure that is formed by several bones.

An Inside Look at the Eye

The outer covering of the eyeball consists of a relatively tough, white layer called the sclera (or white of the eye).

Near the front of the eye, in the area protected by the eyelids, the sclera is covered by a thin, transparent membrane (conjunctiva), which runs to the edge of the cornea. The conjunctiva also covers the moist back surface of the eyelids and eyeballs.

Light enters the eye through the cornea, the clear, curved layer in front of the iris and pupil. The cornea serves as a protective covering for the front of the eye and also helps focus light on the retina at the back of the eye.

After passing through the cornea, light travels through the pupil (the black dot in the middle of the eye).

The iris—the circular, colored area of the eye that surrounds the pupil—controls the amount of light that enters the eye. The iris allows more light into the eye (enlarging or dilating the pupil) when the environment is dark and allows less light into the eye (shrinking or constricting the pupil) when the environment is bright. Thus, the pupil dilates and constricts like the aperture of a camera lens as the amount of light in the immediate surroundings changes. The size of the pupil is controlled by the action of the pupillary sphincter muscle and dilator muscle.
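Because the light admitted scales with the pupil's area, the dilation range translates into a large sensitivity range. A quick sketch; the 2 mm and 8 mm diameters are common textbook extremes, not figures from this article:

```python
import math

# Light admitted is proportional to pupil area. The ~2 mm (bright light)
# and ~8 mm (darkness) diameters are typical textbook values.
def pupil_area(d_mm):
    """Area of a circular pupil of diameter d_mm, in mm^2."""
    return math.pi * (d_mm / 2.0) ** 2

ratio = pupil_area(8.0) / pupil_area(2.0)
print(f"A fully dilated pupil admits about {ratio:.0f}x more light")
```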

Behind the iris sits the lens. By changing its shape, the lens focuses light onto the retina. Through the action of small muscles (called the ciliary muscles), the lens becomes thicker to focus on nearby objects and thinner to focus on distant objects.
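A short sketch of what accommodation means optically, using the thin-lens equation; the 17 mm lens-to-retina distance is a rough textbook value, not a figure from this article:

```python
# Accommodation: the retina is fixed, so to satisfy 1/f = 1/o + 1/i with
# a constant image distance i, the lens must change its focal length f.

EYE_LENGTH = 17.0  # mm, approximate lens-to-retina distance

def required_focal_length(object_distance_mm):
    """Focal length that images an object at the given distance onto
    the retina."""
    return 1.0 / (1.0 / object_distance_mm + 1.0 / EYE_LENGTH)

far = required_focal_length(6000.0)  # an object 6 m away
near = required_focal_length(250.0)  # typical near point, 25 cm
print(f"far: f = {far:.2f} mm; near: f = {near:.2f} mm")
```

The shorter focal length needed for near objects corresponds to a more strongly curved, i.e. thicker, lens, which is exactly what the ciliary muscles produce.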

The retina contains the cells that sense light (photoreceptors) and the blood vessels that nourish them. The most sensitive part of the retina is a small area called the macula, which has millions of tightly packed photoreceptors (the type called cones). The high density of cones in the macula makes the visual image detailed, just as a high-resolution digital camera has more megapixels.

Each photoreceptor is linked to a nerve fiber. The nerve fibers from the photoreceptors are bundled together to form the optic nerve. The optic disk, the first part of the optic nerve, is at the back of the eye.

The photoreceptors in the retina convert the image into electrical signals, which are carried to the brain by the optic nerve. There are two main types of photoreceptors: cones and rods.

Cones are responsible for sharp, detailed central vision and color vision and are clustered mainly in the macula.

Rods are responsible for night and peripheral (side) vision. Rods are more numerous than cones and much more sensitive to light, but they do not register color or contribute to detailed central vision as the cones do. Rods are grouped mainly in the peripheral areas of the retina.

The eyeball is divided into two sections, each of which is filled with fluid. The pressure generated by these fluids fills out the eyeball and helps maintain its shape.

The front section (anterior segment) extends from the inside of the cornea to the front surface of the lens. It is filled with a fluid called the aqueous humor, which nourishes the internal structures. The anterior segment is divided into two chambers. The front (anterior) chamber extends from the cornea to the iris. The back (posterior) chamber extends from the iris to the lens. Normally, the aqueous humor is produced in the posterior chamber, flows slowly through the pupil into the anterior chamber, and then drains out of the eyeball through outflow channels located where the iris meets the cornea.

The back section (posterior segment) extends from the back surface of the lens to the retina. It contains a jellylike fluid called the vitreous humor.

Tracing the Visual Pathways

Nerve signals travel from each eye along the corresponding optic nerve and other nerve fibers (called the visual pathway) to the back of the brain, where vision is sensed and interpreted. The two optic nerves meet at the optic chiasm, which is an area behind the eyes immediately in front of the pituitary gland and just below the front portion of the brain (cerebrum). There, the optic nerve from each eye divides, and half of the nerve fibers from each side cross to the other side and continue to the back of the brain. Thus, the right side of the brain receives information through both optic nerves for the left field of vision, and the left side of the brain receives information through both optic nerves for the right field of vision. The middle of these fields of vision overlaps. It is seen by both eyes (called binocular vision).

An object is seen from slightly different angles by each eye so the information the brain receives from each eye is different, although it overlaps. The brain integrates the information to produce a complete picture.

Describe the structure of vertebrate eye.

All vertebrates possess complex camera eyes. A camera-type eye is essentially a light-tight chamber with a lens system at the front, which focuses an image of the visual field onto a light-sensitive surface (the retina) at the back (Figure 2.27).

Figure 2.27 Internal Anatomy of the Human eye

The spherical eyeball is built of three layers: (1) a tough outer white sclera, or sclerotic, for support and protection; (2) a middle choroid coat, containing blood vessels for nourishment; and (3) the light-sensitive retina.

The cornea is a transparent anterior modification of the sclera. A circular, pigmented curtain, the iris, regulates the size of the light opening, the pupil. Just behind the iris is the lens, a transparent, elastic oval disc that, with the aid of ciliary muscles, can alter its curvature and bend light rays to focus an image on the retina. In terrestrial vertebrates the cornea actually does most of the bending of light rays, whereas the lens adjusts focus for near and far objects. Between cornea and lens is an outer chamber filled with watery aqueous humor; between lens and retina is a much larger inner chamber filled with viscous vitreous humor (Fig. 2.27).

The retina is composed of several cell layers. The outermost layer, closest to the sclera, consists of pigment cells. Adjacent to this layer are the photoreceptors, rods and cones. Approximately 125 million rods and 1 million cones are present in each human eye. Cones are primarily concerned with color vision in ample light; rods, with colorless vision in dim light. Next is a network of intermediate neurons (bipolar, horizontal, and amacrine cells) that process and relay visual information from the photoreceptors to the ganglion cells, whose axons form the optic nerve. The network permits much convergence, especially for rods. Information from several hundred rods may converge on a single ganglion cell, an adaptation that greatly increases the effectiveness of rods in dim light. Cones show very little convergence. By coordinating activities between different ganglion cells and adjusting the sensitivities of bipolar cells, horizontal and amacrine cells improve the overall contrast and quality of the visual image.

The fovea centralis or fovea, the region of keenest vision, is located in the center of the retina, in direct line with the center of the lens and cornea. It contains only cones, a vertebrate specialization for diurnal (daytime) vision. The acuity of an animal’s eyes depends on the density of cones in the fovea. The human fovea and that of a lion contain approximately 150,000 cones per square millimeter. But many water and field birds have up to 1 million cones per square millimeter. Their eyes are as good as our eyes would be if aided by eight-power binoculars.

At the peripheral parts of the retina only rods are found. Rods are high-sensitivity receptors for dim light. At night, the cone-filled fovea is unresponsive to low levels of light, and we functionally become color-blind.

Humans and squids evolved the same eyes using the same genes


Eyes and wings are among the most stunning innovations evolution has created. Remarkably, these features have evolved multiple times in different lineages of animals. For instance, the ancestors of birds and the ancestors of bats evolved wings independently, in an example of convergent evolution. The same happened for the eyes of squid and humans. Exactly how such convergent evolution arises is not always clear.

In a new study, published in Nature's Scientific Reports, researchers have found that, despite belonging to completely different lineages, humans and squid evolved their eyes through tweaks to the same gene.

Like all organs, the eye is the product of many genes working together. The majority of those genes provide information about how to make part of the eye. For example, one gene provides information to construct a light-sensitive pigment. Another gene provides information to make a lens.

Most of the genes involved in making the eye read like a parts list – this gene makes this, and that gene makes that. But some genes orchestrate the construction of the eye. Rather than providing instructions to make an eye part, these genes provide information about where and when parts need to be constructed and assembled. In keeping with their role in controlling the process of eye formation, these genes are called "master control genes".

The most important of master control genes implicated in making eyes is called Pax6. The ancestral Pax6 gene probably orchestrated the formation of a very simple eye – merely a collection of light-sensing cells working together to inform a primitive organism of when it was out in the open versus in the dark, or in the shade.


Today the legacy of that early Pax6 gene lives on in an incredible diversity of organisms, from birds and bees, to shellfish and whales, from squid to you and me. This means the Pax6 gene predates the evolutionary diversification of these lineages – during the Cambrian period, some 500m years ago.

The Pax6 gene now directs the formation of an amazing diversity of eye types. Beyond the simple eye, it is responsible for insects' compound eye, which uses a group of many light-sensing parts to construct a full image. It is also responsible for the type of eye we share with our vertebrate kin: the camera eye, an enclosed structure with an iris and lens, a liquid interior, and an image-sensing retina.

In order to create such an elaborate structure, the activities Pax6 controlled became more complex. To accommodate this, evolution increased the number of instructions that arose from a single Pax6 gene.

Like all genes, the Pax6 gene is an instruction written in DNA code. In order for the code to work, the DNA needs to be read and then copied into a different kind of code. The other code is called RNA.

RNA code is interesting in that it can be edited. One kind of editing, called splicing, removes a piece from the middle of the code and stitches the two ends together. The marvel of splicing is that it can be used to produce two different kinds of instructions from the same piece of RNA code. RNA made from the Pax6 gene can be spliced in just such a manner. As a consequence, two different kinds of instructions can be generated from the same Pax6 RNA.
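The splice-and-rejoin idea can be illustrated with a toy script. The exon names and sequences here are invented for illustration; they are not the real Pax6 transcript:

```python
# Toy sketch of alternative splicing: the same pre-mRNA yields two
# different mature RNAs depending on whether the middle exon is kept
# or skipped. Sequences are hypothetical.

exons = {"exon1": "AUGGCU", "exon2": "GAAUCC", "exon3": "UGGUAA"}

def splice(keep):
    """Join the kept exons in order, discarding everything else."""
    return "".join(exons[name] for name in keep)

isoform_a = splice(["exon1", "exon2", "exon3"])  # exon 2 retained
isoform_b = splice(["exon1", "exon3"])           # exon 2 skipped
print(isoform_a)  # AUGGCUGAAUCCUGGUAA
print(isoform_b)  # AUGGCUUGGUAA
```

Two different "instructions" from one stretch of RNA code, which is the trick the study found at work in the cephalopod eye.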

In the new study, Atsushi Ogura at the Nagahama Institute of Bio-Science and Technology and colleagues found that Pax6 RNA splicing has been used to create a camera eye in a surprising lineage. It occurs in the lineage that includes squid, cuttlefish, and octopus – the cephalopods.

Cephalopods have a camera eye with the same features as the vertebrate camera eye. Importantly, the cephalopod camera eye arose completely independently from ours. The last common ancestor of cephalopods and vertebrates existed more than 500m years ago.

Pax6 RNA splicing in cephalopods is a wonderful demonstration of how evolution fashions equivalent solutions via entirely different routes. Using analogous structures, evolution can produce remarkable innovations.

Talk Overview

Light microscopes use lenses. This talk introduces the basic properties of light, how light interacts with matter, the principles behind refractive lenses, and how lenses form (magnified) images.


  1. A light ray travels through water and hits a layer of glass at a right angle. What fraction of the light ray will be reflected at this interface (assume a refractive index of 1.33 for water and 1.52 for glass)?
    1. 4%
    2. 0.9%
    3. 0.45%
    4. 0.1%
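Question 1 can be checked with the Fresnel reflectance formula for normal incidence:

```python
# Fresnel reflectance at normal incidence: R = ((n1 - n2) / (n1 + n2))**2.

def reflectance_normal(n1, n2):
    """Fraction of light reflected at a flat interface, normal incidence."""
    return ((n1 - n2) / (n1 + n2)) ** 2

R = reflectance_normal(1.33, 1.52)  # water -> glass
print(f"{R * 100:.2f}% of the light is reflected")
```

This evaluates to roughly 0.44%, i.e. the 0.45% option.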



    The Bionic Eye

    Various Researchers
    Oct 1, 2014

    See full infographic: JPG | PDF © VIKTOR KOEN

    In 1755, French physician and scientist Charles Leroy discharged the static electricity from a Leyden jar (a precursor of modern-day capacitors) into a blind patient's body using two wires, one tightened around the head just above the eyes and the other around the leg. The patient, who had been blind for three months as a result of a high fever, described the experience as a flame passing downwards in front of his eyes. This was the first time an electrical device, serving as a rudimentary prosthesis, successfully restored even a flicker of visual perception.

    More than 250 years later, blindness is still one of the most debilitating sensory impairments, affecting close to 40 million people worldwide. Many of these patients can be efficiently treated with surgery or medication, but some pathologies cannot be corrected with existing treatments. In particular, when the light-receiving photoreceptor cells degenerate, no current drug or surgery can restore vision; prosthetic devices aim to fill this gap.

    In a healthy retina, photoreceptor cells—the rods and cones—convert light into electrical and chemical signals that propagate through the network of retinal neurons down to the ganglion cells, whose axons form the optic nerve and transmit the visual signal to the brain. (See illustration.) Prosthetic devices work at different levels downstream from the initial reception and biochemical conversion of incoming light photons by the pigments of photoreceptor rods and cones at the back of the retina. Implants can stimulate the bipolar cells directly downstream of the photoreceptors, for example, or the ganglion cells that form the optic nerve. Alternatively, for pathologies such as glaucoma or head trauma that compromise the optic nerve’s ability to link the retina to the visual centers of the brain, prostheses have been designed to stimulate the visual system at the level of the brain itself. (See illustration.)

    While brain prostheses have yet to be tested in people, clinical results with retinal prostheses are demonstrating that the implants can enable blind patients to locate and recognize objects, orient themselves in an unfamiliar environment, and even perform some reading tasks. But the field is young, and major improvements are still necessary to enable highly functional restoration of sight.

    Henri Lorach is currently a visiting researcher at Stanford University, where he focuses on prosthetic vision and retinal signal processing.

    Substitutes for Lost Photoreceptors

    SEEING IN PIXELS: Photovoltaic arrays of pixels can be implanted on top of the retinal pigment epithelium (shown here in a rat eye, right), where they stimulate activity in the retinal neurons downstream of damaged photoreceptors. PALANKER LAB, STANFORD UNIVERSITY

    By Daniel Palanker

    In the subretinal approach to visual prosthetics, electrodes are placed between the retinal pigment epithelium (RPE) and the retina. (See illustration.) There, they stimulate the nonspiking inner retinal neurons—bipolar, horizontal, and amacrine cells—which then transmit neural signals down the retinal network to the retinal ganglion cells (RGCs) that propagate to the brain via the optic nerve. Stimulating the retinal network helps preserve some aspects of the retina's natural signal processing, such as the "flicker fusion" that allows us to see video as smooth motion even though it is composed of frames with static images; adaptation to constant stimulation; and the nonlinear integration of signals as they flow through the retinal network, a key aspect of high spatial resolution. Electrical pulses lasting several milliseconds provide selective stimulation of the inner retinal neurons and avoid direct activation of the ganglion cells and their axons, which would otherwise considerably limit patients' ability to interpret the spatial layout of a visual scene.

    In the subretinal approach to visual pros­thetics, electrodes are placed between the retinal pigment epithelium and the retina, where they stimulate the nonspiking inner retinal neurons.

    The Boston Retinal Implant Project, a multidisciplinary team of scientists, engineers, and clinicians at research institutions across the U.S., is developing a retinal prosthesis that transmits information from a camera mounted on eyeglasses to a receiving antenna implanted under the skin around the eye using radiofrequency telemetry—technology similar to radio broadcast. The decoded signal is then delivered to an implanted subretinal electrode array via a cable that penetrates into the eye. The information delivered to the retina by this device is not related to direction of gaze, so to survey a scene a patient must move his head, instead of just his eyes.

    The Alpha IMS subretinal implant, developed by Retina Implant AG in Reutlingen, Germany, rectifies this problem by including a subretinal camera, which converts light in each pixel into electrical currents. This device has been successfully tested in patients with advanced retinitis pigmentosa and was recently approved for experimental clinical use in Europe. Visual acuity with this system is rather limited: most patients test no better than 20/1000, except for one patient who reached 20/550. 1 The Alpha IMS system also needs a bulky implanted power supply with cables that cross the sclera and requires complex surgery, with associated risk of complications.

    STIMULATING ARRAY: This prototype suprachoroidal array, which is implanted behind the choroid, can be larger than prostheses inserted in front of or behind the retina. COURTESY OF NESTLAIR PHOTOGRAPHY

    To overcome these challenges, my colleagues and I have developed a wireless photovoltaic subretinal prosthesis, powered by pulsed light. Our system includes a pocket computer that processes the images captured by a miniature video camera mounted on video goggles, which project these images into the eye and onto a subretinally implanted photodiode array. Photodiodes in each pixel convert this light into pulsed current to stimulate the nearby inner retinal neurons. This method for delivering the visual information is completely wireless, and it preserves the natural link between ocular movement and image perception.

    Our system uses invisible near-infrared (NIR, 880–915 nm) wavelengths to avoid the perception of bright light by the remaining functional photoreceptors. It has been shown to safely elicit and modulate retinal responses in normally sighted rats and in animals blinded by retinal degeneration. 2 Arrays with 70 micrometer pixels restored visual acuity in blind rats to half the natural level, corresponding to 20/250 acuity in humans. Based on stimulation thresholds observed in these studies, we anticipate that pixel size could be reduced by a factor of two, improving visual acuity even further. Ease of implantation and tiling of these wireless arrays to cover a wide visual field, combined with their high resolution, opens the door to highly functional restoration of sight. We are commercially developing this system in collaboration with the French company Pixium Vision, and clinical trials are slated to commence in 2016.
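    A back-of-the-envelope reading of those numbers (my own extrapolation, not a claim by the authors): if the minimum resolvable detail scales linearly with pixel pitch, halving the 70 micrometer pixels would roughly halve the Snellen denominator.

```python
# Assumed linear scaling of acuity with pixel pitch, anchored to the
# reported "70 um pixels -> 20/250 human-equivalent acuity" result.

def snellen_denominator(pixel_um, ref_pixel_um=70.0, ref_denom=250.0):
    """Snellen denominator under the linear-scaling assumption."""
    return ref_denom * pixel_um / ref_pixel_um

print(f"70 um pixels -> 20/{snellen_denominator(70.0):.0f}")
print(f"35 um pixels -> 20/{snellen_denominator(35.0):.0f}")
```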

    FOLLOW THE LIGHT: A blind patient navigates an obstacle course without the assistance of her guide dog, thanks to a head-mounted camera and a backpack computer, which gather and process visual information before delivering a representation of the visual scene via her suprachoroidal retinal prosthesis. COURTESY OF NESTLAIR PHOTOGRAPHY

    Fabio Benfenati of the Italian Institute of Technology in Genoa and Guglielmo Lanzani at the institute's Center for Nanoscience and Technology in Milan are also pursuing the subretinal approach to visual prostheses, developing a device based on organic polymers that could simplify implant fabrication. 3 So far, subretinal light-sensitive implants appear to be a promising approach to restoring sight to the blind.

    Daniel Palanker is a professor in the Department of Ophthalmology and Hansen Experimental Physics Laboratory at Stanford University.

    1. K. Stingl et al., “Functional outcome in subretinal electronic implants depends on foveal eccentricity,” Invest Ophthalmol Vis Sci, 54:7658-65, 2013.
    2. Y. Mandel et al., “Cortical responses elicited by photovoltaic subretinal prostheses exhibit similarities to visually evoked potentials,” Nat Commun, 4:1980, 2013.
    3. D. Ghezzi et al., “A polymer optoelectronic interface restores light sensitivity in blind rat retinas,” Nat Photonics, 7:400-06, 2013.

    Behind the Eye

    NEW SIGHT: A recipient of a prototype suprachoroidal prosthesis tests the device with Bionic Vision Australia (BVA) researchers. COURTESY OF BIONIC VISION AUSTRALIA'S BIONIC EYE PROJECT

    By Lauren Ayton and David Nayagam

    Subretinal prostheses implanted between the retina and the RPE, along with epiretinal implants that sit on the surface of the retina (see below), have shown good results in restoring some visual perception to patients with profound vision loss. However, such devices require technically challenging surgeries, and the site of implantation limits the potential size of these devices. Epiretinal and subretinal prostheses also face challenges with stability and the occurrence of adverse intraocular events, such as infection or retinal detachment. Due to these issues, researchers have been investigating a less invasive and more stable implant location: between the vascular choroid and the outer sclera. (See illustration.)

    Like subretinal prostheses, suprachoroidal implants utilize the bipolar cells and the retinal network down to the ganglion cells, which process the visual information before relaying it to the brain. But devices implanted in this suprachoroidal location can be larger than those implanted directly above or below the retina, allowing them to cover a wider visual field, ideal for navigation purposes. In addition, suprachoroidal electrode arrays do not breach the retina, making for a simpler surgical procedure that should reduce the chance of adverse events and can even permit the device to be removed or replaced with minimal damage to the surrounding tissues.

    Early engineering work on suprachoroidal device design began in the 1990s with research performed independently at Osaka University in Japan 1 and the Nano Bioelectronics and Systems Research Center of Seoul National University in South Korea. 2 Both these groups have shown proof of concept in bench testing and preclinical work, and the Japanese group has gone on to human clinical trials with promising results. 3 Subsequently, a South Korean collaboration with the University of New South Wales in Australia continued suprachoroidal device development.

    More recently, our groups, the Bionics Institute and the Centre for Eye Research Australia, working as part of the Bionic Vision Australia (BVA) partnership, ran a series of preclinical studies between 2009 and 2012. 4 These studies demonstrated the safety and efficacy of a prototype suprachoroidal implant, made up of a silicone carrier with 33 platinum disc-shaped electrodes that can be activated in various combinations to elicit the perception of rudimentary patterns, much like pixels on a screen. Two years ago, BVA commenced a pilot trial, in which researchers implanted the prototype in the suprachoroidal space of three end-stage retinitis pigmentosa patients who were barely able to perceive light. The electrode array was joined to a titanium connector affixed to the skull behind the ear, permitting neurostimulation and electrode monitoring without the need for any implanted electronics. 5 In all three patients, the device proved stable and effective, providing enough visual perception to better localize light, recognize basic shapes, orient in a room, and walk through mobility mazes with reduced collisions. 6 Preparation is underway for future clinical trials, which will provide subjects with a fully implantable device with twice the number of electrodes.

    Suprachoroidal prostheses can be larger than those implanted directly above or below the retina, allowing them to cover a wider visual field, ideal for navigation purposes.

    Meanwhile, the Osaka University group, working with the Japanese company NIDEK, has been developing an intrascleral prosthetic device, which, unlike the Korean and BVA devices, is implanted in between the layers of the sclera rather than in the suprachoroidal space. In a clinical trial of this device, often referred to as suprachoroidal-transretinal stimulation (STS), two patients with advanced retinitis pigmentosa showed improvement in spatial resolution and visual acuity over a four-week period following implantation. 3

    Future work will be required to fully investigate the difference in visual perception provided by devices implanted in the various locations in the eye, but the initial signs are promising that suprachoroidal stimulation is a safe and viable clinical option for patients with certain degenerative retinal diseases.

    Lauren Ayton is a research fellow and the bionic eye clinical program leader at the University of Melbourne’s Centre for Eye Research Australia. David Nayagam is a research fellow and the bionic eye chronic preclinical study leader at the Bionics Institute in East Melbourne and an honorary research fellow at the University of Melbourne.

    1. H. Sakaguchi et al., “Transretinal electrical stimulation with a suprachoroidal multichannel electrode in rabbit eyes,” Jpn J Ophthalmol, 48:256-61, 2004.
    2. J.A. Zhou et al., “A suprachoroidal electrical retinal stimulator design for long-term animal experiments and in vivo assessment of its feasibility and biocompatibility in rabbits,” J Biomed Biotechnol, 2008:547428, 2008.
    3. T. Fujikado et al., “Testing of semichronically implanted retinal prosthesis by suprachoroidal-transretinal stimulation in patients with retinitis pigmentosa,” Invest Ophthalmol Vis Sci, 52:4726-33, 2011.
    4. D.A.X. Nayagam et al., “Chronic electrical stimulation with a suprachoroidal retinal prosthesis: a preclinical safety and efficacy study,” PLOS ONE, 9:e97182, 2014.
    5. A.L. Saunders et al., “Development of a surgical procedure for implantation of a prototype suprachoroidal retinal prosthesis,” Clin & Exp Ophthalmol, 10.1111/ceo.12287, 2014.
    6. M.N. Shivdasani et al., “Factors affecting perceptual thresholds in a suprachoroidal retinal prosthesis,” Invest Ophthalmol Vis Sci, in press.

    Shortcutting the Retina

    TINY IMPLANTS: The Argus II retinal implant, which was approved for sale in Europe in 2011 and in the U.S. in 2012, consists of a 3 mm x 5 mm 60-electrode array (shown here) and an external camera and video-processing unit. Users of this implant are able to perceive contrasts between light and dark areas. © PHILIPPE PSAILA/SCIENCE SOURCE

    By Mark Humayun, James Weiland, and Steven Walston

    Bypassing upstream retinal processing, researchers have developed so-called epiretinal devices that are placed on the anterior surface of the retina, where they stimulate the ganglion cells that are the output neurons of the eye. This strategy targets the last cell layer of the retinal network, so it works regardless of the state of the upstream neurons. (See illustration.)

    In 2011, Second Sight obtained approval from the European Union to market its epiretinal device, the Argus II Visual Prosthesis System, which allowed clinical trial subjects who had been blind for several years to recover some visual perception such as basic shape recognition and, occasionally, reading ability. The following year, the FDA approved the device, which uses a glasses-mounted camera to capture visual scenes and wirelessly transmits this information as electrical stimulation patterns to a 6 x 10 microelectrode array. The array is surgically placed in the macular region, responsible in a healthy retina for high-acuity vision, and covers an area of approximately 20° of visual space.
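    To get a feel for the resolution such an array can deliver, here is a rough sampling estimate. The assumptions are mine, not the authors': the roughly 20 degree span covers the 10-electrode dimension of the 6 x 10 array, and 20/20 vision resolves about 1/60 of a degree.

```python
# Rough angular-sampling estimate for a 6 x 10 epiretinal array
# covering ~20 degrees of visual space (assumed along the 10-electrode
# dimension; normal acuity taken as ~1/60 degree).

SPAN_DEG = 20.0
ELECTRODES_ACROSS = 10

deg_per_electrode = SPAN_DEG / ELECTRODES_ACROSS
coarseness = deg_per_electrode / (1.0 / 60.0)
print(f"~{deg_per_electrode:.0f} degrees per electrode, "
      f"about {coarseness:.0f}x coarser than 20/20 vision")
```

Even this crude estimate shows why the device conveys contrast and coarse shape rather than detailed imagery.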

    A clinical trial showed that the 30 patients who received the device were able to locate a high-contrast square on a computer monitor more accurately, and, when asked to track a moving high-contrast bar, roughly half were able to discriminate the direction of the bar’s movement better than without the system.1 The increased visual acuity has also enabled patients to read large letters, albeit at a slow rate, and has improved the patients’ mobility.2 With the availability of the Argus II, patients with severe retinitis pigmentosa have the first treatment that can actually improve vision. To date, the system has been commercially implanted in more than 50 patients.

    Several other epiretinal prostheses have shown promise, though none have received regulatory approval. Between 2003 and 2007, Intelligent Medical Implants tested a temporarily implanted, 49-electrode prototype device in eight patients, who reported seeing spots of light when electrodes were activated. Most of these prototype devices were only implanted for a few months, however, and with no integrated camera, patients could not activate the device outside the clinic, limiting the evaluation of the prosthesis’s efficacy. This group has reformed as Pixium Vision, the company currently collaborating with Daniel Palanker’s group at Stanford to develop a subretinal device, and has now developed a permanent epiretinal implant that is in clinical trials. The group is also planning trials of a 150-electrode device that it hopes will further improve visual resolution.

    Future developments in this area will aim to improve the spatial resolution of the stimulated vision, increase the field of view that can be perceived, and increase the number of electrodes. Smaller electrodes would activate fewer retinal ganglion cells, which would result in higher resolution. These strategies will be rigorously tested and, if successful, may enable retinal prostheses that provide an even better view of the world.

    Mark Humayun is Cornelius J. Pings Chair in Biomedical Sciences at the University of Southern California, where James Weiland is a professor of ophthalmology and biomedical engineering. Steven Walston is a graduate student in the Bioelectronic Research Lab at the university.

    1. M.S. Humayun et al., “Interim results from the international trial of Second Sight’s visual prosthesis,” Ophthalmology, 119:779-88, 2012.
    2. L. da Cruz et al., “The Argus II epiretinal prosthesis system allows letter and word reading and long-term function in patients with profound vision loss,” Br J Ophthalmol, 97:632-36, 2013.

    Into the Brain

    DIRECT TO BRAIN: Gennaris’s bionic-vision system—which includes an eyeglasses-mounted camera that receives visual information (below), a small computerized vision processor (right), and 9 mm x 9 mm electronic tiles (far right) that are implanted into one hemisphere of the visual cortex at the back of the brain—is expected to enter human trials next year. COURTESY OF MVG

    By Collette Mann, Arthur Lowery, and Jeffrey V. Rosenfeld

    In addition to the neurons of the eye, researchers have also targeted the brain to stimulate artificial vision in humans. Early experimentation on epileptic patients with persistent seizures by the German neurologists and neurosurgeons Otfrid Förster in 1929 and Fedor Krause and Heinrich Schum in 1931 showed that electrical stimulation of the occipital pole, the most posterior part of each brain hemisphere, resulted in sensations of light flashes, termed phosphenes. By the mid-1950s, Americans John C. Button, an osteopath and later MD, and Tracy Putnam, then chief of neurosurgery at Cedars-Sinai Hospital in Los Angeles, had implanted stainless steel wires connected to a simple stimulator into the cortices of four people who were blind, and the patients subsequently reported seeing flashes of light.

    The first functional cortical visual prosthesis was produced in England in 1968, when Giles Brindley, a physiologist, and Walpole Lewin, a neurosurgeon, both at Cambridge University, implanted 80 surface electrodes embedded in a silicone cap in the right occipital cortex of a patient. Each electrode connected to one of 80 corresponding extracranial radio receivers, which generated simple, distinctly located phosphene shapes. The patient could point with her hand to their location in her visual field. When more than one electrode at a time was stimulated, simple patterns emerged.

    The subsequent aim of the late William H. Dobelle was to provide patients with visual images comprising discrete sets of phosphenes—in other words, artificial vision. Dobelle had begun studying electrical stimulation of the visual cortex in the late 1960s with sighted patients undergoing surgery to remove occipital lobe tumors. He subsequently implanted surface-electrode arrays, first temporarily, then permanently, in the visual cortices of several blind volunteers. However, it was not until the early 2000s that the technology became available to connect a miniature portable camera and computer to the electrodes for practical conversion of real-world sights into electrical signals. With the resultant cortical stimulation, a patient was able to recognize large-print letters and the outline of images.

    To elicit phosphenes, however, the surface electrodes used in these early cortical prostheses required large electrical currents (3 mA–12 mA), which risked triggering epileptic seizures or debilitating migraines. The devices also required external cables that penetrated the skull, risking infection. Today, with the use of wireless technology, a number of groups are aiming to improve cortical vision prostheses, hoping to provide benefit to millions of people with currently incurable blindness.

    One promising device from our group is the Gennaris bionic-vision system, which comprises a digital camera on a glasses frame. Images are transmitted to a small computerized vision processor that converts the picture into waveform patterns, which are then transmitted wirelessly to small electronic tiles implanted into the visual cortex at the back of the brain. Each tile houses 43 penetrating electrodes, and each electrode may generate a phosphene. The patterns of phosphenes will create 2-D outlines of relevant shapes in the central visual field. The device is in the preclinical stage, with the first human trials planned for next year, when we hope to implant four to six tiles per patient to stimulate patterns of several hundred phosphenes that patients can use to navigate the environment, identify objects in front of them, detect movement, and possibly read large print.
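As an illustration only (the grid and mapping below are invented, not the Gennaris design), a vision processor might quantize a normalized 2-D outline onto a coarse grid of candidate phosphene sites, whose count is bounded by the number of implanted electrodes:

```python
TILES = 4        # implanted tiles (four to six planned per patient)
ELECTRODES = 43  # penetrating electrodes per tile

def phosphene_budget(tiles=TILES, per_tile=ELECTRODES):
    # upper bound on addressable phosphenes: one per electrode
    return tiles * per_tile

def outline_to_phosphenes(points, grid=13):
    """Quantize normalized (x, y) outline points in [0, 1) onto a
    coarse grid of candidate phosphene sites (13 x 13 = 169 cells,
    close to the 172-electrode budget of four tiles).  Returns the
    set of grid cells to activate."""
    return {(int(x * grid), int(y * grid)) for x, y in points}
```

The real mapping from camera pixels to electrodes would depend on where each electrode's phosphene actually appears in the patient's visual field, which must be measured per patient.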


    Other groups currently developing cortical visual prostheses include teams at the Illinois Institute of Technology, the University of Utah, the École Polytechnique de Montréal in Canada, and Miguel Hernández University in Spain. All these devices follow the same principle of inducing phosphenes that the patient can perceive. Many technical challenges must be overcome before such devices can be brought to the clinic, however, including the need to improve implantation techniques. Beyond patient safety, accurate and repeatable insertion of the device is essential for maximal benefit.

    Development of bionic vision devices is accelerating rapidly due to collaborative efforts using the latest silicon chip and electrode design, computer vision processing algorithms, and wireless technologies. We are optimistic that a range of practical, safe, and effective bionic vision devices will be available over the next decade and that blind individuals will have the ability to “see” their world once again.

    Collette Mann is the clinical program coordinator of the Monash Vision Group in Melbourne, Australia, where Arthur Lowery, a professor of electrical engineering, is the director. Jeffrey V. Rosenfeld is head of the Division of Clinical Sciences & Department of Surgery at the Central Clinical School at Monash University and director of the Department of Neurosurgery at Alfred Hospital, which is also in Melbourne.

    Structure of the Eye

    The structure of the eye is important to understand, as the eye is one of the major sensory organs of the human body. It is mainly responsible for vision, differentiation of colour, and maintenance of the body’s biological clock. The human eye can be compared to a camera, as both work by gathering, focusing, and transmitting light through a lens to create an image of an object.

    Functions of the Human Eye

    The human eyes are among the most complicated sense organs in the human body. From the muscles and tissues to the nerves and blood vessels, every part of the human eye is responsible for a certain action. Furthermore, contrary to popular belief, the eye is not perfectly spherical; instead, it consists of two segments fused together, with several muscles and tissues combining to form a roughly spherical structure. From an anatomical perspective, the human eye can be broadly divided into an external structure and an internal structure.

    A New Bionic Eye Could Give Robots and the Blind 20/20 Vision

    A bionic eye could restore sight to the blind and greatly improve robotic vision, but current visual sensors are a long way from the impressive attributes of nature’s design. Now researchers have found a way to mimic its structure and create an artificial eye that reproduces many of its capabilities.

    A key part of what makes the eye’s design so powerful is its shape, but this is also one of the hardest things to mimic. The concave shape of the retina—the photoreceptor-laden layer of tissue at the back of the eye—allows it to pick up much more of the light passing through the curved lens than a flat surface would. But replicating this curved sensor array has proven difficult.

    Most previous approaches have relied on fabricating photosensors on flat surfaces before folding them or transplanting them onto curved ones. The problem with this approach is that it limits the density of photosensors, and therefore the resolution of the bionic eye, because space needs to be left between sensors to allow the transformation from flat to curved.

    In a paper published last week in Nature, though, researchers from Hong Kong University of Science and Technology devised a way to build photosensors directly into a hemispherical artificial retina. This enabled them to create a device that can mimic the wide field of view, responsiveness, and resolution of the human eye.

    “The structural mimicry of Gu and colleagues’ artificial eye is certainly impressive, but what makes it truly stand out from previously reported devices is that many of its sensory capabilities compare favorably with those of its natural counterpart,” writes Hongrui Jiang, an engineer at the University of Wisconsin-Madison, in a perspective in Nature.

    Key to the breakthrough was an ingenious way of implanting photosensors into a dome-shaped artificial retina. The team created a hemisphere of aluminum oxide peppered with densely packed nanoscale pores. They then used vapor deposition to grow nanowires inside these pores, made from perovskite, a type of photosensitive compound used in solar cells.

    These nanowires act as the artificial equivalent of photoreceptors. When light falls on them, they transmit electrical signals that are picked up by liquid metal wires attached to the back of the retina. The researchers created another hemisphere made out of aluminum with a lens in the center to act as the front of the eye, and filled the space between it and the retina with an ionic liquid designed to mimic the vitreous humor, the fluid that makes up the bulk of the human eye.

    The researchers then hooked up the bionic eye to a computer and demonstrated that it could recognize a series of letters. While the artificial eye couldn’t quite achieve the 130-degree field of view of a human eye, it managed 100 degrees, which is a considerable improvement over the roughly 70 degrees a flat sensor can achieve.
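The flat-sensor figure can be sanity-checked with the pinhole camera model, in which the horizontal field of view is 2·atan(w / 2f) for sensor width w and focal length f. The geometry below is a simplifying assumption on our part, not the paper's optical analysis:

```python
import math

def flat_sensor_fov_deg(sensor_width, focal_length):
    """Field of view of a flat sensor behind an ideal thin lens
    (pinhole approximation): FOV = 2 * atan(w / 2f)."""
    return math.degrees(2 * math.atan(sensor_width / (2 * focal_length)))

# A sensor 1.4x as wide as the focal length sees about 70 degrees,
# consistent with the flat-sensor figure quoted above; widening the
# sensor (or curving it around the focal point) raises the FOV.
```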

    In other areas, though, the approach has the potential to improve on biological eyes. The researchers discovered that the nanowire photodetectors were actually considerably more responsive than human photoreceptors. They were activated in as little as 19.2 milliseconds and recovered to a point where they could be activated again in 23.9 milliseconds. Response and recovery times in human photoreceptors range from 40 to 150 milliseconds.
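Treating response plus recovery as one refractory cycle gives a rough upper bound on how often a detector can fire. This back-of-the-envelope model is our simplification, not a measurement from the paper:

```python
def max_activation_rate_hz(response_ms, recovery_ms):
    """Crude upper bound on detector firing rate, assuming activation
    and recovery must both complete before the next event."""
    return 1000.0 / (response_ms + recovery_ms)

nanowire = max_activation_rate_hz(19.2, 23.9)  # ~23 Hz
human = max_activation_rate_hz(40, 150)        # ~5 Hz at the slow end
```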

    The density of nanowires in the artificial retina is also more than 10 times that of photoreceptors in the human eye, suggesting that the technology could ultimately achieve far higher resolution than nature.

    The big limitation at the moment is wiring up these photosensors. The liquid metal connections are currently two orders of magnitude wider than the nanowires, so each one connects to many photosensors, and it’s only possible to attach 100 wires to the back of the retina. That means that despite the density of photosensors, the eye has a resolution of only 100 pixels.
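The readout bottleneck described above can be expressed as a simple min() relationship. The helper below is a hypothetical sketch of the pooling arithmetic, not code from the study:

```python
def readout_resolution(nanowires, wires):
    """Each output wire pools every nanowire it touches into a single
    signal, so the pixel count equals the wire count when wires are
    scarcer, regardless of how dense the photosensors are.  Returns
    (pixels, nanowires pooled into each pixel)."""
    pixels = min(nanowires, wires)
    pooled_per_pixel = nanowires // pixels
    return pixels, pooled_per_pixel
```

With on the order of a million nanowires but only 100 attachable wires, the device is wire-limited: raising resolution requires finer interconnects, not more photosensors.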

    The researchers did devise a way to use magnetic fields to connect nickel microneedles to just three nanowires at a time, but the process is a complicated manual one that would be impossible to scale up to the millions of nanowires present in the artificial retina. Still, the device represents a promising proof-of-concept that suggests that we may soon be able to replicate and even better one of nature’s most exquisite designs.

    “Given these advances, it seems feasible that we might witness the wide use of artificial and bionic eyes in daily life within the next decade,” writes Jiang.

    Extended data

    Extended Data Fig. 1 Characterization of the proteome of wild-type mice.

    (a) Graphs show different measurements from C3HeB/FeJ (squares) and C57BL/6J (circles) mice. Top, body weight (filled symbols) and lens wet weight (open symbols). Middle and bottom, the amounts of extractable water-soluble and water-insoluble protein per lens wet weight were determined by the Bradford protein assay, using BSA as a standard. Data are given as mean and s.d. per lens pair (left and right eye) of one animal. Please see Supplementary Tables 2 and 3 for exact sample sizes. (b) 2-DE map of the water-soluble fraction of lenses from newborn wild-type mice. For densitometric analysis (n = 2 individual animals), only the low-molecular-weight crystallin region was considered (see excerpt). The fractions of α- (gold), β- (orange) and γ-crystallins (blue) were determined up to 12 months of age, for the water-soluble (solid bars) and water-insoluble (shaded bars) fractions. Please see Supplementary Data 1 for numerical values. Scheimpflug images of C57BL/6J lenses at 3, 6 and 12 months are shown as a reference for wild-type behavior.

    Extended Data Fig. 2 Changes in the proportion of crystallin isoforms during aging in wild-type and mutant mice.

    Graphs show relative proportions of single crystallin isoforms, plotted against age, in the water-soluble or water-insoluble fractions from whole lens extracts (C3HeB/FeJ (black squares), αA mutant (red circles), γD mutant (red triangles); and C57BL/6J (black squares), βA2 mutant (red circles)). Data are shown as mean and s.d. for n = 2 individual animals. Data behind the graphs are in Supplementary Data 1. The sum of the isoform proportions equals 1, and the analyses were performed for both the water-soluble and the water-insoluble fraction.
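A minimal sketch of the normalization implied by the legend (band intensities for one fraction divided by their sum, so isoform proportions add to 1); the function name and input format are our assumptions, not the authors' analysis code:

```python
def to_proportions(intensities):
    """Normalize densitometric band intensities for one fraction so
    that the isoform proportions sum to 1, as in the plots above."""
    total = sum(intensities)
    if total == 0:
        raise ValueError("fraction has no signal")
    return [v / total for v in intensities]
```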

    Extended Data Fig. 3 Characterization of the protein complexes found in the lens extracts from wild-type mice.

    (a) Top, SEC-HPLC profiles of water-soluble proteins in whole lens extracts for both wild-type strains. The elution profiles of samples from two mice (a pair of lenses each) are shown (as solid and dashed lines). Bottom, the integrated signal was plotted against age to follow fractional changes over time; fractions are color-coded as shown on top. Data are means and s.d. from the two replicates shown on top. (b) Top, 2-DE and EM analyses to check the protein composition of the isolated HMW and αL fractions. Scale bar: 50 nm. Bottom, crystallin proportions were determined by densitometric analysis of the 2-DE gels (light grey: αA-crystallin, dark grey: αB-crystallin, orange: β-crystallins, blue: γ-crystallins). (c) SV-AUC profiles showing analyses of αL particles from two mice each, revealing a slight increase in their sedimentation coefficient (s(20,w)) during aging (grey and black lines). Error bars result from averaging consecutive scans during SV-AUC experiments. Apparent maxima of the g(s*) distribution show the most populated particle species. (d) Scattering profiles and Guinier plots of lens extracts from C3HeB/FeJ wild-type mice at different ages: 1 (black), 3 (grey) and 9 (red) months.

    Extended Data Fig. 4 2-DE of crystallin fractions from wild-type and mutant mice.

    After separation by SEC-HPLC, the individual fractions were checked for their protein composition using 2-DE. Note that the type of ampholyte used (Pharmalyte) caused a staining artifact, but this did not influence the IEF or the second-dimension separation of the proteins, as shown at the bottom right. For densitometric analysis, the background signal of the 2-DE gels was subtracted using the rolling-ball background correction with a radius of 100 px in ImageJ. For each individual fraction, one 2-DE experiment was performed.

    Extended Data Fig. 5 Cataract-associated crystallin mutants are thermodynamically destabilized.

    (a) Far-UV CD spectroscopy analyses of the secondary structure of recombinant wild-type (black) and mutant (red) crystallins. Due to low solubility, no spectra were recorded for βA2-S47P. (b) NanoDSF measurements, recording the optical density to detect protein aggregation. The V124E mutation did not change the aggregation propensity of αA-crystallin, but decreased the chaperone activity towards the model substrate L-MDH (inset). Data are shown as mean and s.d. from n = 3 independent samples, obtained with recombinant protein from the same batch. Wild-type crystallin (black) and crystallin mutant (red). Experiments were carried out in PBS, except for the βA2 mutant: due to the low stability of βA2-S47P, Tris buffer was used and L-arginine was added for the measurements. The measurements with wild-type βA2 are shown in black for PBS and in grey for the Tris/arginine buffer as a reference. (c) SEC-MALS/-HPLC and SV-AUC (inset) were employed to characterize the quaternary structure of wild-type (black) and mutant (red) crystallins. Note: due to the low stability of βA2-S47P, quantitative unfolding of the protein occurs during SV-AUC runs. Data shown represent a representative distribution from three independent samples, obtained with recombinant protein from the same batch. Error bars result from averaging consecutive scans during SV-AUC experiments. Apparent maxima of the g(s*) distribution show the most populated particle species.

    Extended Data Fig. 6 Characterization of the proteome of mutant mice.

    (a) Left, body weight (filled symbols) and lens wet weight (open symbols), per pair of eye lenses from one individual: αA mutant (squares), βA2 mutant (circles) and γD mutant (triangles). Middle and right, the amounts of water-soluble and water-insoluble protein were determined by the Bradford protein assay using BSA as a standard. Data are given as mean ± s.d. of biological replicates (animals or pairs of eye lenses; please see Supplementary Tables 2 and 3 for exact sample sizes). (b) Top, 2-DE map of the water-soluble fraction of lenses from newborn wild-type mice. Bottom, for densitometric analysis, only the low-molecular-weight crystallin region was considered (see excerpt). The fractions of α- (gold), β- (orange) and γ-crystallins (blue) were determined up to 12 months of age, for the water-soluble (solid bars) and water-insoluble (shaded bars) fractions. Please see Supplementary Data 1 for numerical values.

    Extended Data Fig. 7 Comparison of extraction efficiency using different buffers.

    2-DE maps comparing the eye lens proteome patterns obtained for the water-insoluble fractions of the three mutants studied, solubilized in the presence of 6 M urea or of 7 M urea plus 2 M thiourea. One experiment was performed per mutant, age and extraction buffer.