What is its focal length? How do its field of view and depth of field compare to a standard 50mm lens?
The eye doesn't zoom, but does the brain do any of this for us? It seems like it does when we concentrate on something or when something attracts our attention.
At what speed can an object pass the eye before it appears blurred? I.e., what is the "shutter speed" equivalent?
How many "frames per second" can it process?
I believe the brain compensates for white balance; is that right? Does the eye have a natural "default" white balance setting which the brain then compensates for?
You've forgotten to ask about the main component of the human "camera": the central processing unit. This is where the eye leaves cameras in the dark ages(!). Look at the speed of the auto-focus function, for example; or the incredible face-recognition ability; or the in-built sound monitor that enables simultaneous auditory-visual tracking of the subject; or the rangefinder binocular function that also includes a 3D function for free.
Of course, one could count the sensory cells (about 120 million per eye) and say that this is the number of pixels in our image. That's a starting point for your question, but it's not very helpful, is it? Build a camera with 250 megapixels' worth of sensor, and we've improved on the eye? Not really. All we've done is resolve a scene into a near-continuous image.
I hope someone else tackles the physics elements in your question, because I think it'll be fascinating to see the comparison you ask for. However, if the eye looks a bit under-powered next to modern cameras (and that's quite possible) then I'd be inclined to think the fault is in the terms of reference, not in the eye.
I don't really think you can make a fair comparison as rod and cone distribution across the retina is not uniform and a lot of what we 'see' - particularly in our peripheral vision - is not really data but assumptions made by the brain to fill in the gaps.
Another aspect to consider is the variation in the way the sensors (rods and cones) are connected to the brain. Cones (red, green or blue vision for colour) have a 1:1 connectivity relationship where each cone has its own dedicated neurone going into the brain. In the case of rods, it is usual for many rods to be connected to one neurone. This makes them collectively more sensitive, as their generator potentials combine to make it easier to get over the threshold potential and fire an action potential. It also makes rod vision fuzzy, as the signal is delivered to the brain as a sort of average value.
In comparison, the photosites in a digital camera sensor are like cones in that each one is read out distinctly from every other.
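To make the pooling idea concrete, here is a toy simulation; every number in it is invented for illustration, not physiology. Summing many noisy rod signals onto one shared neurone lets a weak signal cross a firing threshold that a single receptor would almost never reach, at the cost of knowing which rod saw the light:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model of retinal wiring; all numbers are invented for illustration.
n_rods = 100      # rods converging on one shared neurone
signal = 0.1      # weak generator potential per receptor in dim light
noise_sd = 1.0    # noise per receptor
threshold = 3.0   # potential needed to trigger an action potential

trials = 10_000
noise = rng.normal(0.0, noise_sd, size=(trials, n_rods))

cone_like = signal + noise[:, 0]          # 1:1 wiring: one receptor, one neurone
rod_like = (signal + noise).sum(axis=1)   # pooled wiring: 100 receptors summed

print(f"cone-like wiring fires in {np.mean(cone_like > threshold):.1%} of trials")
print(f"pooled rod wiring fires in {np.mean(rod_like > threshold):.1%} of trials")
# The pooled pathway is far more sensitive, but it cannot say which rod
# was stimulated, hence the fuzziness described above.
```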
Similarly for frames per second. Human persistence of vision is often quoted as being equivalent to 30fps.
However, we see motion-blurred images, which our visual cortex can work with, as opposed to, say, an LCD, which presents discrete, non-motion-blurred frames.
I suspect that this is one reason why people often express a preference for frame rates of 60fps or higher. 60fps can still look inferior when compared to higher display rates (and with computer screens you've also got refresh rates, v-syncing, etc., to worry about).
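A crude way to see the difference between a discrete snapshot and eye-like temporal integration, as a toy sketch (the object path and frame timing below are arbitrary assumptions):

```python
import numpy as np

# Toy 1-D scene: a point object sweeps across pixels 0..10 during one
# 1/30 s frame. A snapshot freezes it at one instant; integrating over
# the whole frame window smears it, closer to what the retina delivers.
positions = np.linspace(0, 10, 8)  # object positions sampled within the frame

snapshot = np.zeros(100)
snapshot[int(positions[0])] = 1.0  # camera-style: one instant of the frame

integrated = np.zeros(100)
for p in positions:                # eye-style: average over the frame window
    integrated[int(p)] += 1.0 / len(positions)

print("snapshot lights up", np.count_nonzero(snapshot), "pixel(s)")
print("integration smears over", np.count_nonzero(integrated), "pixels")
```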
If you will forgive my saying so, your question is a bit greedy. If there is sufficient interest in the forum, it may yet prove a good idea to resubmit it piecemeal. Meanwhile, I would like to follow on from Pete's reply by elaborating on a vitally important aspect of our visual efficiency. It applies in various forms to every example of advanced vision that I have data on in the animal kingdom. Cameras, in contrast, generally capture and process an entire picture in parallel. (There are technical exceptions, some of them extremely ingenious, others merely incidental, but they are not relevant here.)
The aspect that I have in mind is that we tend to concentrate on a small fraction of the visual field at any one time. You may imagine, looking at scenery, the page of a book, or your computer screen, that you are looking at the whole thing. However, quite apart from some very sobering experiences involving accidents, invisible gorillas, camouflage and so on, many studies in experimental psychology show that we simply take for granted most of what we do not see properly. Conversely, we build up, fairly effectively, a far greater field of vision than the typical camera offers, but we do it piecemeal.
Some of this is simply the result of our having a very narrow field of sharp focus, and it is intriguingly like the effect that has comparatively recently been detected in the vision of jumping spiders.
Jumping spiders also have a very narrow field of sharp vision, in fact of any vision. For physical reasons they have only a tiny retina, and the spider builds up a picture of its surroundings, in particular of whatever it is concentrating on, by moving that bit of retina to intercept the part of the visual field that currently has its attention. One actually can see some of the action at the back of the eye when one looks from the front.
I always have liked jumping spiders anyway, but ever since I heard of this mechanism I have been drastically inhibited from harming one. It is an example of physiology that simply dwarfs the imagination and aesthetics of the typical plodding and uncreative products of science fiction films.
Just remember this: natural selection is smarter than you are!
Among the points you mentioned, you asked about focal length. In a healthy child the eye can focus on anything from infinity down to some 5 to 10 cm away. As one ages, presbyopia sets in, until one finds oneself reading newspapers with one's arms at full extension.
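As a rough illustration of what that adjustment amounts to optically, here is a minimal sketch using the thin-lens equation 1/f = 1/d_o + 1/d_i. Treating the eye as a single thin lens with a fixed ~17 mm image distance is of course an assumption; a real eye is a multi-element system:

```python
D_IMAGE = 0.017  # m, assumed lens-to-retina distance (a simplification)

def lens_power(object_distance_m: float) -> float:
    """Lens power in diopters needed to focus an object onto the retina,
    from the thin-lens equation 1/f = 1/d_o + 1/d_i."""
    return 1.0 / object_distance_m + 1.0 / D_IMAGE

far = lens_power(1e9)    # object effectively at infinity: ~59 diopters
near = lens_power(0.10)  # 10 cm near point: ~69 diopters
print(f"accommodation range: ~{near - far:.1f} diopters")  # ~10 D
```

Presbyopia is then the gradual loss of those ~10 diopters of adjustment as the lens stiffens.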
Unlike a digital still camera or a digital movie camera, the human eye does not have a shutter and does not build up a picture in frames. What the brain gets is a continuous stream of information from each individual cone or rod. The total data rate along the optic nerve from eye to brain is thought to be about 10Mbits per second. The density of the light receptors is not uniform across the field of view: there is one area of high density (the fovea) for gathering concentrated information, and one area, where the optic nerve connects, that has no receptors at all. When looking at a scene, the human eye moves around looking at different areas and the brain puts the whole thing together. The brain will also fill in missing bits to make sense of the whole scene, which is not the sort of thing that a camera does.
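To put that 10Mbit/s figure in perspective, here is a back-of-envelope comparison with the raw receptor output. The per-receptor update rate and bit depth below are deliberately conservative assumptions, just to show that heavy compression must already be happening in the retina:

```python
receptors = 120e6        # rods + cones per eye, the count quoted above
samples_per_sec = 10     # assumed effective update rate per receptor
bits_per_sample = 1      # deliberately conservative bit depth

raw_rate = receptors * samples_per_sec * bits_per_sample  # bits/s
optic_nerve_rate = 10e6                                   # bits/s, quoted above

print(f"raw receptor output: ~{raw_rate / 1e6:.0f} Mbit/s")
print(f"optic nerve: ~{optic_nerve_rate / 1e6:.0f} Mbit/s")
print(f"implied compression in the retina: ~{raw_rate / optic_nerve_rate:.0f}x")
```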
Now, if we look at our ability to resolve images on a printed card, we find that a camera would require about 74Mpixels to achieve the same. If we instead consider the human eye observing an everyday scene, with the eye moving around to take in the whole scene, we come to a requirement of around 576Mpixels.
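The 576Mpixel figure can be reproduced with a well-known back-of-envelope calculation; the 120-degree field and 0.3 arcminute resolution below are the assumptions doing all the work:

```python
field_deg = 120          # assumed field built up by eye movements, per axis
resolution_arcmin = 0.3  # assumed finest detail a good eye resolves

pixels_per_axis = field_deg * 60 / resolution_arcmin  # 24,000
total_pixels = pixels_per_axis ** 2

print(f"~{total_pixels / 1e6:.0f} megapixels")  # -> 576
```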
The ISO number for a dark-adapted human eye is around 800. Full dark adaptation takes about 30 minutes. Once adapted, you will find that the eye is integrating information over about 10 to 15 seconds. At low light levels the eye does not attempt to use colour, just black and white. The total range of brightness that the eye can see over is about 10^7 to 1, and the contrast ratio within a single scene is about 10^4, depending upon total brightness.
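Those ratios are easier to compare with cameras once converted into photographic stops (doublings of light); for reference, modern camera sensors are often quoted at around 12 to 14 stops:

```python
import math

total_range = 1e7     # overall range the eye adapts across, from above
scene_contrast = 1e4  # contrast handled at a single adaptation state

print(f"total adaptive range: ~{math.log2(total_range):.1f} stops")     # ~23.3
print(f"single-scene contrast: ~{math.log2(scene_contrast):.1f} stops") # ~13.3
```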
The colour depth is difficult to work out, but it is given as equivalent to 8 bit at best, and most likely 5 bit.
The focal length is about 25mm with an aperture of 7mm, making the eye roughly f/3.5.
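That f-number is simply the focal length divided by the aperture diameter, using the figures just quoted:

```python
focal_length_mm = 25  # figure quoted above
aperture_mm = 7       # fully dilated pupil, from above

print(f"f/{focal_length_mm / aperture_mm:.1f}")  # f/3.6, i.e. roughly f/3.5
```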
The eye does not zoom, but it does have one small area (the fovea) with a higher density of receptors than the rest, which can resolve finer detail.
The eye does not have a shutter and does not construct frames, but the response time of the receptors corresponds to a frame rate of about 15fps. This will give you an idea of blurring. Remember that the brain is able to process out a great deal of blurring.
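As a rough sketch of what that response time means for blur: the smear is just angular speed times integration time. The 30 deg/s object speed below is an arbitrary example; the 1/15 s window comes from the 15fps figure above:

```python
integration_time_s = 1 / 15    # from the ~15fps-equivalent response time
angular_speed_deg_s = 30       # assumed: object crossing the view at 30 deg/s

blur_deg = angular_speed_deg_s * integration_time_s
blur_arcmin = blur_deg * 60

print(f"smear: {blur_deg:.1f} deg ({blur_arcmin:.0f} arcmin)")
# ~2 degrees of smear, vastly coarser than the ~0.3 arcmin the fovea
# can resolve, which is why fast-moving objects look blurred.
```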
I wonder how we get the perception of 3D vision. The image (the pattern of light intensity variations) formed on the retina is 2D. The nerve impulses that correspond to these intensity variations are carried to the brain through the optic nerves for processing. I'm surprised that information about depth is encoded in these nerve impulses. How amazing it is! We visualize it as a 3D picture. I'm curious to know how the brain recognizes the depth information. Can you please help me in understanding this? Thank you once again!
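One large part of the answer is binocular disparity, the "rangefinder binocular function" mentioned earlier in the thread: the two eyes receive slightly shifted images, and simple triangulation turns the shift into distance. A minimal sketch, treating each eye as an ideal pinhole camera; the baseline and focal length are assumed illustrative values, and real depth perception uses many more cues (occlusion, perspective, motion parallax, and so on):

```python
baseline_m = 0.065      # assumed inter-pupil distance, ~6.5 cm
focal_length_m = 0.017  # assumed image-side focal length of the eye

def depth_from_disparity(disparity_m: float) -> float:
    """Distance Z from the retinal shift between left and right images,
    via standard stereo triangulation: Z = f * B / d."""
    return focal_length_m * baseline_m / disparity_m

# An object whose image shifts 0.1 mm between the two retinas:
print(f"{depth_from_disparity(0.0001):.2f} m away")  # ~11 m
```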