Among the main factors that influence IQ, and closely related to the main topic of this thread (remember? "Pixel Density"!), are the algorithms that do noise reduction (be they in camera or in some post-processing software).
Speaking in general here: noise reduction algorithms tend to reduce detail. Or to put it in more technical terms: those algorithms try to smooth out sample variations in brightness between adjacent pixels (ok, that is still a very simplified description of what is going on, but you get the gist of it).
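To make that trade-off concrete, here is a minimal sketch (my own toy example, not any camera's actual NR pipeline) using a plain 3x3 box blur: it flattens the noise in smooth areas, but it softens a sharp edge just the same.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(42)
clean = np.zeros((100, 100))
clean[:, 50:] = 1.0                        # a sharp edge = "detail"
noisy = clean + rng.normal(0, 0.2, clean.shape)

# 3x3 box blur: the crudest possible noise reduction
smoothed = uniform_filter(noisy, size=3)   # noise drops, but the edge softens too

print("noise std before:", noisy[:, :40].std().round(3))     # ~0.2 in the flat area
print("noise std after: ", smoothed[:, :40].std().round(3))  # ~0.07, but detail is gone too
```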
Now comes the interesting part, but first I have to set up my assumptions:
- consider two identically sized sensors (let's say FF/FX)
- built from the same technology incl. micro-lenses etc.
- sensor L is a low-density 9 MPix and sensor H is a high-density 36 MPix sensor
- assume AA-filters are perfectly matched to each sensor's resolving power
- assume shooting with a lens that easily outresolves the 36 MPix sensor and is perfectly focussed (don't look sceptical, you can buy one of those!)
- assume just for a moment (I'll come back to this later) that four photo-sites of sensor H together have the same photon full-well capacity as one photo-site of sensor L.
The big Q is: what will happen if you pixel-bin every four photo-sites of sensor H together by some clever noise reduction technique?
The resulting shot with sensor H should have the same resolution, the same noise, and the same dynamic range as the shot with sensor L (not processed with any noise-reduction technique).
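A quick back-of-the-envelope simulation (my sketch under the assumptions above, considering photon shot noise only and ignoring read noise) shows why the noise part holds: summing four Poisson-distributed photo-sites that each collect a quarter of the photons gives the same SNR as one big photo-site.

```python
import numpy as np

rng = np.random.default_rng(0)
full_well = 10000                              # photons per L photo-site (assumed)
shots = 100000                                 # simulated exposures

# Photon (shot) noise is Poisson-distributed.
pix_L = rng.poisson(full_well, shots)          # one big photo-site of sensor L
pix_H = rng.poisson(full_well / 4, (shots, 4)) # four small photo-sites of sensor H
binned_H = pix_H.sum(axis=1)                   # 2x2 pixel binning

print("SNR L:       ", round(pix_L.mean() / pix_L.std(), 1))
print("SNR H binned:", round(binned_H.mean() / binned_H.std(), 1))
# Both land near sqrt(10000) = 100, as the claim above predicts.
```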
If this is true, the higher-PD sensor H is much more flexible than sensor L: you can adjust resolution and noise through (post-)processing of the image to either match the output from sensor L, or just use the full high-res capabilities of H under ideal/bright conditions.
Well, does this sound familiar? Yes, it's exactly what many P&S cameras do, and even the large DSLRs do it: they crank up noise reduction (and thus lose detail and resolution) at higher ISO. So my train of thought must have some truth to it.
Unfortunately, though, my assumption "that four photo-sites of sensor H together have the same photon full-well capacity as one photo-site of sensor L" is not exactly true in real-world chip etching: with the mind-bogglingly small structures on a chip, there is a lot of space wasted between the photo-sites. Sensor L would have a pixel spacing of roughly 10 microns, sensor H of 5 microns. Now if you lose a fixed 2 microns per photo-site to borders, wiring etc., that would yield a net area of 64 square microns per photo-site of sensor L and only 9 square microns for H. Pixel-binning four cells from H together then gives you an effective 36-square-micron photo-site, which unfortunately is still only 56% of the area of one photo-site of L. So in this case you lose almost a full EV against the same-sized sensor with the lower PD.
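Spelled out as arithmetic (the 2-micron border loss is my assumed number, as above):

```python
from math import log2

border = 2                        # microns lost per photo-site (assumed)
pitch_L, pitch_H = 10, 5          # pixel spacing in microns

area_L = (pitch_L - border) ** 2  # 8 * 8 = 64 sq microns
area_H = (pitch_H - border) ** 2  # 3 * 3 =  9 sq microns
binned = 4 * area_H               # 36 sq microns after binning four cells

print(binned / area_L)            # 0.5625 -> ~56% of L's photo-site area
print(log2(area_L / binned))      # ~0.83 EV lost to the dead borders
```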
Well, all is not lost, because new manufacturing processes reduce the size of the "dead" structures on a chip, try to put the wiring on the backside, etc. So we can expect the losses from pixel binning / noise reduction versus the low-pixel-density sensors to get much smaller.
So will there be no Nikon D3 (=12MP) vs D3x (think 24-30MP) split in the future? Well, even with an ideal "borderless" sensor manufacturing process where the loss from binning is negligible, one problem remains: you need to process many more pixels from a D3x sensor than from a D3 sensor, and that in turn still costs more processor time and transfer time to memory. As long as those speed bumps exist, there will always be lower-density sport-shooter cameras with high fps and higher-density landscape-shooter cameras with lower fps!
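The fps argument is just division (illustrative numbers below, not actual camera specs; the pixels-per-second budget is my assumption):

```python
pixels_low  = 12e6            # D3-class sensor, ~12 MP
pixels_high = 24e6            # D3x-class sensor, think 24-30 MP
budget = 12e6 * 9             # assumed pipeline budget in pixels per second

print(budget / pixels_low)    # 9.0 fps for the low-density body
print(budget / pixels_high)   # 4.5 fps for the high-density body
```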
But if those speed barriers are removed, who knows...