Interesting thread, which I missed seeing until today. Maybe I shouldn't weigh in, because I just haven't had the time to read the whole paper, but isn't there a noise issue? I did see a brief section claiming that signal-to-noise ratio isn't impacted because information from several sensor sites is added together, but that rather assumes sensor read noise falls as pixel size shrinks, and I don't think that's a good assumption.
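To make that worry concrete, here's a toy calculation (my own numbers, not from the paper) under the assumption that read noise per pixel stays roughly fixed as pixels shrink. Splitting the same light across N small pixels and summing them back leaves shot noise unchanged but stacks up N copies of the read noise:

```python
import math

def snr_summed(signal_e, n_pixels, read_noise_e):
    """SNR when the same total photon count is split across n_pixels and re-summed.

    signal_e: total signal in electrons (same light, just divided up)
    read_noise_e: read noise per pixel in electrons, assumed NOT to fall
    with pixel size -- the assumption I'm questioning
    """
    shot_noise = math.sqrt(signal_e)                  # depends only on total photons
    read_noise = read_noise_e * math.sqrt(n_pixels)   # independent noises add in quadrature
    return signal_e / math.sqrt(shot_noise**2 + read_noise**2)

# 10,000 electrons total, 3 e- read noise per pixel (hypothetical values)
print(snr_summed(10_000, 1, 3.0))    # one big pixel
print(snr_summed(10_000, 100, 3.0))  # 100 small pixels summed: noticeably lower SNR
```

So unless the small pixels genuinely read out more cleanly, the summed image comes out noisier, especially in the shadows where read noise dominates.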
On a more general note, one is asking the sensor to record not only intensity, as cameras currently do, but also direction. The sensor therefore has to record more information from the same number of electrons as the light would generate on a conventional sensor (assuming the base ISO is similar). My grasp of information theory may be pretty tenuous, but that says to me the final image must be noisier in some fashion. Maybe the noise is less intrusive than the chroma/luminance noise we all see from time to time, but we'll have to see.
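The same point in back-of-envelope form (again my own hypothetical numbers): if each output pixel's photon budget is divided among N directional samples, each sample sees only 1/N of the photons, so its shot-noise-limited SNR drops by a factor of sqrt(N):

```python
import math

def per_direction_snr(total_e, n_directions):
    """Shot-noise-limited SNR of each directional sample when one pixel's
    photon budget is split n_directions ways (read noise ignored for simplicity)."""
    e = total_e / n_directions       # electrons per directional sample
    return e / math.sqrt(e)          # shot-noise SNR = sqrt(electrons)

print(per_direction_snr(10_000, 1))   # 100.0 -- conventional pixel
print(per_direction_snr(10_000, 16))  # 25.0 -- 16 directions, 4x worse per sample
```

Adding the samples back together recovers the intensity SNR, but the directional information itself is carried by these much noisier sub-signals, which is where I'd expect the cost to show up.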
If it all works as advertised, is it aimed at the DSLR owner who wants to adjust focus after the event, or could it be aimed at the compact camera owner who wants to synthesize shallow depth of field? Maybe if I'd had more time to read I wouldn't have to ask these questions, but the Lytro site seems pretty short on answers as well.