This proposal elaborates methods to circumvent the current roadblocks in silicon imager pixel scaling, targeting 16X resolution scaling and 3X sensitivity scaling for future cameras on top of CMOS. The three leading companies in CMOS camera technology have now reached a 0.7 um pixel size, and even the most optimistic roadmaps promise only a 0.6 um pixel in 3 to 4 years. Further downscaling of silicon detectors below 0.6 um will be very difficult due to (1) a rapid drop in sensitivity, (2) electrical and optical crosstalk, (3) limited full-well capacity, and (4) poor mechanical yield for technologies using trenches deeper than 6 um. Moreover, state-of-the-art pixel technologies have always used pixels larger than the wavelength of the light; designing pixels smaller than the wavelength requires a fundamentally different approach.

The industry demand for further reduction of the pixel size is twofold. First, the mobile phone industry benefits because silicon and lens-system costs can be substantially reduced at equal image resolution and equal image quality. Second, the high-end camera industry benefits because 22 gigapixels would in future fit on a single silicon die.

We will further downscale imagers on top of CMOS by combining state-of-the-art technologies, and we propose a new principle that takes the wavelength of the light into account. Initial FDTD and analytical simulations demonstrate that the proposed concept can increase the pixel density by a factor of 16 (bringing the state-of-the-art pixel size from 0.7 micron to 0.2 micron) while also increasing the pixel signal-to-noise ratio by a factor of at least 3. However, since the design space of the current concept is immense (more than 20 different parameters to optimize), there is ample opportunity for fundamental research (in this proposal) to optimize the back-end-of-line structures required for this device.
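As a back-of-the-envelope check of the scaling figures above, the sketch below computes the pixel-density increase from a pitch reduction and the pixel count on a given die. Note that a density factor of exactly 16 corresponds to a pitch of 0.7/4 = 0.175 um, which the proposal rounds to 0.2 micron; the 36 x 24 mm (full-frame) die size used for the gigapixel estimate is an illustrative assumption, not a figure from the proposal.

```python
def density_scale(old_pitch_um: float, new_pitch_um: float) -> float:
    """Pixel-density increase when shrinking the pixel pitch (area ratio)."""
    return (old_pitch_um / new_pitch_um) ** 2

def pixel_count(die_w_mm: float, die_h_mm: float, pitch_um: float) -> float:
    """Number of square pixels of the given pitch that fit on a rectangular die."""
    return (die_w_mm * 1e3 / pitch_um) * (die_h_mm * 1e3 / pitch_um)

if __name__ == "__main__":
    # 0.7 um -> 0.175 um is exactly a 16x density increase.
    print(density_scale(0.7, 0.175))
    # Assumed full-frame die (36 x 24 mm) at 0.2 um pitch: ~21.6 gigapixel,
    # consistent with the "22 gigapixel on a single die" claim.
    print(pixel_count(36.0, 24.0, 0.2) / 1e9)
```

At the rounded 0.2 um pitch the density gain over 0.7 um is about 12x rather than 16x, which is why the exact 16x target implies the slightly smaller 0.175 um pitch.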