Tohid Sedghi
The development of new imaging sensors gives rise to the need for image processing techniques that can effectively fuse images from different sensors into a single coherent composition for interpretation. To exploit the inherent redundancy and extended coverage of multiple sensors, we propose a multi-scale approach to pixel-level image fusion. The ultimate goal is to reduce human/machine error in the detection and recognition of objects. Results show that the proposed method outperforms traditional methods.
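To illustrate the general idea of pixel-level multi-scale fusion, the sketch below decomposes two co-registered sensor images into Laplacian-pyramid detail bands, keeps the stronger coefficient at each pixel and level, and averages the coarse residuals before reconstruction. The specific transform, fusion rule, level count, and function names (laplacian_pyramid, reconstruct, fuse) are assumptions made for illustration; they are not the paper's exact method.

```python
# Minimal sketch of pixel-level multi-scale image fusion.
# ASSUMPTIONS: Laplacian pyramid decomposition with a max-absolute-coefficient
# selection rule; the paper does not specify these choices.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def laplacian_pyramid(img, levels=4, sigma=1.0):
    """Decompose a 2-D image into Laplacian detail bands plus a low-pass residual."""
    pyramid = []
    current = img.astype(np.float64)
    for _ in range(levels):
        blurred = gaussian_filter(current, sigma)
        down = blurred[::2, ::2]                                   # decimate by 2
        up = zoom(down, 2, order=1)[:current.shape[0], :current.shape[1]]
        pyramid.append(current - up)                               # band-pass detail
        current = down
    pyramid.append(current)                                        # low-pass residual
    return pyramid

def reconstruct(pyramid):
    """Collapse a Laplacian pyramid back into a full-resolution image."""
    current = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        up = zoom(current, 2, order=1)[:detail.shape[0], :detail.shape[1]]
        current = up + detail
    return current

def fuse(img_a, img_b, levels=4):
    """Fuse two co-registered sensor images at the pixel level."""
    pyr_a = laplacian_pyramid(img_a, levels)
    pyr_b = laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)                # keep stronger detail
             for a, b in zip(pyr_a[:-1], pyr_b[:-1])]
    fused.append(0.5 * (pyr_a[-1] + pyr_b[-1]))                    # average coarse base
    return reconstruct(fused)
```

Selecting the larger-magnitude coefficient at each scale is one common way to preserve salient detail from whichever sensor captured it more strongly, while averaging the coarse base retains the overall intensity structure of both inputs.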