The project addresses methods for automated visual quality assessment and their validation beyond mean opinion scores. We propose to enhance these methods with similarity awareness and predicted eye-movement sequences, to quantify the perceptual viewing experience, and to apply the resulting metrics in quality-aware media processing. Moreover, we will set up and use media databases that are diverse in content and contain authentic distortions, in contrast to current scientific data sets.
How can crowdsourcing help generate very large video databases for research on multimedia quality?
How well do state-of-the-art video quality assessment methods, which were designed on small training sets, perform on such large and diversified media databases?
Quality assessment in such extremely large empirical studies requires crowdsourcing. How should it be organized to achieve sufficient reliability and efficiency?
Are machine learning techniques suitable for identifying the best-performing video quality assessment metrics for given media content?
What statistical or perceptual features should be extracted to express content similarity for this task?
How can one design new or hybrid strategies for video quality assessment based on the above?
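One simple way to realise such a similarity-aware hybrid strategy (a minimal sketch under assumed feature and metric names, not the project's actual design) is a nearest-neighbour selector: describe each clip by a few content features, record which metric correlated best with subjective scores on the training clips, and for a new clip pick the metric of the most similar training clip.

```python
import math

# Hypothetical training records: content features of a clip paired
# with the quality metric that performed best on that clip.
# Feature names, values, and metric names are all illustrative.
TRAINING = [
    ({"spatial_activity": 0.9, "temporal_activity": 0.1}, "SSIM"),
    ({"spatial_activity": 0.2, "temporal_activity": 0.8}, "VQM"),
    ({"spatial_activity": 0.5, "temporal_activity": 0.5}, "PSNR"),
]

def select_metric(features, training=TRAINING):
    """Pick the metric that worked best on the most similar
    training clip (1-nearest-neighbour in feature space)."""
    def dist(a, b):
        return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))
    _, metric = min(training, key=lambda rec: dist(features, rec[0]))
    return metric
```

A clip with high spatial and low temporal activity would thus be assessed with the metric that proved best on similar content; richer strategies could replace the 1-NN rule with a learned classifier over the same features.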
Can we improve methods for image/video quality assessment by studying patterns of human visual attention and other perceptual aspects?
How can knowledge of human visual attention derived from eye-tracking studies be incorporated into perceptual image/video quality assessment methods?
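A standard way to fold attention data into an existing metric (an illustrative sketch, not the project's published method) is to weight per-pixel errors by a fixation-density map estimated from eye-tracking recordings, so that distortions in frequently fixated regions count more:

```python
def saliency_weighted_mse(ref, dist, saliency):
    """Mean squared error weighted by a fixation-density map.

    ref, dist: 2-D lists of pixel intensities with the same shape.
    saliency:  2-D list of non-negative weights, same shape,
               e.g. a fixation-density map from eye-tracking data;
               it is normalised internally to sum to 1.
    """
    total_w = sum(sum(row) for row in saliency)
    err = 0.0
    for r_row, d_row, s_row in zip(ref, dist, saliency):
        for r, d, s in zip(r_row, d_row, s_row):
            err += (s / total_w) * (r - d) ** 2
    return err
```

With a uniform saliency map this reduces to plain MSE; a peaked map makes the score sensitive mainly to errors in the attended region.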
How can the quality assessment methods be applied in quality-aware media processing, such as perceptual coding?