How can crowdsourcing be applied to help generate very large video databases for research applications in multimedia quality?
How well do state-of-the-art video quality assessment methods, which were designed on small training sets, perform on such large and diversified media databases?
Quality assessment in such extremely large empirical studies requires crowdsourcing. How should it be organized to achieve sufficient reliability and efficiency?
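One common way to pursue reliability in crowdsourced rating studies is to screen out inconsistent workers. The sketch below (a hypothetical illustration with made-up data, not a prescribed design) correlates each worker's ratings with the mean rating of all other workers and flags workers whose agreement falls below a threshold.

```python
import numpy as np

def screen_workers(ratings, min_corr=0.7):
    """Flag unreliable crowd workers.

    ratings: 2-D array of shape (workers, stimuli), each entry a quality score.
    A worker is kept if the Pearson correlation between their ratings and
    the mean rating of all *other* workers is at least min_corr.
    """
    ratings = np.asarray(ratings, dtype=float)
    keep = []
    for w in range(ratings.shape[0]):
        others_mean = np.delete(ratings, w, axis=0).mean(axis=0)
        r = np.corrcoef(ratings[w], others_mean)[0, 1]
        keep.append(r >= min_corr)
    return np.array(keep)

# Hypothetical example: 4 workers rating 5 video clips on a 1-5 scale.
scores = [[5, 4, 3, 2, 1],   # consistent worker
          [5, 5, 3, 2, 1],   # consistent worker
          [4, 4, 3, 2, 2],   # consistent worker
          [1, 5, 1, 5, 1]]   # inconsistent worker
print(screen_workers(scores))  # the last worker is flagged as unreliable
```

A threshold-based screen like this is only one possible design choice; in practice the threshold, the agreement statistic, and additional checks (e.g. trap questions) would need to be validated for the chosen crowdsourcing setup.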
Are machine learning techniques suitable for identifying the best-performing video quality assessment metric for given media content?
What statistical/perceptual features should be extracted to express similarity for this task?
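As one illustrative realization of such a content-adaptive scheme (all feature names, values, and metric labels below are hypothetical), low-level features such as spatial and temporal activity could drive a nearest-neighbor rule that selects, for a new clip, the metric that performed best on the most similar training clips.

```python
import numpy as np

# Hypothetical training data: per clip, a feature vector (e.g. spatial
# activity, temporal activity, colorfulness) and the index of the VQA
# metric that correlated best with subjective scores for that clip.
train_features = np.array([[0.9, 0.1, 0.5],
                           [0.8, 0.2, 0.4],
                           [0.1, 0.9, 0.6],
                           [0.2, 0.8, 0.7]])
best_metric = np.array([0, 0, 1, 1])   # 0 = "metric A", 1 = "metric B"

def select_metric(features, k=3):
    """Pick a VQA metric for a new clip by a k-nearest-neighbor vote
    over the best-performing metrics of similar training clips."""
    d = np.linalg.norm(train_features - features, axis=1)
    nearest = np.argsort(d)[:k]
    votes = np.bincount(best_metric[nearest])
    return int(np.argmax(votes))

# A clip with high spatial and low temporal activity
print(select_metric(np.array([0.85, 0.15, 0.45])))
```

The nearest-neighbor rule is just one candidate; any classifier trained on per-clip features and per-clip metric performance would fit the same scheme.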
How can one design new or hybrid strategies for video quality assessment based on the above?
Can we improve methods for image/video quality assessment by studying patterns of human visual attention and other perceptual aspects?
How can knowledge of human visual attention derived from eye-tracking studies be incorporated into perceptual image/video quality assessment methods?
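A simple way attention data is often brought into quality metrics is to weight a per-pixel distortion map by a visual-saliency map, so that errors in attended regions count more. The sketch below shows this for squared error; the function name and the tiny arrays are illustrative assumptions, not a method from the text.

```python
import numpy as np

def saliency_weighted_mse(ref, dist, saliency):
    """Saliency-weighted mean squared error between a reference and a
    distorted image. saliency is a non-negative 2-D map of the same shape
    (e.g. a fixation-density map from an eye-tracking study); it is
    normalized to sum to one and used to weight the per-pixel error."""
    err = (ref.astype(float) - dist.astype(float)) ** 2
    w = saliency / saliency.sum()
    return float((w * err).sum())

# Toy example: identical errors, but the first one lies in a salient region.
ref = np.zeros((2, 2))
dist = np.array([[10.0, 0.0], [0.0, 10.0]])
sal = np.array([[3.0, 1.0], [1.0, 3.0]])
print(saliency_weighted_mse(ref, dist, sal))
```

Compared with plain MSE, this score rises when distortions coincide with regions that observers actually fixate, which is the basic hypothesis behind attention-aware quality assessment.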
How can quality assessment methods be applied in quality-aware media processing, such as perceptual coding?