A05 | Image/Video Quality Assessment: From Test Databases to Similarity-Aware and Perceptual Dynamic Metrics

Prof. Dietmar Saupe, University of Konstanz

Prof. Andrés Bruhn, University of Stuttgart

Dr. Vlad Hosu, University of Konstanz

Oliver Wiedemann, University of Konstanz

The project addresses methods for automated visual quality assessment and their validation beyond mean opinion scores. We propose to enhance these methods with similarity awareness and predicted eye-movement sequences, to quantify the perceptual viewing experience, and to apply the resulting metrics in quality-aware media processing. Moreover, we will build and deploy media databases that are diverse in content and authentic in their distortions, in contrast to current scientific datasets.

Research Questions

How can crowdsourcing be applied to help generate very large video databases for research on multimedia quality?

How well do state-of-the-art video quality assessment methods, which were designed on small training sets, perform on such large and diversified media databases?

Quality assessment in such extremely large empirical studies requires crowdsourcing. How should it be organized to achieve sufficient reliability and efficiency? (A minimal sketch of rater screening and metric evaluation follows this list of questions.)

Are machine learning techniques suitable for identifying the best-performing video quality assessment metric for given media content?

What statistical/perceptual features should be extracted to express similarity for this task? 

How can one design new or hybrid strategies for video quality assessment based on the above?

Can we improve methods for image/video quality assessment by studying patterns of human visual attention and other perceptual aspects?

How can knowledge of human visual attention derived from eye-tracking studies be incorporated into perceptual image/video quality assessment methods?

How can the quality assessment methods be applied in quality-aware media processing such as perceptual coding?
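
The following is the minimal sketch referenced above. It illustrates two recurring steps in such crowdsourced studies: screening unreliable workers before computing mean opinion scores (MOS), and evaluating a quality metric against the MOS with the standard rank-order (SROCC) and linear (PLCC) correlations. All data, names, and thresholds are illustrative assumptions, not project code.

```python
# Requires numpy and scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic study: 50 stimuli with a latent quality, rated by 20 workers;
# the first three workers answer at random (unreliable).
true_quality = rng.uniform(1, 5, size=50)
ratings = true_quality + rng.normal(0, 0.7, size=(20, 50))
ratings[:3] = rng.uniform(1, 5, size=(3, 50))

# Screening: drop workers whose ratings correlate poorly with the mean
# rating of all other workers (a simple leave-one-out consistency check).
keep = []
for w in range(ratings.shape[0]):
    others = np.delete(ratings, w, axis=0).mean(axis=0)
    r, _ = stats.pearsonr(ratings[w], others)
    if r > 0.5:  # screening threshold, an arbitrary choice here
        keep.append(w)
mos = ratings[keep].mean(axis=0)

# A hypothetical metric's predictions for the same stimuli.
predicted = true_quality + rng.normal(0, 0.4, size=50)

# Standard agreement measures in quality assessment studies.
srocc, _ = stats.spearmanr(predicted, mos)  # monotonic agreement
plcc, _ = stats.pearsonr(predicted, mos)    # linear agreement
print(f"kept {len(keep)}/20 workers, SROCC = {srocc:.3f}, PLCC = {plcc:.3f}")
```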

Fig. 1: Training Better Algorithms to Predict Subjective Quality Opinions.
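
As a companion to Fig. 1, here is a minimal sketch of the learning setup it depicts: regress MOS from per-image features. The random features, labels, and the closed-form ridge regression are placeholder assumptions; the project's publications use deep features and learned models instead.

```python
# Requires numpy.
import numpy as np

rng = np.random.default_rng(1)
n_train, n_test, n_feat = 800, 200, 64

X = rng.normal(size=(n_train + n_test, n_feat))  # stand-in image features
w_true = rng.normal(size=n_feat)                 # hidden feature-MOS relation
mos = X @ w_true + rng.normal(0, 0.5, size=n_train + n_test)

X_tr, X_te = X[:n_train], X[n_train:]
y_tr, y_te = mos[:n_train], mos[n_train:]

# Ridge regression in closed form: w = (X'X + lam I)^{-1} X'y
lam = 1.0
w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_feat), X_tr.T @ y_tr)

pred = X_te @ w
rmse = np.sqrt(np.mean((pred - y_te) ** 2))
print(f"test RMSE = {rmse:.3f}")
```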

Fig. 2: Saliency-Driven Compression.
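
As a companion to Fig. 2, the sketch below crudely approximates saliency-driven compression: each image tile is JPEG-encoded at a quality chosen from a saliency map, so salient regions retain more detail. The tiling scheme, parameters, and the synthetic centre-weighted saliency map are illustrative assumptions; in practice a saliency model (e.g., TranSalNet, publication 6 below) would supply the map, and the project's coding work operates inside the codec rather than on decoded tiles.

```python
# Requires Pillow and numpy.
import io
import numpy as np
from PIL import Image

def compress_tile(tile: Image.Image, quality: int) -> Image.Image:
    """JPEG-encode and decode a single tile at the given quality."""
    buf = io.BytesIO()
    tile.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def saliency_driven_jpeg(img: Image.Image, saliency: np.ndarray,
                         tile: int = 64, q_low: int = 20,
                         q_high: int = 90) -> Image.Image:
    """saliency: float array in [0, 1] with the same height/width as img."""
    out = Image.new("RGB", img.size)
    w, h = img.size
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            box = (x, y, min(x + tile, w), min(y + tile, h))
            s = float(saliency[box[1]:box[3], box[0]:box[2]].mean())
            q = int(q_low + s * (q_high - q_low))  # more salient -> higher quality
            out.paste(compress_tile(img.crop(box), q), box)
    return out

# Example with a synthetic centre-weighted saliency map (an assumption).
img = Image.new("RGB", (256, 256), "gray")
yy, xx = np.mgrid[0:256, 0:256]
sal = np.exp(-((xx - 128) ** 2 + (yy - 128) ** 2) / (2 * 60.0 ** 2))
result = saliency_driven_jpeg(img, sal / sal.max())
result.save("saliency_coded.jpg", quality=95)
```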

Publications

  1. F. Götz-Hahn, V. Hosu, and D. Saupe, “Critical Analysis on the Reproducibility of Visual Quality Assessment Using Deep Features,” PLoS ONE, vol. 17, no. 8, 2022, doi: 10.1371/journal.pone.0269715.
  2. H. Lin et al., “Large-Scale Crowdsourced Subjective Assessment of Picturewise Just Noticeable Difference,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 9, 2022, doi: 10.1109/TCSVT.2022.3163860.
  3. S. Su et al., “Going the Extra Mile in Face Image Quality Assessment: A Novel Database and Model,” CoRR, 2022, doi: 10.48550/ARXIV.2207.04904.
  4. M. Zameshina et al., “Fairness in generative modeling: do it unsupervised!,” in Proceedings of the Genetic and Evolutionary Computation Conference Companion, Jul. 2022, pp. 320–323. doi: 10.1145/3520304.3528992.
  5. H. Lin, H. Men, Y. Yan, J. Ren, and D. Saupe, “Crowdsourced Quality Assessment of Enhanced Underwater Images - a Pilot Study,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), Sep. 2022, pp. 1–4. doi: 10.1109/QoMEX55416.2022.9900904.
  6. J. Lou, H. Lin, D. Marshall, D. Saupe, and H. Liu, “TranSalNet: Towards perceptually relevant visual saliency prediction,” Neurocomputing, vol. 494, pp. 455–467, 2022, doi: 10.1016/j.neucom.2022.04.080.
  7. H. Lin, G. Chen, and F. W. Siebert, “Positional Encoding: Improving Class-Imbalanced Motorcycle Helmet use Classification,” in 2021 IEEE International Conference on Image Processing (ICIP), 2021, pp. 1194–1198. doi: 10.1109/ICIP42928.2021.9506178.
  8. S. Su, V. Hosu, H. Lin, Y. Zhang, and D. Saupe, “KonIQ++: Boosting No-Reference Image Quality Assessment in the Wild by Jointly Predicting Image Quality and Defects,” in 32nd British Machine Vision Conference, 2021, pp. 1–12. [Online]. Available: https://www.bmvc2021-virtualconference.com/assets/papers/0868.pdf
  9. B. Roziere et al., “EvolGAN: Evolutionary Generative Adversarial Networks,” in Computer Vision – ACCV 2020, Cham, Nov. 2021, pp. 679–694. doi: 10.1007/978-3-030-69538-5_41.
  10. H. Men, H. Lin, M. Jenadeleh, and D. Saupe, “Subjective Image Quality Assessment with Boosted Triplet Comparisons,” IEEE Access, vol. 9, pp. 138939–138975, 2021, doi: 10.1109/ACCESS.2021.3118295.
  11. B. Roziere et al., “Tarsier: Evolving Noise Injection in Super-Resolution GANs,” in 2020 25th International Conference on Pattern Recognition (ICPR), 2021, pp. 7028–7035. doi: 10.1109/ICPR48806.2021.9413318.
  12. F. Götz-Hahn, V. Hosu, H. Lin, and D. Saupe, “KonVid-150k: A Dataset for No-Reference Video Quality Assessment of Videos in-the-Wild,” IEEE Access, vol. 9, pp. 72139–72160, 2021, doi: 10.1109/ACCESS.2021.3077642.
  13. H. Lin, M. Jenadeleh, G. Chen, U. Reips, R. Hamzaoui, and D. Saupe, “Subjective Assessment of Global Picture-Wise Just Noticeable Difference,” in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), 2020, pp. 1–6. doi: 10.1109/ICMEW46912.2020.9106058.
  14. V. Hosu et al., “From Technical to Aesthetics Quality Assessment and Beyond: Challenges and Potential,” in Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, Seattle, WA, USA, 2020, pp. 19–20. doi: 10.1145/3423268.3423589.
  15. V. Hosu, H. Lin, T. Sziranyi, and D. Saupe, “KonIQ-10k: An Ecologically Valid Database for Deep Learning of Blind Image Quality Assessment,” IEEE Transactions on Image Processing, vol. 29, pp. 4041–4056, 2020, doi: 10.1109/TIP.2020.2967829.
  16. O. Wiedemann, V. Hosu, H. Lin, and D. Saupe, “Foveated Video Coding for Real-Time Streaming Applications,” in 2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX), 2020, pp. 1–6. doi: 10.1109/QoMEX48832.2020.9123080.
  17. M. Jenadeleh, M. Pedersen, and D. Saupe, “Blind Quality Assessment of Iris Images Acquired in Visible Light for Biometric Recognition,” Sensors, vol. 20, no. 5, 2020, doi: 10.3390/s20051308.
  18. M. Lan Ha, V. Hosu, and V. Blanz, “Color Composition Similarity and Its Application in Fine-grained Similarity,” in 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), Piscataway, NJ, 2020, pp. 2548–2557. doi: 10.1109/WACV45572.2020.9093522.
  19. X. Zhao, H. Lin, P. Guo, D. Saupe, and H. Liu, “Deep Learning VS. Traditional Algorithms for Saliency Prediction of Distorted Images,” in 2020 IEEE International Conference on Image Processing (ICIP), 2020, pp. 156–160. doi: 10.1109/ICIP40778.2020.9191203.
  20. T. Guha et al., “ATQAM/MAST’20: Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends,” in Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 2020, pp. 4758–4760. doi: 10.1145/3394171.3421895.
  21. H. Lin, J. D. Deng, D. Albers, and F. W. Siebert, “Helmet Use Detection of Tracked Motorcycles Using CNN-Based Multi-Task Learning,” IEEE Access, vol. 8, pp. 162073–162084, 2020, doi: 10.1109/ACCESS.2020.3021357.
  22. B. Roziere et al., “Evolutionary Super-Resolution,” in Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion, Cancún, Mexico, 2020, pp. 151–152. doi: 10.1145/3377929.3389959.
  23. H. Men, V. Hosu, H. Lin, A. Bruhn, and D. Saupe, “Visual Quality Assessment for Interpolated Slow-Motion Videos Based on a Novel Database,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2020, pp. 1–6. doi: 10.1109/QoMEX48832.2020.9123096.
  24. H. Men, V. Hosu, H. Lin, A. Bruhn, and D. Saupe, “Subjective annotation for a frame interpolation benchmark using artefact amplification,” Quality and User Experience, vol. 5, no. 1, 2020, doi: 10.1007/s41233-020-00037-y.
  25. H. Lin et al., “SUR-FeatNet: Predicting the Satisfied User Ratio Curve for Image Compression with Deep Feature Learning,” Quality and User Experience, vol. 5, no. 1, 2020, doi: 10.1007/s41233-020-00034-1.
  26. O. Wiedemann and D. Saupe, “Gaze Data for Quality Assessment of Foveated Video,” in ACM Symposium on Eye Tracking Research and Applications (ETRA 2020 Short Papers), Stuttgart, Germany, 2020. doi: 10.1145/3379157.3391656.
  27. C. Fan et al., “SUR-Net: Predicting the Satisfied User Ratio Curve for Image Compression with Deep Learning,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2019, pp. 1–6. doi: 10.1109/QoMEX.2019.8743204.
  28. H. Men, H. Lin, V. Hosu, D. Maurer, A. Bruhn, and D. Saupe, “Visual Quality Assessment for Motion Compensated Frame Interpolation,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2019, pp. 1–6. doi: 10.1109/QoMEX.2019.8743221.
  29. H. Lin, V. Hosu, and D. Saupe, “KADID-10k: A Large-scale Artificially Distorted IQA Database,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2019, pp. 1–3. doi: 10.1109/QoMEX.2019.8743252.
  30. V. Hosu, B. Goldlücke, and D. Saupe, “Effective Aesthetics Prediction with Multi-level Spatially Pooled Features,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 9367–9375. doi: 10.1109/CVPR.2019.00960.
  31. D. Varga, D. Saupe, and T. Szirányi, “DeepRN: A Content Preserving Deep Architecture for Blind Image Quality Assessment,” in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), 2018, pp. 1–6. doi: 10.1109/ICME.2018.8486528.
  32. V. Hosu, H. Lin, and D. Saupe, “Expertise Screening in Crowdsourcing Image Quality,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2018, pp. 276–281. doi: 10.1109/QoMEX.2018.8463427.
  33. M. Jenadeleh, M. Pedersen, and D. Saupe, “Realtime Quality Assessment of Iris Biometrics Under Visible Light,” in Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPRW), CVPR Workshops, 2018, pp. 443–452. doi: 10.1109/CVPRW.2018.00085.
  34. H. Men, H. Lin, and D. Saupe, “Spatiotemporal Feature Combination Model for No-Reference Video Quality Assessment,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2018, pp. 1–3. doi: 10.1109/QoMEX.2018.8463426.
  35. S. Egger-Lampl et al., “Crowdsourcing Quality of Experience Experiments,” in Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments: Dagstuhl Seminar 15481, Dagstuhl Castle, Germany, November 22–27, 2015, Revised Contributions (LNCS 10264), D. Archambault, H. Purchase, and T. Hossfeld, Eds. Springer International Publishing, 2017, pp. 154–190. doi: 10.1007/978-3-319-66435-4_7.
  36. M. Spicker, F. Hahn, T. Lindemeier, D. Saupe, and O. Deussen, “Quantifying Visual Abstraction Quality for Stipple Drawings,” in Proceedings of the Symposium on Non-Photorealistic Animation and Rendering (NPAR), 2017, pp. 8:1–8:10. doi: 10.1145/3092919.3092923.
  37. U. Gadiraju et al., “Crowdsourcing Versus the Laboratory: Towards Human-centered Experiments Using the Crowd,” in Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments: Dagstuhl Seminar 15481, Dagstuhl Castle, Germany, November 22–27, 2015, Revised Contributions (LNCS 10264), D. Archambault, H. Purchase, and T. Hossfeld, Eds. Springer International Publishing, 2017, pp. 6–26. doi: 10.1007/978-3-319-66435-4_2.
  38. V. Hosu et al., “The Konstanz natural video database (KoNViD-1k),” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2017, pp. 1–6. doi: 10.1109/QoMEX.2017.7965673.
  39. I. Zingman, D. Saupe, O. A. B. Penatti, and K. Lambers, “Detection of Fragmented Rectangular Enclosures in Very High Resolution Remote Sensing Images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 8, 2016, doi: 10.1109/TGRS.2016.2545919.
  40. D. Saupe, F. Hahn, V. Hosu, I. Zingman, M. Rana, and S. Li, “Crowd Workers Proven Useful: A Comparative Study of Subjective Video Quality Assessment,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2016, pp. 1–2. [Online]. Available: https://www.uni-konstanz.de/mmsp/pubsys/publishedFiles/SaHaHo16.pdf
  41. V. Hosu, F. Hahn, O. Wiedemann, S.-H. Jung, and D. Saupe, “Saliency-driven Image Coding Improves Overall Perceived JPEG Quality,” in Proceedings of the Picture Coding Symposium (PCS), 2016, pp. 1–5. doi: 10.1109/PCS.2016.7906397.
  42. V. Hosu, F. Hahn, I. Zingman, and D. Saupe, “Reported Attention as a Promising Alternative to Gaze in IQA Tasks,” in Proceedings of the 5th ISCA/DEGA Workshop on Perceptual Quality of Systems (PQS 2016), 2016, pp. 117–121. doi: 10.21437/PQS.2016-25.