B04 | Adaptive Algorithms for Motion Estimation

Prof. Andrés Bruhn, University of Stuttgart

Prof. Albrecht Schmidt, LMU Munich

Lukas Mehl, University of Stuttgart

Jenny Schmalfuss, University of Stuttgart

This project aims to develop novel algorithms for optical flow (2D motion) and scene flow (3D motion) estimation. Instead of relying on fixed assumptions, the goal is to adaptively integrate prior knowledge and other available information in order to design approaches that are not only highly accurate but also generalize well across datasets.

Fig. 1: Optical flow estimation on the automotive KITTI dataset using the method of Maurer et al. (BMVC 2018).
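As a point of reference for the fixed assumptions the project moves beyond, dense optical flow can be sketched with the classic variational method of Horn and Schunck, which couples a brightness-constancy data term with a global smoothness term. The sketch below is an illustrative baseline only, not one of the project's adaptive methods; the function name and the parameters `alpha` (smoothness weight) and `n_iter` are our own choices.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=0.1, n_iter=300):
    """Dense optical flow between two grayscale frames via the classic
    Horn-Schunck variational method (fixed smoothness assumption).
    Returns per-pixel displacements (u, v) in x- and y-direction."""
    I1 = I1.astype(np.float64)
    I2 = I2.astype(np.float64)
    Ix = np.gradient(I1, axis=1)   # spatial derivative in x
    Iy = np.gradient(I1, axis=0)   # spatial derivative in y
    It = I2 - I1                   # temporal derivative

    def local_avg(f):
        # weighted neighbourhood average used by the smoothness term
        p = np.pad(f, 1, mode="edge")
        return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 6.0 \
             + (p[:-2, :-2] + p[:-2, 2:] + p[2:, :-2] + p[2:, 2:]) / 12.0

    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        ub, vb = local_avg(u), local_avg(v)
        # Jacobi-style update derived from the Euler-Lagrange equations
        common = (Ix * ub + Iy * vb + It) / (alpha**2 + Ix**2 + Iy**2)
        u = ub - Ix * common
        v = vb - Iy * common
    return u, v

# demo: a Gaussian blob shifted one pixel to the right
y, x = np.mgrid[0:64, 0:64]
frame1 = np.exp(-((x - 30.0)**2 + (y - 32.0)**2) / 50.0)
frame2 = np.exp(-((x - 31.0)**2 + (y - 32.0)**2) / 50.0)
u, v = horn_schunck(frame1, frame2)
# u is positive around the blob, reflecting the rightward motion
```

Because the smoothness weight is fixed everywhere, such a baseline blurs motion boundaries and handles untextured or occluded regions poorly; the adaptive approaches studied in this project instead adjust their assumptions to the local image and scene content.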

Research Questions

How can we develop adaptive motion estimation algorithms that are highly accurate, generalize well, and remain applicable in the wild?

To what extent can concepts be generalized from the 2D to the 3D domain, and how can the additional constraints imposed by the 3D world improve the estimation?

How can we transfer the benefits of adaptive algorithms to specific applications?

Publications

  1. L. Mehl, A. Jahedi, J. Schmalfuss, and A. Bruhn, “M-FUSE: Multi-frame Fusion for Scene Flow Estimation,” in Proc. Winter Conference on Applications of Computer Vision (WACV), Jan. 2023. doi: 10.48550/arXiv.2207.05704.
  2. L. Mehl, J. Schmalfuss, A. Jahedi, Y. Nalivayko, and A. Bruhn, “Spring: A High-Resolution High-Detail Dataset and Benchmark for Scene Flow, Optical Flow and Stereo,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2023, pp. 4981–4991. [Online]. Available: https://openaccess.thecvf.com/content/CVPR2023/html/Mehl_Spring_A_High-Resolution_High-Detail_Dataset_and_Benchmark_for_Scene_Flow_CVPR_2023_paper.html
  3. J. Schmalfuss, L. Mehl, and A. Bruhn, “Distracting Downpour: Adversarial Weather Attacks for Motion Estimation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Oct. 2023, pp. 10106–10116. [Online]. Available: https://openaccess.thecvf.com/content/ICCV2023/html/Schmalfuss_Distracting_Downpour_Adversarial_Weather_Attacks_for_Motion_Estimation_ICCV_2023_paper.html
  4. J. Schmalfuss, E. Scheurer, H. Zhao, N. Karantzas, A. Bruhn, and D. Labate, “Blind image inpainting with sparse directional filter dictionaries for lightweight CNNs,” Journal of Mathematical Imaging and Vision (JMIV), vol. 65, pp. 323–339, 2023, doi: 10.1007/s10851-022-01119-6.
  5. T. Krake, A. Bruhn, B. Eberhardt, and D. Weiskopf, “Efficient and Robust Background Modeling with Dynamic Mode Decomposition,” Journal of Mathematical Imaging and Vision (JMIV), 2022, doi: 10.1007/s10851-022-01068-0.
  6. M. Philipp, N. Bacher, S. Sauer, F. Mathis-Ullrich, and A. Bruhn, “From Chairs To Brains: Customizing Optical Flow For Surgical Activity Localization,” in Proceedings of the IEEE International Symposium on Biomedical Imaging (ISBI). IEEE, Mar. 2022, pp. 1–5. doi: 10.1109/ISBI52829.2022.9761704.
  7. J. Schmalfuss, L. Mehl, and A. Bruhn, “Attacking Motion Estimation with Adversarial Snow,” in Proc. ECCV Workshop on Adversarial Robustness in the Real World (AROW), 2022. doi: 10.48550/arXiv.2210.11242.
  8. J. Schmalfuss, P. Scholze, and A. Bruhn, “A Perturbation-Constrained Adversarial Attack for Evaluating the Robustness of Optical Flow,” in Proceedings of the European Conference on Computer Vision (ECCV), Oct. 2022.
  9. A. Jahedi, L. Mehl, M. Rivinius, and A. Bruhn, “Multi-Scale RAFT: combining hierarchical concepts for learning-based optical flow estimation,” in Proceedings of the IEEE International Conference on Image Processing (ICIP), Oct. 2022, pp. 1236–1240, doi: 10.1109/ICIP46576.2022.9898048.
  10. L. Mehl, C. Beschle, A. Barth, and A. Bruhn, “An Anisotropic Selection Scheme for Variational Optical Flow Methods with Order-Adaptive Regularisation,” in Proceedings of the International Conference on Scale Space and Variational Methods in Computer Vision (SSVM). Springer, 2021, pp. 140–152. doi: 10.1007/978-3-030-75549-2_12.
  11. H. Men, V. Hosu, H. Lin, A. Bruhn, and D. Saupe, “Visual Quality Assessment for Interpolated Slow-Motion Videos Based on a Novel Database,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), 2020, pp. 1–6. doi: 10.1109/QoMEX48832.2020.9123096.
  12. H. Men, V. Hosu, H. Lin, A. Bruhn, and D. Saupe, “Subjective annotation for a frame interpolation benchmark using artefact amplification,” Quality and User Experience, vol. 5, no. 1, 2020, doi: 10.1007/s41233-020-00037-y.
  13. K. Kurzhals et al., “Visual Analytics and Annotation of Pervasive Eye Tracking Video,” in Proceedings of the Symposium on Eye Tracking Research & Applications (ETRA). Stuttgart, Germany: ACM, 2020, pp. 16:1–16:9. doi: 10.1145/3379155.3391326.
  14. H. Men, H. Lin, V. Hosu, D. Maurer, A. Bruhn, and D. Saupe, “Visual Quality Assessment for Motion Compensated Frame Interpolation,” in Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2019, pp. 1–6. doi: 10.1109/QoMEX.2019.8743221.
  15. D. Maurer and A. Bruhn, “ProFlow: Learning to Predict Optical Flow,” in Proceedings of the British Machine Vision Conference (BMVC). BMVA Press, 2018, pp. 86:1–86:13. arXiv:1806.00800.
  16. D. Maurer, Y. C. Ju, M. Breuß, and A. Bruhn, “Combining Shape from Shading and Stereo: A Joint Variational Method for Estimating Depth, Illumination and Albedo,” International Journal of Computer Vision, vol. 126, no. 12, 2018, doi: 10.1007/s11263-018-1079-1.
  17. D. Maurer, N. Marniok, B. Goldluecke, and A. Bruhn, “Structure-from-motion-aware PatchMatch for Adaptive Optical Flow Estimation,” in Computer Vision – ECCV 2018. Lecture Notes in Computer Science, vol. 11212, V. Ferrari, M. Hebert, C. Sminchisescu, and Y. Weiss, Eds. Springer International Publishing, 2018, pp. 575–592. doi: 10.1007/978-3-030-01237-3_35.
  18. D. Maurer, M. Stoll, and A. Bruhn, “Directional Priors for Multi-Frame Optical Flow,” in Proceedings of the British Machine Vision Conference (BMVC). BMVA Press, 2018, pp. 106:1–106:13. [Online]. Available: http://bmvc2018.org/contents/papers/0377.pdf
  19. D. Maurer, M. Stoll, S. Volz, P. Gairing, and A. Bruhn, “A Comparison of Isotropic and Anisotropic Second Order Regularisers for Optical Flow,” in Scale Space and Variational Methods in Computer Vision. SSVM 2017. Lecture Notes in Computer Science, vol. 10302, F. Lauze, Y. Dong, and A. B. Dahl, Eds. Springer International Publishing, 2017, pp. 537–549. doi: 10.1007/978-3-319-58771-4_43.
  20. D. Maurer, A. Bruhn, and M. Stoll, “Order-adaptive and Illumination-aware Variational Optical Flow Refinement,” in Proceedings of the British Machine Vision Conference (BMVC). BMVA Press, 2017, pp. 150:1–150:13. doi: 10.5244/C.31.150.
  21. D. Maurer, M. Stoll, and A. Bruhn, “Order-adaptive Regularisation for Variational Optical Flow: Global, Local and in Between,” in Scale Space and Variational Methods in Computer Vision. SSVM 2017. Lecture Notes in Computer Science, vol. 10302, F. Lauze, Y. Dong, and A. B. Dahl, Eds. Springer International Publishing, 2017, pp. 550–562. doi: 10.1007/978-3-319-58771-4_44.
  22. K. Kurzhals, M. Stoll, A. Bruhn, and D. Weiskopf, “FlowBrush: Optical Flow Art,” in Symposium on Computational Aesthetics, Sketch-Based Interfaces and Modeling, and Non-Photorealistic Animation and Rendering (EXPRESSIVE, co-located with SIGGRAPH), 2017, pp. 1:1–1:9. doi: 10.1145/3092912.3092914.
  23. M. Stoll, D. Maurer, and A. Bruhn, “Variational Large Displacement Optical Flow Without Feature Matches,” in Energy Minimization Methods in Computer Vision and Pattern Recognition. EMMCVPR 2017. Lecture Notes in Computer Science, vol. 10746, M. Pelillo and E. R. Hancock, Eds. Springer International Publishing, 2017, pp. 79–92. doi: 10.1007/978-3-319-78199-0_6.
  24. M. Stoll, D. Maurer, S. Volz, and A. Bruhn, “Illumination-aware Large Displacement Optical Flow,” in Energy Minimization Methods in Computer Vision and Pattern Recognition. EMMCVPR 2017. Lecture Notes in Computer Science, vol. 10746, M. Pelillo and E. R. Hancock, Eds. Springer International Publishing, 2017, pp. 139–154. doi: 10.1007/978-3-319-78199-0_10.
