What kinds of atomic and compound tasks, models, and metrics are suitable for evaluating interaction with visual computing systems?
How can we find a good trade-off between their predictive power for specialized application tasks and their general applicability (e.g., their relation to atomic tasks and metrics)?
What kinds of heuristics are appropriate for tightly coupling compound tasks and metrics with specific characteristics of interaction devices and techniques?
What are appropriate software architectures for an evaluation environment?
What tools are necessary to support the integration of tasks and metrics designed within the project?
What tools support ad hoc evaluation as well as the integration of heuristics into (semi-)automated evaluations?
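To make the vocabulary in these questions concrete, the following is a minimal sketch, in Python, of how atomic tasks, compound tasks, and metrics could relate in an evaluation environment. All class and function names here (AtomicTask, CompoundTask, Metric, log, evaluate) are hypothetical illustrations, not a proposed design.

```python
from dataclasses import dataclass, field
from statistics import mean
from typing import Callable, Dict, List


@dataclass
class Metric:
    """A named measurement aggregated over logged trial values."""
    name: str
    aggregate: Callable[[List[float]], float]


@dataclass
class AtomicTask:
    """A single interaction primitive (e.g. pointing, selection)."""
    name: str
    samples: Dict[str, List[float]] = field(default_factory=dict)

    def log(self, metric: str, value: float) -> None:
        self.samples.setdefault(metric, []).append(value)


@dataclass
class CompoundTask:
    """An application-level task composed of atomic tasks."""
    name: str
    parts: List[AtomicTask]

    def evaluate(self, metrics: List[Metric]) -> Dict[str, float]:
        # Aggregate each metric over the pooled samples of all atomic parts.
        report: Dict[str, float] = {}
        for m in metrics:
            pooled = [v for t in self.parts for v in t.samples.get(m.name, [])]
            if pooled:
                report[m.name] = m.aggregate(pooled)
        return report


# Usage: two atomic tasks feeding one compound "acquire-target" task.
point = AtomicTask("point")
select = AtomicTask("select")
for v in (0.41, 0.38, 0.45):
    point.log("time_s", v)
for v in (0.22, 0.25):
    select.log("time_s", v)

task = CompoundTask("acquire-target", [point, select])
print(task.evaluate([Metric("time_s", mean)]))
```

One design question this sketch surfaces directly: whether a compound metric should pool the raw samples of its atomic parts (as above) or combine per-part aggregates, which is one way the trade-off between predictive power and general applicability could manifest.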