V. Schwind, K. Leicht, S. Jäger, K. Wolf, and N. Henze, “Is there an Uncanny Valley of Virtual Animals? A Quantitative and Qualitative Investigation,” International Journal of Human-Computer Studies, vol. 111, pp. 49–61, 2018, doi: 10.1016/j.ijhcs.2017.11.003.
Abstract
Approaching a high degree of realism, android robots and virtual humans may evoke uncomfortable feelings. Due to technologies that increase the realism of human replicas, this phenomenon, known as the uncanny valley, has frequently been highlighted by researchers from various fields in recent years. Although virtual animals play an important role in video games and entertainment, the question of whether there is also an uncanny valley for virtual animals has received little attention. This paper examines whether very realistic virtual pets tend to cause an aversion similar to that caused by humanlike characters. We conducted two empirical studies using cat renderings to investigate the effects of realism, stylization, and facial expressions of virtual cats on human perception. Through qualitative feedback, we gained deeper insight into the perception of realistic computer-generated animals. Our results indicate that depicting virtual animal-like characters at realism levels used in current video games causes negative reactions, just as the uncanny valley predicts for humanlike characters. We derive design implications to avoid this sensation and suggest that virtual animals should be given either a completely natural or a stylized appearance. We propose to further examine the uncanny valley by including artificial animals.
P. Knierim, V. Schwind, A. M. Feit, F. Nieuwenhuizen, and N. Henze, “Physical Keyboards in Virtual Reality: Analysis of Typing Performance and Effects of Avatar Hands,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, R. L. Mandryk, M. Hancock, M. Perry, and A. L. Cox, Eds. ACM, 2018, pp. 345:1–345:9, doi: 10.1145/3173574.3173919.
Abstract
Entering text is one of the most common tasks when interacting with computing systems. Virtual Reality (VR) presents a challenge as neither the user's hands nor the physical input devices are directly visible. Hence, conventional desktop peripherals are very slow, imprecise, and cumbersome. We developed an apparatus that tracks the user's hands and a physical keyboard and visualizes them in VR. In a text input study with 32 participants, we investigated the achievable text entry speed and the effect of hand representations and transparency on typing performance, workload, and presence. With our apparatus, experienced typists benefited from seeing their hands and reached almost outside-VR performance. Inexperienced typists profited from semi-transparent hands, which enabled them to type just 5.6 WPM slower than with a regular desktop setup. We conclude that optimizing the visualization of hands in VR is important, especially for inexperienced typists, to enable high typing performance.
T. Dingler, R. Rzayev, A. S. Shirazi, and N. Henze, “Designing Consistent Gestures Across Device Types: Eliciting RSVP Controls for Phone, Watch, and Glasses,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, R. L. Mandryk, M. Hancock, M. Perry, and A. L. Cox, Eds. ACM, 2018, pp. 419:1–419:12, doi: 10.1145/3173574.3173993.
Abstract
In the era of ubiquitous computing, people expect applications to work across different devices. To provide a seamless user experience, it is therefore crucial that interfaces and interactions are consistent across different device types. In this paper, we present a method to create gesture sets that are consistent and easily transferable. Our proposed method entails 1) gesture elicitation on each device type, 2) the consolidation of a unified gesture set, and 3) a final validation by calculating a transferability score. We tested our approach by eliciting a set of user-defined gestures for reading with Rapid Serial Visual Presentation (RSVP) of text for three device types: phone, watch, and glasses. We present the resulting unified gesture set for RSVP reading and show the feasibility of our method for eliciting gesture sets that are consistent across device types with different form factors.
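To illustrate step 3, a minimal Python sketch of one plausible transferability score follows, assuming a simple agreement measure: the fraction of (device, command) pairs whose elicited gesture matches the unified set. The paper's actual formula is not given in the abstract, so the function and data layout below are hypothetical.

# Hypothetical sketch of a transferability score for a unified gesture set.
# Assumed measure: how often the unified gesture for a command matches the
# gesture elicited independently on each device type.

def transferability(unified: dict, per_device: dict) -> float:
    """unified maps command -> gesture; per_device maps device ->
    {command -> gesture}. Returns the fraction of (device, command)
    pairs whose elicited gesture agrees with the unified set."""
    matches = total = 0
    for device, gestures in per_device.items():
        for command, gesture in gestures.items():
            total += 1
            if unified.get(command) == gesture:
                matches += 1
    return matches / total if total else 0.0

# Example: "swipe right" transfers to all three devices, "tap" to two.
per_device = {
    "phone":   {"next": "swipe_right", "pause": "tap"},
    "watch":   {"next": "swipe_right", "pause": "tap"},
    "glasses": {"next": "swipe_right", "pause": "dwell"},
}
unified = {"next": "swipe_right", "pause": "tap"}
print(transferability(unified, per_device))  # 0.833...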
V. Schwind, P. Knierim, C. Tasci, P. Franczak, N. Haas, and N. Henze, “‘These are not my hands!’: Effect of Gender on the Perception of Avatar Hands in Virtual Reality,” in Proceedings of the CHI Conference on Human Factors in Computing Systems, G. Mark, S. R. Fussell, C. Lampe, m. c. schraefel, J. P. Hourcade, C. Appert, and D. Wigdor, Eds. ACM, 2017, pp. 1577–1582, doi: 10.1145/3025453.3025602.
Abstract
Rendering the user's body in virtual reality increases immersion and presence, the illusion of "being there". Recent technology enables determining the pose and position of the hands to render them accordingly while interacting within the virtual environment. Virtual reality applications often use realistic male or female hands, mimic robotic hands, or use cartoon hands. However, it is unclear how users perceive different hand styles. We conducted a study with 14 male and 14 female participants in virtual reality to investigate the effect of gender on the perception of six different hands. Quantitative and qualitative results show that women perceive lower levels of presence while using male avatar hands, and men perceive lower levels of presence using non-human avatar hands. While women dislike male hands, men accept and feel presence with avatar hands of both genders. Our results highlight the importance of considering users' diversity when designing virtual reality experiences.
V. Schwind, P. Knierim, L. L. Chuang, and N. Henze, “‘Where’s Pinky?’: The Effects of a Reduced Number of Fingers in Virtual Reality,” in Proceedings of the Annual Symposium on Computer-Human Interaction in Play (CHI PLAY), B. A. M. Schouten, P. Markopoulos, Z. O. Toups, P. A. Cairns, and T. Bekker, Eds. ACM, 2017, pp. 507–515, doi: 10.1145/3116595.3116596.
Abstract
The hands of one's avatar are possibly the most visible aspect when interacting in virtual reality (VR). As video games in VR proliferate, it is important to understand how the appearance of avatar hands influences the user experience. Designers of video games often stylize hands and reduce the number of fingers of game characters. Previous work shows that the appearance of avatar hands has significant effects on the user's presence - the feeling of 'being' and 'acting' in VR. However, little is known about the effects of missing fingers of an avatar in VR. In this paper, we present a study (N=24) that investigated the effect of hand representations by parametrically varying the number of fingers of abstract and realistically rendered hands. We show that decreasing the number of fingers of realistic hands leads to significantly lower levels of presence, which is not the case for abstract hands. Qualitative feedback collected through think-aloud and video revealed potential reasons for the different assessment of realistic and abstract hands with fewer fingers in VR. We contribute design implications and recommend considering human-likeness when a reduction of the number of fingers of avatar hands is desired.
H. V. Le, V. Schwind, P. Göttlich, and N. Henze, “PredicTouch: A System to Reduce Touchscreen Latency using Neural Networks and Inertial Measurement Units,” in Proceedings of the ACM International Conference on Interactive Surfaces and Spaces (ISS), S. Subramanian, J. Steimle, R. Dachselt, D. M. Plasencia, and T. Grossman, Eds. ACM, 2017, pp. 230–239, doi: 10.1145/3132272.3134138.
Abstract
Touchscreens are the dominant input mechanism for a variety of devices. One of the main limitations of touchscreens is the latency to receive input, refresh, and respond. This latency is easily perceivable and reduces users' performance. Previous work proposed to reduce latency by extrapolating finger movements to identify future movements - albeit with limited success. In this paper, we propose PredicTouch, a system that improves this extrapolation using inertial measurement units (IMUs). We combine IMU data with users' touch trajectories to train a multi-layer feedforward neural network that predicts future trajectories. We found that this hybrid approach, combining software-based prediction with hardware IMU data, can significantly reduce the prediction error and thereby latency effects. We show that using a wrist-worn IMU increases the throughput by 15% for finger input and 17% for a stylus.
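As a rough illustration of the approach (not the authors' implementation), the following Python sketch shows a small feedforward network that maps a window of recent touch positions plus an IMU sample to a predicted future position. The window length, layer sizes, and randomly initialised weights are assumptions standing in for trained parameters.

# Minimal sketch of the PredicTouch idea: a feedforward network that maps
# a window of recent touch positions plus IMU samples to a predicted touch
# position one frame ahead. All dimensions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

WINDOW = 8          # recent touch samples (x, y)
IMU_DIM = 6         # accelerometer + gyroscope (3 axes each)
IN_DIM = WINDOW * 2 + IMU_DIM
HIDDEN = 32

# Randomly initialised weights stand in for trained parameters.
W1 = rng.normal(0, 0.1, (IN_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, 2))   # output: predicted (x, y)
b2 = np.zeros(2)

def predict_next(touch_window, imu_sample):
    """Forward pass: recent trajectory + current IMU -> future position."""
    x = np.concatenate([np.asarray(touch_window).ravel(), imu_sample])
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

touches = rng.uniform(0, 1, (WINDOW, 2))   # normalised screen coordinates
imu = rng.normal(0, 1, IMU_DIM)
print(predict_next(touches, imu))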
V. Schwind, K. Wolf, and N. Henze, “FaceMaker - A Procedural Face Generator to Foster Character Design Research,” in Game Dynamics, O. Korn and N. Lee, Eds. Springer International Publishing, 2017, pp. 95–113, doi: 10.1007/978-3-319-53088-8_6.
P. Knierim et al., “Tactile Drones - Providing Immersive Tactile Feedback in Virtual Reality through Quadcopters,” in Proceedings of the CHI Conference on Human Factors in Computing Systems - Extended Abstracts (CHI-EA), G. Mark, S. R. Fussell, C. Lampe, m. c. schraefel, J. P. Hourcade, C. Appert, and D. Wigdor, Eds. ACM, 2017, pp. 433–436, doi: 10.1145/3027063.3050426.
Abstract
Head-mounted displays for virtual reality (VR) provide high-fidelity visual and auditory experiences. Other modalities are currently less supported. Current commercial devices typically deliver tactile feedback through controllers the user holds in the hands. Since both hands are occupied and tactile feedback can only be provided at a single position, research and industry proposed a range of approaches to provide richer tactile feedback. Approaches such as tactile vests or electrical muscle stimulation were proposed but require additional body-worn devices. This limits comfort and restricts the provided feedback to specific body parts. With this Interactivity installation, we propose quadcopters to provide tactile stimulation in VR. While the user is visually and acoustically immersed in VR, small quadcopters simulate bumblebees, arrows, and other objects hitting the user. While the user wears a VR headset, mini-quadcopters controlled by an optical marker tracking system provide the tactile feedback.
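For intuition, here is a minimal Python sketch of how such feedback might be driven, assuming a simple proportional controller over positions from the optical tracking system. The gain, speed cap, and velocity interface are illustrative assumptions, not the installation's actual control code.

# Illustrative sketch only: a proportional controller that steers a
# mini-quadcopter toward a contact point on the user's body, given
# positions from an optical marker tracking system.
import numpy as np

KP = 1.5          # proportional gain (assumed)
MAX_SPEED = 0.5   # m/s safety cap (assumed)

def velocity_command(drone_pos, contact_point):
    """Return a capped velocity vector pointing at the contact point."""
    error = np.asarray(contact_point) - np.asarray(drone_pos)
    v = KP * error
    speed = np.linalg.norm(v)
    if speed > MAX_SPEED:
        v *= MAX_SPEED / speed
    return v

print(velocity_command([0.0, 0.0, 1.2], [0.3, 0.1, 1.4]))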
T. Dingler, R. Rzayev, V. Schwind, and N. Henze, “RSVP on the go - Implicit Reading Support on Smart Watches Through Eye Tracking,” in Proceedings of the ACM International Symposium on Wearable Computers (ISWC), ACM, 2016, pp. 116–119, doi: 10.1145/2971763.2971794.
Abstract
While smartwatches have become common for mobile interaction, one of their main limitations is the small screen size. To facilitate reading activities despite this limitation, reading with Rapid Serial Visual Presentation (RSVP) has been shown to be feasible. However, when text is presented in rapid sequence, single words are easily missed due to blinking or briefly glancing up from the screen. This gets worse the more the reader is engaged in a secondary task, such as walking. To give implicit control over the reading flow, we combined an RSVP reading application on a smartwatch with a head-worn eye tracker. When the reading flow is briefly interrupted, the text presentation automatically pauses or backtracks. In a user study with 15 participants, we show that using eye tracking in combination with RSVP increases users' comprehension compared to a touch-based UI for controlling the text presentation. We argue that eye tracking will be a valuable extension for future smartwatch interaction.
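A minimal sketch of such a gaze-contingent control loop is shown below, assuming the eye tracker exposes a boolean "gaze on screen" signal. The backtrack distance and the rendering stand-in are placeholders, not the study's implementation.

# Assumed logic for gaze-contingent RSVP: present one word at a time;
# if gaze leaves the watch screen, pause and step back a few words so
# the reader does not miss text while glancing away.
import time

WPM = 300
BACKTRACK = 2   # words to rewind after an interruption (assumption)

def rsvp(words, gaze_on_screen):
    """gaze_on_screen: callable returning True while the user looks
    at the display (e.g. fed by a head-worn eye tracker)."""
    delay = 60.0 / WPM
    i = 0
    while i < len(words):
        if not gaze_on_screen():
            # Reading flow interrupted: wait, then backtrack.
            while not gaze_on_screen():
                time.sleep(0.05)
            i = max(0, i - BACKTRACK)
        print(words[i])          # stand-in for rendering on the watch
        time.sleep(delay)
        i += 1

rsvp("Reading with rapid serial visual presentation".split(), lambda: True)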
A. Voit, T. Machulla, D. Weber, V. Schwind, S. Schneegaß, and N. Henze, “Exploring Notifications in Smart Home Environments,” in Proceedings of the International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct (MobileHCI), ACM, 2016, pp. 942–947, doi: 10.1145/2957265.2962661.
Abstract
Notifications are a core mechanism of current smart devices. They inform about a variety of events, including messages, social network comments, and application updates. While users appreciate the awareness that notifications provide, notifications cause distraction, higher cognitive load, and task interruptions. With the increasing importance of smart environments, the number of sensors that could trigger notifications will increase dramatically. A flower with a moisture sensor, for example, could create a notification whenever the flower needs water. We assume that current notification mechanisms will not scale with the increasing number of notifications. We therefore explore notification mechanisms for smart homes. Notifications are shown on smartphones, on displays in the environment, next to the sending objects, or on the user's body. In an online survey, we compare the four locations across four scenarios. While different aspects influence the perceived suitability of each notification location, the smartphone is generally rated best.
L. Lischke, V. Schwind, K. Friedrich, A. Schmidt, and N. Henze, “MAGIC-Pointing on Large High-Resolution Displays,” in Proceedings of the CHI Conference on Human Factors in Computing Systems - Extended Abstracts (CHI-EA), J. Kaye, A. Druin, C. Lampe, D. Morris, and J. P. Hourcade, Eds. ACM, 2016, pp. 1706–1712, doi: 10.1145/2851581.2892479.
Abstract
Display space in offices has constantly increased over the last decades. We believe that this trend will continue and ultimately result in the use of wall-sized displays in the future office. One of the most challenging tasks while interacting with large high-resolution displays is target acquisition. The most important challenges reported in previous work are the long distances the pointer needs to travel while still enabling precise selection, as well as locating the pointer on the large display. In this paper, we investigate whether MAGIC-Pointing, controlling the pointer through eye gaze, can help overcome both challenges. We implemented MAGIC-Pointing for a 2.85 m x 1.13 m display. Using this system, we conducted a target selection study. The results show that using MAGIC-Pointing for selecting targets on wall-sized displays significantly decreases the task completion time and also decreases the users' task load. We therefore argue that MAGIC-Pointing can help make interaction with wall-sized displays usable.
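The underlying technique can be sketched in a few lines of Python, assuming the conservative MAGIC variant in which the pointer warps to the gaze point only when the gaze lands sufficiently far from the cursor, with fine positioning left to the mouse. The threshold and the mouse interface are illustrative assumptions, not the study's code.

# Sketch of conservative MAGIC pointing: warp the pointer to the gaze
# position on large gaze-to-cursor distances; refine with the mouse.
import math

WARP_THRESHOLD_PX = 200   # assumed distance that triggers a warp

def update_pointer(pointer, gaze, mouse_delta):
    """Return the new pointer position from gaze and relative mouse input."""
    dist = math.dist(pointer, gaze)
    if dist > WARP_THRESHOLD_PX:
        pointer = gaze                      # coarse jump to gaze target
    return (pointer[0] + mouse_delta[0],    # fine adjustment via mouse
            pointer[1] + mouse_delta[1])

print(update_pointer((100, 100), (2400, 800), (3, -1)))  # warps, then nudges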
V. Schwind and S. Jäger, “The Uncanny Valley and the Importance of Eye Contact,” in Mensch und Computer 2015 - Tagungsband, S. Diefenbach, N. Henze, and M. Pielot, Eds. De Gruyter Oldenbourg, 2015, pp. 153–162.
Abstract
The Uncanny Valley hypothesis describes the negative emotional response of human observers that is evoked by artificial figures or prostheses with a human-like appearance. Many studies have pointed out the importance of facial features but did not further investigate the role of eye contact in decision making about artificial faces. In this study, we recorded the number and duration of fixations of participants (N = 53), gaze movements and fixations on different areas of interest, as well as the response time when a participant judged a face as non-human. In a subsequent questionnaire, we collected subjective ratings. In our analysis, we found correlations between likeability and the duration of eye fixations on the eye area. The gaze sequences show that artificial faces were visually processed similarly to real ones and were mostly not judged as artificial as long as the eye regions had not been considered.