Tactile Communication in Extreme Contexts: Exploring the Design Space Through Kiteboarding

This paper uses kiteboarding as an experimental platform to explore how technologies can support communication needs in mentally and physically demanding contexts. A kite control bar with embedded sensors and actuators communicates instructions through voice or tactile cues, allowing us to explore how communication for control guidance can be facilitated. Tactile cues proved effective in changing behavior. Voice, however, communicated planning models and directional guidance better than tactile cues, yet may negatively impact the experience. The experiments highlight the need for better ways for communication tools to support mental models.


Kiteboarding
In this section we describe kiteboarding, drawing on expert and instructor interviews, observations, and the authors' combined experience of more than 20 years in the sport. Kites are usually between 6 m² and 17 m², depending on wind speed and rider weight. The kite is controlled through lines between 10 and 30 m long (see Figure 1). Steering left is done by pulling the left side of the control bar, and likewise for the right. Some novices think turning the bar like a steering wheel will cause the kite to turn; this is not the case, as tension must be applied to the steering line. Power is controlled by either pushing the whole bar away, which depowers the kite, or by pulling the bar towards the body, which powers it up. Board control skills, including edging and balance, are also crucial for kitesurfing.

Communication
While kite instructors might demonstrate their skills next to a student, interviews with 5 kiteboarding instructors revealed that the preferred method of communication between students and instructors is voice. However, during training and riding, the athletes are quickly separated. In these situations, hand signals are used to communicate commands and requests. Some kite schools use radio communication helmets; however, these are expensive and delicate, and students often become overwhelmed by water noise and poor sound quality. Instructors reported to us that students stop responding to voice commands during high levels of excitement. Perhaps the overloaded audio channel leads to confusion or change deafness, or causes instructions to blend in with environmental noise.

Progression
A rider typically progresses in training in different ways depending on skill level. Kitesurfing is very difficult to learn without taking a beginner's course [16], while an intermediate rider or expert typically learns by watching others, viewing instructional videos, and through much practice. For feedback, riders often return to shore for critique from others. Kite control is a crucial component of progression for riders at all levels. Training could involve moving the kite to a position in the sky requested by the instructor, holding it steady at one position, or moving it through various positions in a sequence. The kite position is typically described using the positions of a clock face, as shown in Figure 2; the entire face is referred to as the wind window, as shown in Figure 3. A kite held at 12 o'clock is directly overhead, while a kite positioned at 9 or 3 o'clock is nearly touching the ground.

Technology for Kiteboarding
Kiteboarding already makes use of various technologies, including personal video cameras (e.g. GoPro) to capture footage and GPS devices or smartphones to map a route. Other products promise to track kite and board movements and replay them virtually [33], enabling riders to evaluate the height, length, and speed of jumps. We recognized the challenges and needs of communicating guidance and suggestions for skill improvement, yet we find few examples of technology to support this. Furthermore, insufficient attention has been given to the physical equipment, including the control bar, board, and harness, which provides points of contact with the body and capacity for holding embedded technology.

Related Work
Work examining technologies that support physical activities includes research on technologies in sports and physical activities, technologies supporting navigation, and research exploring the limits of human perception in various modalities. Of particular interest is research that embeds communication technologies into activities to better understand the context and to explore performance and responses to technology-based interventions.

Technologies in Sports and Physical Activity
Research has explored interactive technologies and how they can support physical activities by improving performance and experience and by motivating physical activity. Much research has focused on exertion interfaces, which examine technologies to foster exercise interactions [25], enhancing social play through familiar activities, using elements from computer games to motivate physical activity, and new forms of expression and performance. Various examples of research explore how technology can improve the performance and technique of athletes [2,8,34], the experience of performing exertion activities [26,28], and how to design for such interactions [12,29]. Research has distinguished between interactive sport-training games and integrated systems [19]: the former are games that train skills intended to transfer back into the sport activity, while the latter embed technology directly into the activity and seek to improve performance and training. Training systems have been studied using tactile instructions to give athletes cues on navigation, timing, and posture in different sports [8,35].

Technologies for Navigation
We examine research involving audio, tactile, and/or haptic means of delivering feedback, as these modalities might free the rider's visual sense, which is generally focused on their equipment and on spotting and avoiding obstacles. Navigational cues are delivered using simple, symbolic, ambient, and spatial representations of directions. Examples of research using audio technologies for navigation include simple directional voice commands for visually impaired users [22] and spatial sounds presenting general directions [15,41]. Research projects have focused on aiding navigation and orientation through tactile feedback, as described in [6], including tactile feedback delivered to various parts of the body through wearable and mobile devices. Research has explored smartphones [27], tactile belts [14,38], and shoes [21] to give simple directional cues, as well as symbolic cues for communicating direction [39] and ambient guidance [9,37]. In terms of haptic research, simple directional signals have been explored for guiding pedestrians [24], and navigation through symbolic haptic means has been explored in the form of shape- and weight-shifting technologies [13]. More direct forms of guidance are also relevant, including examples of physically pulling the user's body [1,20] and providing guidance through shape-changing mobile devices that simulate a physical guide or handrail [17].

Human Factors in Multimodal Interfaces
The capacity and limits of human perception and attention in high cognitive workload environments have been explored through research on human modalities, attention, and processing resources, as well as insights into mapping the modality to the appropriate information. Multiple Resource Theory proposes that humans can process information from multiple sensory channels simultaneously, and that in some situations it can be more effective to divide attention between visual and auditory channels than to load a single sensory channel [40]. Changes in modalities can be easily overlooked due to lack of attention [32]. Change blindness can occur in vision, in audition (change deafness), or even within the tactile modality [10]. There are also attempts at providing overall guidance on the selection of communication modality for a given task [7].

Research Problem
While there is much work on technologies to support various activities, there has been less focus on communication in the demanding context of extreme sports. With kiteboarding as a platform for exploring high demands on the user, we examine the use of voice and tactile communication technology to support the needs of kiteboarders. How does the choice of modality (voice or tactile) impact the effectiveness of control guidance? Are there any interactions or complementary effects of using voice and tactile feedback together? Aside from objective measurements of performance, are there subjective measures that reveal preferences and experiential differences?

Design and Implementation
When a student is learning how to sail a boat, an instructor sitting beside the student is able to exert direct control and manipulate the same controls as the student. This is known as hand-over-hand training through a shared object. In kiteboarding, however, this is not possible due to the separation of instructor and student. This is problematic not only for the beginner but also for the intermediate or expert rider who wants direct feedback on their technique. The overall vision of our design has been to capture the benefits of hand-over-hand instruction. We present the current prototype that aims to capture this vision, followed by details about the issues we explored during the design process.

Current Implementation
By adopting haptic feedback technologies in a "shared object" model, as illustrated by the inTouch system [3], the design vision is a control bar that connects two people through vibrotactile feedback. In Figure 4A, both the instructor and the student are holding a control bar; however, the instructor bar is not attached to a kite. When the instructor moves the bar to a new position, the corresponding side of the student bar vibrates, indicating the direction to move and thereby which side to pull, as shown in Figure 4B. This mapping is similar to the metaphor of "tapping on one's shoulder" [31]. When the student bar reaches the same position as the instructor bar, the directional vibrotactile signal stops and a short tactile confirmation cue can be given. For testing purposes, only the student bar is part of the current implementation. It consists of the tactile kite control bar and a Java-based test control system on a PC for facilitating evaluations and logging performance data. The tactile bar prototype is shown in Figure 5 and consists of (A) a waist harness holding an Arduino Leonardo microcontroller inside a box (B), a kite control bar equipped with a large-mass Eccentric Rotating Mass (ERM) vibrotactile actuator from a PlayStation controller at each end of the bar (C and E), and (D) a SparkFun MMA7361 3-axis accelerometer and a 6 mm ERM vibrotactile actuator in the middle of the bar. The bar is attached to the harness with a leash (G), ensuring that the kite cannot accidentally fly away. A signal wire along the leash (F) enables communication between the bar and the microcontroller. Based on accelerometer readings, the prototype can provide tactile feedback on the right side, the left side, or both simultaneously. A smoothing algorithm provides stable accelerometer readings, and the small center-mounted vibrator can distribute stimuli across the whole bar.
The prototype can serve as a research platform to study different kinds of feedback for teaching a variety of kiting skills (like "Let go of the bar!") and even multi-kite coordination (like kiting in formation).
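As a rough illustration of this feedback logic (the actual firmware is Arduino-based and not listed in the paper), the mapping from bar angle to actuator cues can be sketched in Python. The class and method names, the exponential moving average used for smoothing, and its smoothing factor are illustrative assumptions, not the authors' implementation:

```python
class TactileBar:
    """Sketch of the student-bar feedback logic: smooth the accelerometer
    angle, then vibrate the side the student should pull, or confirm when
    the bar matches the instructor's position."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha      # smoothing factor (assumed value)
        self.smoothed = 0.0

    def smooth(self, raw_angle):
        # Exponential moving average to stabilize jittery accelerometer readings.
        self.smoothed = self.alpha * raw_angle + (1 - self.alpha) * self.smoothed
        return self.smoothed

    def guidance(self, student_angle, instructor_angle, tolerance=18.0):
        """Return 'left', 'right', or 'confirm' (center-bar cue)."""
        error = instructor_angle - self.smooth(student_angle)
        if abs(error) <= tolerance:
            return "confirm"    # short confirmation cue, then silence
        # Vibrate the side to pull, per the "tap on the shoulder" mapping.
        return "left" if error < 0 else "right"
```

With `alpha=1.0` the smoothing passes raw readings straight through, which makes the mapping easy to inspect in isolation.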

Design Exploration
The current prototype is the result of several iterations in which different feedback technologies were considered. During the process the design was evaluated by 5 instructors, 2 kiteboarding equipment designers from the CrazyFly kiteboarding company, and various kiteboarders, including the authors of this paper. The process centered on four factors: placement, number of actuators, and intensity, as identified in [11], plus the mapping of the system logic to actuation events. Regarding intensity, two types of vibrotactile actuators were considered: the ERM type and the Linear Resonant Actuator (LRA) type. ERM feedback is often stronger and more intense, while LRAs are smaller, more precise, and faster. Figure 6 illustrates different options for the placement and number of vibrotactile actuators. Adding small LRA vibrators inside the control bar (Figure 6A) causes the stimuli to resonate throughout the bar because of tight contact with the material. The same challenge arises when mounting LRAs outside the bar (Figure 6B). Isolating the actuators from the bar is possible with an insulating material such as foam (Figure 6B), but when pressure is applied, contact is again established. This is problematic when trying to create an animated stimulus, for example from the middle of the bar to its ends. The overall result of adding more actuators inside or outside the bar is that one cannot tell where the stimulus is active. This reflects the phenomenon of "apparent location": when two separated vibrations are active on the skin at the same time, they can be perceived as a single vibration from somewhere in between [11]. This suggests choosing simplicity over complexity and high positional fidelity when communicating tactile stimuli through the control bar. Mounting vibrotactile actuators directly on the lines (Figure 6C) isolates them from the bar.
Mounting powerful large-mass ERM actuators at each end of the bar turned out to be very effective (Figure 6D). With this setup it is easy to discern which side is vibrating, so it became part of the current design. Initial studies explored mapping options. Evaluations with instructors, kiteboarding equipment designers, and others showed that pulling on the side that is vibrating seemed the most intuitive. This is similar to the metaphor of "tapping on one's shoulder" [31] or steering a motorcycle towards the side of the active tactile stimulus [30]. Some use the opposite mapping when steering a car, where the driver must steer away from the felt cue [36]. In the case of driving, that approach seems optimal, because the driver would steer away from the road barrier.

Evaluation
The evaluation of the system focused on performance in kite control. Two study sessions were conducted; to avoid learning effects, each session was conducted with a new set of 10 participants. In each session participants performed a kite control task under three conditions providing different modes of guidance: tactile feedback only (tactile only), voice feedback only (voice only), and voice commands with tactile feedback (tactile + voice). The order presented to the subjects was balanced and randomized. In each condition the participant steered the kite to 7 different positions. The sessions took place at the beach to provide a context as close to the real-world scenario as possible. From the results of session I, opportunities for improvement were identified, implemented in session II, and evaluated with a different set of participants.

Participants and Task
As mentioned, 10 participants took part in each session, for a total of 3 females and 17 males between the ages of 19 and 41 (average age 25.9 years), comprising 3 novices and 17 proficient kiteboarders. Participants were briefed on the kite control task and asked to fill out a consent form and a questionnaire before and after the study session. Each participant completed the kite control task in each of the three conditions. The task is based on a typical kite control exercise. Participants must orient the control bar to the 12 o'clock position to begin the session. The participant is then guided through a series of 7 target positions mapped to positions on the clock face. Upon reaching each target position, the participant is required to hold it for 5 seconds, after which they are guided to the next position in the sequence. This sequence, always beginning and ending with the 12 o'clock position, was provided to the participants. Half of the participants proceeded through the positions 2, 10, 1, 12, 11, 2, and the other half proceeded in reverse order; this assignment was balanced and randomized.
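The counterbalancing described above can be sketched as follows. The cyclic assignment of condition orders and the alternation of forward and reversed target sequences are assumptions for illustration, not the authors' exact randomization scheme:

```python
from itertools import permutations

# Hypothetical counterbalancing sketch: cycle through all six orders of
# the three conditions, and give every second participant the reversed
# target sequence (runs always begin and end at 12 o'clock).
CONDITIONS = ("tactile only", "voice only", "tactile + voice")
TARGETS = [2, 10, 1, 12, 11, 2]

def assignment(participant_id):
    """Return (condition order, target sequence) for one participant."""
    orders = list(permutations(CONDITIONS))   # 6 possible condition orders
    order = orders[participant_id % len(orders)]
    targets = TARGETS if participant_id % 2 == 0 else TARGETS[::-1]
    return order, targets
```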

Study Session Protocol and Measures
The participants first answered a demographic questionnaire. They were then asked to hold and orient the control bar to explore the tactile sensations and understand the mapping between the position of the bar and the vibrotactile sensations. When participants were satisfied with their control abilities and were able to maintain an indicated position, the specific study tasks were introduced. After completing the tasks in all conditions, a second questionnaire about the experience was given. Participant performance and response to the tasks were measured through logged system events and self-reported feedback. Performance data was logged by the test control system running on a PC connected to the control bar microcontroller. The log includes events for when the target range is achieved, the number of times the position deviates from the target range (drift), and when the position has been held steady within the target range for 5 seconds. During the study sessions the wind speed was tracked using an anemometer to ensure similar conditions across participants. Both before and after the session, participants answered forced-selection questions and provided additional open-ended feedback.
Q1-2: Preferences. To explore which condition the participants preferred most and least, they were asked to indicate their choice by circling "tactile only", "voice + tactile", or "voice only". Participants could provide supporting details about their choice if they desired.
Q3: Reflection on Performance. To explore self-reported beliefs about performance, participants were asked to indicate in which of the three conditions they believed their performance was best. Again, they could choose tactile only, voice + tactile, or voice only and provide additional supporting details.
Q4: Improvements and Suggestions. To better understand impressions of the system, participants were asked to provide suggestions for improvements.
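The logged measures (time to first reach the target, drift count, and the 5-second steady hold) can be sketched as a small event logger. The structure and names are illustrative assumptions, not the authors' Java implementation:

```python
class TargetLogger:
    """Sketch of the logged performance measures for one target position:
    when the target range is first achieved, how many times the position
    drifts out of it, and whether it was held steady for 5 seconds."""

    HOLD_SECONDS = 5.0

    def __init__(self):
        self.reached_at = None    # time the target range was first achieved
        self.drifts = 0           # times the position left the target range
        self.held = False
        self._enter_time = None
        self._inside = False

    def update(self, t, inside_target):
        """Feed one sample: current time and whether the kite is in range."""
        if inside_target and not self._inside:
            if self.reached_at is None:
                self.reached_at = t          # log "target achieved"
            self._enter_time = t
        elif not inside_target and self._inside:
            if self.reached_at is not None:
                self.drifts += 1             # log a drift event
            self._enter_time = None
        if inside_target and self._enter_time is not None:
            if t - self._enter_time >= self.HOLD_SECONDS:
                self.held = True             # held steady for 5 s
        self._inside = inside_target
```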

Session I: Controlling a Kite
Both sessions were conducted at the beach with a 3 m² trainer kite on land, with an average wind speed of 8 m/s. The settings of the system in session I are illustrated in Figure 7, which shows the clock face with 12 o'clock as an example target position. The figure has four layers: the clock positions, the vibration events, the logged events, and the angle in degrees. In the tactile only and tactile + voice conditions, tactile feedback was provided as indicated by the "vibration events" layer. If the kite is steered away from the example target of 12 o'clock towards 2 o'clock, full vibration is active on the left side (L), and when passing 1 o'clock it changes to weak vibration. When within 18° of the target, vibration stops. When voice commands are given in the tactile + voice and voice only conditions, the logging remains the same. The facilitator calls out the position, e.g. "move the kite to 2 o'clock", then uses voice commands equivalent to the vibration signals (strong/weak) to give guidance: "more left/right", "a little more left/right", and "hold it". The system alerts the facilitator when to verbalize a command, and which one.
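Reading the thresholds off Figure 7 as described (full vibration beyond one clock hour, i.e. 30°, weak vibration between 18° and 30°, silence within 18° of the target), the session I cue logic can be sketched as follows. The sign convention (positive error = kite to the right of the target) is an assumption:

```python
DEG_PER_HOUR = 30.0   # one clock-face hour in degrees

def session1_cue(current_deg, target_deg):
    """Session I settings sketch: return (side, level) for the angular
    error between kite and target. Within 18 deg the bar is silent; within
    one clock hour the cue is weak; beyond that it is full strength.
    The vibrating side is the side the participant should pull."""
    error = current_deg - target_deg
    side = "left" if error > 0 else "right"   # pull back towards the target
    mag = abs(error)
    if mag <= 18.0:
        return (None, "off")
    if mag <= DEG_PER_HOUR:
        return (side, "weak")
    return (side, "full")
```

For example, a kite at 2 o'clock (60°) with a 12 o'clock target yields a full-strength cue on the left side, matching the scenario in the text.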

Results of Session I
While all of the experienced kiteboarders completed the study session successfully, 3 novice participants were not able to complete the kite control task, and their partial results have been withheld. Insights gathered from their behavior revealed a mismatch in their mental model of how to control a kite, which we discuss in more detail later in the paper. While participants were able to complete the control task quickly in all conditions, the tactile only condition resulted in longer average times to achieve a target, as shown in Figure 9, and in more movement outside the target range, as shown in Figure 10. The results are summarized in Table 1 and explained in more detail in the coming sections.
Self-reported Feedback from Session I. Q1-2 explored the participant preferences for the three conditions. Three participants preferred tactile only, while two preferred it the least. Two participants preferred voice + tactile and three indicated this as least preferred. Results from Q3 revealed that four participants believed they did best in the voice only condition. Two participants believed they did best in voice + tactile.
A review of the open-ended feedback suggests various motivations for their answers. Those who least preferred voice + tactile said that they relied on only one modality in that condition, as participants 3, 4, and 5 stated: P3: "Too much information." P4: "Not using the vibrations because I concentrate on the voice instructions." P5: "It was a bit confusing to both be listening and feeling simultaneously."

Session II: Vibrotactile Refinements
From the results of session I, we recognized that participants responded differently to voice and tactile feedback not only in the amount of time to complete the tasks, but their steering behavior varied noticeably. When tactile cues alone were provided, participants hovered near the target range or oscillated between the boundaries of the range, yet when tactile was combined with voice and when guided by voice alone, participants did not engage in these wandering behaviors. We discuss this in more detail in the discussion section. Feedback suggested that additional guidance was needed. Therefore, we developed a tighter range of guidance signals as shown in Figure 8B and a vibrotactile confirmation cue to alert when the target position was reached, yet we maintained the previous target range. This was intended to guide the participant closer to the center of the target. The confirmation cue was delivered by the middle vibrotactile actuator as indicated in Figure 5D & 8C. The enhancements to the new settings were evaluated with new participants at the beach, following the same study protocol as in study session I.
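A sketch of the refined session II cue logic follows, assuming (for illustration only) a ±9° inner band at which guidance stops and the one-shot center-bar confirmation fires; the paper does not state the exact inner threshold, only that weak guidance was tightened inside the unchanged target range:

```python
def session2_cue(current_deg, target_deg, reached_before):
    """Session II refinement sketch: the ±18 deg target range is kept,
    but weak guidance continues inside it until the kite nears the
    center, where a one-shot confirmation fires from the middle actuator.
    The ±9 deg inner band is an assumed value for illustration."""
    error = current_deg - target_deg
    mag = abs(error)
    side = "left" if error > 0 else "right"
    if mag <= 9.0:
        # Fire the confirmation only on first arrival, then stay silent.
        return ("center", "confirm") if not reached_before else (None, "off")
    if mag <= 30.0:
        return (side, "weak")   # weak band now extends inside the target range
    return (side, "full")
```

Compared with the session I mapping, the weak cue here keeps nudging the participant away from the range boundary instead of falling silent at ±18°, which is the behavior change the refinement targets.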

Results of Session II
A within-subjects analysis of variance (ANOVA) did not reveal significant differences between conditions. However, independent-samples t-tests between session I and session II suggest that the refinements to the prototype improved performance. Comparing sessions I and II, there were improvements across all conditions in session II. The time to reach a target position in the tactile only condition improved from session I (M=17.86 s, SD=5.68 s) to session II (M=12.57 s, SD=2.57 s), t(15)=2.618, p=0.019. For the tactile + voice condition there is a significant difference between session I (M=12.33 s, SD=1.57 s) and the improved settings in session II (M=10.36 s, SD=1.07 s), t(15)=3.084, p=0.008. Differences with "voice only" cues were not significant (p=0.555). With "tactile only" cues, kiters had significantly fewer average drifts from the target with the refined tactile cues (M=1.1, SD=0.69) than with the original thresholds (M=2.67, SD=1.63), t(15)=2.755, p=0.015. With "tactile + voice" cues, kiters likewise had significantly fewer average drifts from the target with the refined cues (M=0.56, SD=0.31) than with the original thresholds (M=0.94, SD=0.28), t(15)=2.572, p=0.021. Differences with "voice only" cues were again not significant (p=0.974). These results suggest that voice can be more effective than haptic feedback for planning tasks when it provides a model (such as "move the kite to 2 o'clock") for the user to test their actions against, and that refining the feedback with a confirmation of target acquisition can lead to better performance.
Self-reported Feedback from Session II. In session II the majority (7 out of 10) preferred the condition with tactile and voice combined, and none preferred voice feedback alone.
Those who preferred tactile + voice mentioned that voice gives a clear idea of where the target is, while the tactile feedback provides precise information for final adjustments. The remaining 3 out of 10 preferred the tactile only condition, citing a sense of control, quick response time, and precise feedback. The least preferred condition (for 7 out of 10) was voice only; these participants stated that the voice feedback was too slow and imprecise. The 2 out of 10 who least preferred the tactile only condition said they needed guidance on what position they were moving to and felt as if they were exploring to find the right position. The 1 out of 10 who least preferred tactile + voice did not like the redundancy in the feedback. Every participant reported that they did best in the condition they preferred the most. In contrast to the self-reported feedback in session II, the logged data shows no statistically significant difference in the average time to achieve the target between the tactile + voice and tactile only conditions. Surprisingly, in session I the open-ended feedback indicated that participants did not like the additional feedback in the tactile + voice condition, yet it was the most preferred condition in session II. Various participants noted that the confirmation signal from the center of the bar helped them understand when they achieved the target.
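The between-session t statistics reported above can be reproduced from the summary statistics alone, assuming a pooled-variance independent-samples t-test. The group sizes are an inference from df = 15: 7 completing participants in session I (after the 3 withdrawals) and 10 in session II:

```python
import math

def pooled_t(m1, s1, n1, m2, s2, n2):
    """Independent-samples t statistic from summary statistics,
    using the pooled-variance formulation (df = n1 + n2 - 2)."""
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Time-to-target, tactile only, session I vs session II (values from the text):
t_tactile = pooled_t(17.86, 5.68, 7, 12.57, 2.57, 10)   # close to t(15)=2.618
# Time-to-target, tactile + voice, session I vs session II:
t_combined = pooled_t(12.33, 1.57, 7, 10.36, 1.07, 10)  # close to t(15)=3.084
```

The small residual differences from the reported values are consistent with rounding of the published means and standard deviations.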

Discussion
Both sessions indicated that participants were able to perceive and understand the tactile cues. Surprisingly, the results indicate that participants claim to prefer tactile combined with voice over voice alone, while objectively performance is better with voice alone. Feedback from participants suggests that they appreciate the real-time feedback and the confirmation of achieving the target position. The negative sentiment towards voice might be related to the additional perceived effort of decoding the utterance, translating it into guidance, and then taking action. Perhaps voice activates social rules that compete with the intended experience. This deserves further exploration in future studies.
Even though results improved in session II, our observations suggest that in extreme sports, and likely in other extreme contexts, users may adopt a mental model that is poorly suited to success in the activity and that can be difficult to change. We provide further explanation of these findings in this section and discuss the limitations of our work.

Improvements to Tactile Communication
Voice feedback resulted in better performance compared to tactile cues alone, as participants quickly adopted the semantic map, moved to the target position, and, upon reaching it, were told to stop. This is challenging for tactile feedback, which is not well suited to communicating angles as clock positions. Instead, simple guidance (left/right) was provided, which is not always enough: some participants stopped right at the boundary of the acceptable range or oscillated between its two edges. The reasons for this were identified as 1) a missing confirmation, and 2) guidance that stops too soon. Adding a tactile confirmation cue and moving the weak vibration inside the acceptable target range, guiding participants away from the edge (Figure 8), improved the results. Even though tactile feedback can communicate basic guidance, novices' mental models of how a kite is controlled can be mismatched, making guidance difficult.

Mental Models
In session I, 3 people were not able to complete the tasks. The main obstacle was typical for beginners: they failed to adopt an effective mental model of how the kite responds to control bar movements, leading to an inability to maintain a stable position. To control the position of the kite, the rider shortens the line connected to the side of the kite facing the desired direction of travel. Experienced kiters adopt the habit of holding the control bar parallel to the leading edge of the kite, so that pulling on one side of the bar maps to the direction the kite will move. Skilled riders on the water usually follow the same practice, adjusting the orientation of the bar while riding. It is possible to control the kite without holding the bar like this; however, it becomes much more difficult to control and predict its movements.

Limitations
Our work provides initial steps towards developing technologies to support participants in kiteboarding; however, there are limitations worth discussing. In terms of the prototype design and evaluation, there may be concerns about the nature of the evaluation task and its relation to the real-world experience of kiteboarding. While the choice of logged data has been useful in highlighting the differences in response to voice and tactile feedback, we would like to expand to more measures, including tension of the lines, position of the body, and orientation of the board and bar, to identify bad habits.

Conclusion and Future Work
Designers have explored technologies to support physical activities; however, there is less research focused on technologies supporting communication in physically and mentally demanding contexts such as extreme sports. We explored kiteboarding, a complex, dynamic activity with intense demands on control and diminished opportunities for communication, as an experimental platform for finding ways in which technologies can support communication needs. We developed a kite control bar with embedded sensors and actuators to communicate instructions through voice or tactile cues and to explore how technology can facilitate communication for control guidance. Evaluations suggest that familiar voice communication is effective, yet may negatively impact the experience. We show that tactile cues can also be effective at changing behavior; however, tactile feedback was not as flexible as voice for presenting a mental model. Our work demonstrates the value of voice commands in providing quick, high-level semantic models, and shows that tactile cues can provide effective feedback that may be preferred in contexts that place extreme demands on the user. The evaluations also reveal how small changes in tactile feedback can yield significant improvements.
This work celebrates the capacities of human performance and signals opportunities for computer-controlled feedback across modalities. There is a need for continued exploration of cyber-physical systems to examine signals for appropriateness and capacity to support interactions. This paper calls for building an understanding of how to explore scenarios and contexts using multimodal feedback systems, and we encourage continued investigation into the languages of the various modalities and the interactions among them. This seems an area ripe for exploration in the challenge of designing for extreme sports and other contexts in which people are pushed to their limits.
Aside from the existing goals of kiteboarders, it would be interesting to explore other forms of play that can be supported through the use of digital augmentations along the lines of work involving augmentations to expand the experience of a game platform [18] and new forms of social expression through an exertion interfaces approach [28], perhaps exploring opportunities for social interaction among participants and even spectators [23].