Evaluation of Distance-Aware Bimanual Manipulation Techniques for Large High-Resolution Displays

In this paper we present the approach of interaction scaling. It assists users during their current tasks by adjusting interactivity depending on the user's distance to large high-resolution displays. The mapping method of interaction scaling combines the calculation of a distance-adjusted mapping factor with a manual or automatic change of precision levels. In our user study we evaluated how different accuracies, user preferences, and physical navigation affect user performance with distance-aware manipulation techniques. We used symmetric and asymmetric bimanual manipulation techniques that were evaluated with interaction scaling and a direct mapping approach. Further, we differentiated between coarse-grained and fine-grained accuracy of manipulation tasks. The study identified that interaction scaling improves user performance for very precise manipulation tasks. The participants were able to manipulate objects more accurately with the asymmetric technique than with the symmetric technique. Most participants preferred manual switching; however, half of them could solve the tasks equally well with automatic switching.


Introduction
Nowadays, large high-resolution displays (LHRDs) are used in a wide range of application areas such as product engineering, geospatial imaging, and scientific visualization. They combine a large physical display area with high pixel density. In addition, LHRDs change the way users perceive and interact with information compared to small displays [1]. This results in modified user behavior and new interaction possibilities. We must consider these aspects when designing information visualization and interaction techniques for LHRD environments.
For instance, physical navigation provides a natural behavior for user interaction: users step forwards or backwards in front of a large display to perceive detailed information or the global context of the information. Similar to visual perception tasks, different interaction tasks are performed at close-up range or at a distance. These tasks require different precision and sensitivity of user input.
On the one hand, there are tasks that require fast and imprecise user input, such as moving objects across large distances with little effort. On the other hand, some tasks need precise and slow user input, like exact object positioning despite natural hand tremor. To overcome the human precision limit, a control-display gain or mapping factor is used. For instance, a low mapping factor generates slower pointer motions in display space than the actual user input in motor space, and conversely for a high mapping factor. In general, different mapping factors are used to map the user input to coarse-grained or fine-grained virtual interaction. Therefore, we need methods to switch between these precision levels during user interaction.
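As a minimal illustration of such a control-display gain, the mapping factor simply scales motor-space motion into display-space motion (a sketch; the function name and the example values are ours, not from the paper):

```python
def apply_mapping_factor(motor_delta_mm: float, mf: float) -> float:
    """Map a physical (motor-space) movement to a virtual (display-space)
    movement using a control-display gain / mapping factor mf."""
    return mf * motor_delta_mm

# mf < 1: the pointer moves more slowly than the hand -> precise input
precise = apply_mapping_factor(10.0, 0.5)  # 5.0 mm of pointer motion
# mf > 1: a small hand movement yields a large pointer movement -> coarse input
coarse = apply_mapping_factor(10.0, 4.0)   # 40.0 mm of pointer motion
```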
Our approach of interaction scaling assists users during their current activity by adjusting the precision of interaction depending on the user's distance to the display. This work is an extension of interaction scaling presented in [22], in which we investigate how users benefit from a distance-aware manipulation technique. We improved the calculation of a distance-adjusted mapping factor with a switching method for precision levels. The automatic change of precision levels uses an implicit granularity control via a task-based and distance-adapted mapping factor. In contrast, the manual switching of precision levels uses an explicit switch between a distance-adapted mapping factor and one-to-one mapping. We applied the extended interaction scaling approach to bimanual manipulation techniques: a symmetric technique that allows performing the manipulation tasks simultaneously, and an asymmetric technique that differentiates between manipulation tasks. The switching methods and manipulation techniques use different interaction metaphors. For instance, manual switching and the symmetric technique seem easy to use for novice users, whereas the others are more suitable for experts. In our user study we evaluated how different interaction accuracies, user preferences, and physical navigation behavior affect user performance with distance-aware manipulation techniques.
The contributions of our work are: applying interaction scaling with task-based automatic switching of precision levels, an evaluation of distance-aware manipulation techniques, and a discussion of findings on distance-aware manipulation.

Related Work
Previous studies showed that LHRDs positively impact user performance on visualization and manipulation tasks [3,10] and influence users' information perception and interaction [2,26].
Hence, large displays require interaction techniques that support different precision of user input. The challenge is to switch between interaction tasks that require different speed and accuracy of user input. There are several approaches that vary the mapping factor during interaction, such as target-oriented techniques [6,14], velocity-based techniques [8,9,11,13,19,20,23], and manual-switching methods [12,23,28]. However, target-oriented methods require knowledge about the virtual environment. Further, velocity-based approaches require device-specific calculations because of different input modalities (e.g., touchpad, laser pointer, mouse).
Esakia et al. [11] used a velocity-based touchpad in combination with an explicit switching between multiple acceleration curves by varying the number of fingers to control the cursor pointer. Thus, a larger range of dynamic mapping factors is supported.
However, using the user's distance in front of an LHRD allows the mapping factor to be adjusted dynamically and device-independently. Such distance-aware techniques adjust the precision of interaction according to the user's current position [7,18,24,25,27]. For instance, the interactive public ambient display [27] utilizes the user's distance to control public and personal information presentation. In [25] the user's distance controls a virtual light position that affects the user's shadow cast and the positioning of virtual tools during interaction. In multiscale interaction [24] the multiscale cursor changes based on the user's position relative to the display and re-scales the displayed data interactively.
Furthermore, LHRDs invite users to move within the physical space in front of the display (physical navigation). Related work showed that physical navigation increases user performance on navigation and visualization tasks [1-3, 5, 18]. Jakobsen et al. [18] investigated proxemics for visualization tasks: for instance, the physical sense of abstract data was increased by using proxemics, and the user's effort was reduced with proxemics-based zooming and aggregation. In [4] the authors discussed the concept of proxemic interactions in a smart living environment, for example to control a media player by implicit or explicit spatial user movements.
In previous work the user's distance was considered for selection and navigation tasks [3,18,20,24]. We assume that physical navigation also increases user performance on manipulation tasks. Our objective is to find a distance-aware adjustment of interaction precision that exploits physical navigation in LHRD environments. Furthermore, the distance-aware adjustment should be largely device-independent.

Distance-Aware Interaction
As mentioned above, this paper describes a further development of our interaction scaling approach (see [22]). We implemented bimanual symmetric and asymmetric interaction techniques that enable users to manipulate virtual content with suitable accuracy in large display environments. In a preliminary study [22] we evaluated these techniques with a distance-adapted mapping method. We used continuous and discrete distance-adjusted mapping factors with purely relative mapping and compared them with an absolute (1:1) mapping. The results indicated that a distance-adapted continuous mapping factor is more suitable than discrete mapping factors. Furthermore, we investigated an interaction accuracy of 5-3 mm. Subjects were able to manipulate objects easily with absolute mapping, because the natural hand tremor was manageable. Moreover, the drift effect of relative mapping was difficult for users to compensate. We observed that interaction scaling had a positive impact on user performance (e.g., fewer object selections), but we did not find statistically significant results.
Based on these results we integrated a switching method to interaction scaling. We implemented a task-based automatic switching to reduce the drift effect. Some subjects requested fast object manipulation at close-up range of the display, thus, we implemented a manual switching method to support user preferences. Furthermore, we adjusted the continuous mapping functions to support user interaction with high accuracy.

2D Manipulation Techniques
In this section we briefly describe the bimanual 2D manipulation techniques used, which support the fundamental manipulation tasks of selection, positioning, scaling, and rotation. We utilize an indirect interaction technique to interact from various distances. The manipulation techniques use ray casting to determine the virtual cursors of the corresponding interaction devices. We apply only the position data of the input devices (3-DOF) to calculate the virtual cursor position by using an orthographic projection. Thus, hand tremor along the view direction is not visible at the corresponding virtual cursors. Object manipulation is performed in a way similar to common two-finger multitouch gestures.
The asymmetric manipulation technique differentiates between manipulation tasks performed with the dominant hand (i.e., selection and positioning) and the non-dominant hand (i.e., rotation and scaling). For selection, the dominant hand's cursor is moved onto the virtual object and the selection button is pressed (manipulation start). The selected object is translated according to the movements of the dominant hand. When the selection button is released, the object is placed at the cursor's current position (manipulation end). To scale or rotate an object, the user switches the interaction mode by pressing the manipulation button with the non-dominant hand. During the scale-rotate mode (SR-mode) the dominant hand's cursor is fixed and object positioning is disabled. The object can be scaled down or up by constricting or expanding the distance between the hands' cursors. To rotate the object, the user changes the angle between both cursors. When the user releases the manipulation button, the SR-mode is disabled and the positioning mode is enabled again.
Guiard's framework [15] suggests that users perform fine-grained interaction tasks with their dominant hand, whereas they use their non-dominant hand for coarse-grained interaction tasks. Here, the dominant hand is used for the selection task, which uses an absolute mapping (1:1) and requires precise motions. By contrast, the rotation and scaling tasks are supported by a distance-adjusted mapping factor, which is why the non-dominant hand is used for them.
The symmetric manipulation technique uses an additional midpoint cursor between both hands' cursors to interact with virtual objects. For the selection task, the midpoint cursor is moved onto the virtual object and the selection button is pressed with the dominant hand (manipulation start). To translate the selected object, the hands' cursors must be moved simultaneously in the desired direction. The object can be scaled down or up by constricting or expanding the distance between the hands' cursors. For the rotation task, the user's hands are moved like a steering wheel. The selected object is placed at the current position of the midpoint cursor when the selection button is released. With the symmetric technique, the manipulation tasks are performed simultaneously.
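The geometry behind both techniques mirrors a two-finger multitouch gesture: the midpoint of the two cursors drives translation, their distance ratio drives scaling, and the change of the angle between them drives rotation. The following sketch shows one way to derive these values from two cursor positions (our illustration, not the authors' implementation):

```python
import math

def bimanual_transform(left0, right0, left1, right1):
    """Derive midpoint, scale, and rotation deltas from the motion of two
    hand cursors (from positions *0 to positions *1), analogous to a
    two-finger multitouch gesture."""
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])
    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])
    midpoint = ((left1[0] + right1[0]) / 2, (left1[1] + right1[1]) / 2)
    scale = dist(left1, right1) / dist(left0, right0)   # expand -> scale up
    rotation = angle(left1, right1) - angle(left0, right0)  # steering-wheel motion
    return midpoint, scale, rotation
```

For example, keeping the left cursor fixed while moving the right cursor from (1, 0) to (0, 2) doubles the cursor distance (scale 2.0) and rotates the cursor axis by 90 degrees.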

Interaction Scaling
Interaction scaling defines the distance-adjusted precision of user input based on the user's physical navigation in front of a LHRD. With a distance-adjusted mapping factor we calculate the virtual motions (display pointer) based on the user's physical motions (input device) and the user's current display distance.
Therefore, we use the dimensions of the physical interaction space in front of the display wall and define dedicated interaction ranges. These interaction ranges depend on the current LHRD configuration (e.g., dimensions of the display surface, pixel resolution, display alignment). We calculate different viewing distances to define the interaction ranges at which fine-grained or coarse-grained interaction is appropriate.
Visual acuity is used to determine the maximum viewing distance d at which a user is able to perceive objects of size h separately [29]. By using the formula for the visual angle θ (Eq. 1), we can calculate the viewing distance d_pixel at which a user with normal vision (θ = 1/60°) is able to recognize individual pixels on an LHRD with a given pixel pitch pp (see Eq. 2). Further, we calculate the viewing distance d_block at which pixels merge, obtained by doubling the pixel pitch; at this distance fine details are no longer noticeable. Finally, we determine the viewing distance d_wall at which a user is able to see the entire display wall of size s (see Eq. 3).
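The visual-angle relation θ = 2·arctan(h / 2d) can be solved for the viewing distance, d = h / (2·tan(θ/2)). The sketch below applies it to the pixel pitch of 0.3 mm used later in the study setup; the names are ours, and d_wall is omitted because it additionally depends on the assumed field of view:

```python
import math

ARCMIN = math.radians(1 / 60)  # visual angle θ of normal visual acuity (1 arc minute)

def viewing_distance(object_size_m: float, theta: float = ARCMIN) -> float:
    """Distance d at which an object of size h subtends visual angle θ:
    θ = 2·arctan(h / 2d)  =>  d = h / (2·tan(θ/2))."""
    return object_size_m / (2 * math.tan(theta / 2))

pixel_pitch = 0.0003  # 0.3 mm
d_pixel = viewing_distance(pixel_pitch)      # ≈ 1.03 m: single pixels resolvable
d_block = viewing_distance(2 * pixel_pitch)  # ≈ 2.06 m: neighboring pixels merge
```

These values reproduce the close-up (1.03 m) and distant (2.06 m) range boundaries reported for the experimental setup.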
In order to calculate the distance-adapted mapping factor we use the following relative mapping functions.
Linear Mapping Function: A continuous linear factor is applied to the physical motions. The factor (mf) grows linearly with increasing distance to the display. At close-up and middle range a small mapping factor (mf < 1) is calculated; thus, larger physical movements are needed to perform small virtual movements. At distant range a greater mapping factor (mf > 1) is used, so that small physical movements result in larger virtual movements.
Exponential Mapping Function: The mapping factor decreases closer to the display. At close-up range a mapping factor of less than one (mf < 1) is calculated, which increases slowly; thus, very precise interaction is supported. At middle distance the mapping factor grows faster, so progressively smaller physical movements are needed to achieve the same virtual movements with increasing distance. At distant range a constant mapping factor (mf ≥ 1) is used to provide coarse interaction.
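Both functions can be sketched as distance-to-gain curves. The concrete coefficients below are our assumptions, chosen only so that mf < 1 at close-up range and mf ≥ 1 at distant range; the paper's actual parameters depend on the LHRD configuration:

```python
# Range boundaries in metres (the d_pixel / d_block distances of the study setup)
D_CLOSE, D_FAR = 1.03, 2.06

def linear_mf(d: float) -> float:
    """Mapping factor grows linearly with distance d; mf = 1 at mid-range
    (illustrative normalization, not the paper's exact function)."""
    return d / ((D_CLOSE + D_FAR) / 2)

def exponential_mf(d: float, mf_min: float = 0.25, mf_max: float = 2.0) -> float:
    """Mapping factor rises slowly near the display (very precise input) and
    saturates at a constant mf_max >= 1 beyond the distant range."""
    if d >= D_FAR:
        return mf_max
    return mf_min * (mf_max / mf_min) ** (d / D_FAR)
```

Evaluating either function at the user's current head distance yields the mf that scales motor-space motion into display-space motion.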
We use two methods to switch the precision levels between fast, imprecise interaction (absolute mapping) and slow, distance-adjusted precise interaction (relative mapping).
Manual Switching: The user can switch between absolute mapping and relative mapping (e.g., pressing an additional button). Here, the user is able to deactivate the distance-adjusted precision at each distance.
Automatic Switching: The selection task uses an absolute mapping and the other tasks are performed with a relative mapping, i.e., a task-based method is used. Additionally, the relative mapping is preserved if the user is within the close-up range at manipulation end (current distance ≤ close-up range). If no object is selected and the user leaves the close-up range then the relative mapping is changed to absolute mapping automatically (current distance > close-up range).
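The automatic switching rules above can be condensed into a small decision function. This is a sketch of the described behavior (function and parameter names are ours; the 1.15 m close-up threshold follows the study setup described later):

```python
def choose_mapping(task: str, distance_m: float, object_selected: bool,
                   close_up_range_m: float = 1.15) -> str:
    """Task-based automatic switching between absolute and relative mapping."""
    if task == "selection":
        return "absolute"          # selection always uses 1:1 mapping
    if object_selected:
        return "relative"          # manipulation in progress: keep precision
    # No object selected: relative mapping is preserved only within close-up
    # range; leaving it switches back to absolute mapping automatically.
    return "relative" if distance_m <= close_up_range_m else "absolute"
```

A transition from relative to absolute mapping would additionally animate the cursor position, as described above.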
Both methods use animated cursor positions to switch from relative to absolute mapping. The automatic switching requires no additional user input in contrast to the manual switching. This is useful in environments where limited user input is available.

User Study
We conducted an experiment to evaluate our interaction scaling techniques. Our goal was to determine how different accuracies and preferences affect user performance. We assume the user performance is increased by using interaction scaling (IS). We implemented a 2D puzzle solver application to compare the user performance with and without interaction scaling. The experiment task was inspired by a children's toy (the shape sorting cube).
The accuracy represents the tolerance range within which a manipulated object fits its target. The experiment differentiated between big targets, i.e., objects with coarse-grained accuracy (tolerance range ±2.5 mm), and small targets, i.e., objects with fine-grained accuracy (tolerance range ±1 mm). Thus, both distant and close-up interaction was supported. In the experiment we had three big targets (square, circle, star) and five small targets (square, circle, star, two triangles). The targets with coarse accuracy were displayed with a thicker silhouette.
Task. The participants were asked to manipulate 2D objects to fit into their corresponding 2D targets with respect to size, position, and orientation. They had to sort eight objects of the same color as the target container (see Fig. 1). The application only verifies the object adjustment when the user drops the manipulated object onto the desired target by releasing the selection button.
In the experiment we tested the following hypotheses for both manipulation techniques:
• H1: Manipulating objects is more efficient and effective with interaction scaling than without interaction scaling.
• H2: There is a difference between big and small targets in the number of attempts and the error rate. Furthermore, for small targets the error rate is lower with interaction scaling than without; the error rate of big targets will not be affected by interaction scaling.
• H3: Using interaction scaling reduces the physiological effort.

Apparatus
The display wall is a 6 × 4 tiled display. Each tile has a resolution of 1920 × 1200 pixels (DELL 2709W), resulting in a total resolution of 11520 × 4800 pixels (55 million pixels). We used a 12-camera infrared tracking system from NaturalPoint (OptiTrack); the cameras are mounted on a suspended traverse system. The tracking volume in front of the display wall is approximately 3.8 m wide, 3.0 m deep, and 3.0 m tall. The user holds a tracked Nintendo Wii Remote Controller (Wiimote) in each hand to interact with the application. We used the Wiimote's [A] button as selection button on the dominant hand (both techniques) and as manipulation button on the non-dominant hand (asymmetric technique). The Wiimote's [B] button was used as switch button on the dominant hand (both techniques). Furthermore, the user wears a tracked baseball cap to determine the current user-display distance. The test application was implemented using the VRUI toolkit [21].
We use a tiled display wall with bezels, thus, the test application is configured in such a way that no virtual content is occluded by the bezels (see Fig. 1). The virtual objects' size was limited from 0.5 to 25.4 cm. The size limits were tested experimentally and the objects were visible from any distance. At application start the objects were generated randomly with respect to size and position without overlapping each other.

Design and Procedure
In our user study we evaluated the asymmetric technique (asymT) and the symmetric technique (symT) with two distance-adjusted mapping functions and two methods of switching the precision level as described in Sect. 3.2. Thus, we combined the linear mapping function (lmf) and the exponential mapping function (emf) with the automatic (auto) and the manual (manu) switching method.
These four interaction scaling conditions were compared with a static mapping function without a cursor switching method (smf), see Table 1. This condition used one-to-one mapping of user motion to cursor motion (absolute mapping). In our preliminary studies this baseline condition performed well. The used mapping functions are illustrated in Fig. 2. The linear and exponential mapping functions generated similar mapping factors. However, the exponential mapping function supports more precise interaction with increasing distance than the linear function.
We calculated the viewing distances to determine the close-up range (d_pixel: 1.03 m) and the distant range (d_block: 2.06 m) for the exponential mapping function, based on our LHRD setup (pixel pitch 0.3 mm). In our setup the viewing distance d_wall (2.2 m) is similar to d_block; thus, we only used d_block. For automatic switching the close-up range was set to 115 cm, i.e., approximately d_pixel plus an additional tolerance to compensate for users' head tremor.
The experiment was performed by each subject with both manipulation techniques to obtain sufficient experimental data. Because the techniques use different interaction metaphors, only one technique was performed by each user per experimental day. Thus, the user's time exposure was limited to 45 min per experimental day. Each subject performed the task with all experimental conditions (within-subject design). The subject was informed which condition was activated. The condition smf was performed as the first and last trial. Each switching method was performed with both distance-adjusted mapping functions before the switching method was changed, to avoid mental overload (example presentation order: smf, auto-lmf, auto-emf, manu-lmf, manu-emf, smf). The order of experimental conditions was counterbalanced between subjects. The experiment consisted of two parts: a training phase and the test scenarios (trials). At the beginning, the subjects performed a 10-min tutorial to practice the interaction technique with the automatic and manual switching methods. Afterwards, the trial started. When the subject selected the first object, the application timer was started. The subject manipulated the objects according to the current experimental interaction technique. The trial finished when the last object was sorted correctly; the application timer was stopped and all data was written to a CSV file.
For each switching method the subject filled out the NASA Task Load Index [16], a questionnaire to determine physiological effort. In addition, the subjects filled out a questionnaire reporting demographic information (before the study) and subjective data on their preferences (after the study). Furthermore, we captured each participant's 3D position to gain insight into physical navigation.
Participants. The experiment had 24 voluntary participants (2 females and 22 males); two male subjects could not finish the tasks and were excluded from the data set. The participants were college students (9 subjects) or staff members (13 subjects) from the university. Their ages ranged from 22 to 42 years, with an average age of 30 years. One subject was left-handed, and all participants reported normal or corrected-to-normal vision (45 %). Seven participants reported experience with interaction in LHRD environments; the remaining 15 reported no experience.

Results
This section reports the results of the experiment. We did not remove outliers from the data set, in order to represent different user types. Hence, the precondition of normally distributed data was violated in some cases (non-parametric tests were applied). To analyze user performance we ran a repeated-measures ANOVA (Greenhouse-Geisser corrected) or a Friedman ANOVA on ranks on the dependent variables. Post hoc pairwise comparisons (paired t-test or Dunn's method) were performed with Bonferroni correction. User performance is usually assessed by efficiency, effectiveness, physiological effort, and satisfaction [17]. In our study the efficiency and effectiveness were determined by the measured manipulation time, number of attempts, and error rate. The physiological effort was determined by the number of button events, as the user's motor effort, and by the subjective workload rating. The workload was determined as Raw TLX [16] without weighting scales. The NASA Task Load Index (TLX) uses six subscales (mental/physical/temporal demand, performance, effort, frustration) to determine the total workload. The overall TLX workload represented satisfaction. We used z-score transformation to combine the different metrics. Figure 3 shows the mean z-scores of efficiency/effectiveness, physiological effort, and overall user performance. (The tutorial used the same task, whereby the tolerance range of big/small targets was increased to ±3.8 mm and ±2 mm, respectively.)
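Combining metrics with different scales (seconds, attempt counts, error rates, TLX ratings) via z-score transformation works as in this short sketch (the example values are invented for illustration):

```python
import statistics

def zscores(values):
    """Transform raw measurements to z-scores so that metrics on different
    scales can be averaged into a combined performance score."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

# e.g., hypothetical manipulation times (s) across five conditions
times = [42.0, 35.0, 50.0, 33.0, 40.0]
z = zscores(times)  # mean 0, standard deviation 1 by construction
```

After transformation, each metric contributes on the same scale, so the mean z-score over metrics can serve as an overall performance value.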
Due to the participants' expectation in the first trial and their training effect in the last trial, we combined both trials of condition smf by arithmetically averaging the measured values.
We analyzed the efficiency/effectiveness values with respect to the different accuracies of the target types. The efficiency/effectiveness for small targets differed significantly (asymT: χ²(4) = 14.10, p = 0.007; symT: χ²(4) = 14.66, p = 0.005) between smf and manu (p < 0.05) for both techniques. We found no differences for big targets.
The individual efficiency/effectiveness parameters were analyzed in detail. We measured the total manipulation time in seconds per object, i.e., the times were accumulated during object manipulation. To sort in an object, the required attempts were counted per object according to the target parameters (position, size, orientation). We calculated the relative error rate of the corresponding target as the average number of failed attempts divided by the total number of attempts. That means: the fewer failed attempts, the lower the target's error rate.
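Under the assumption that every object is eventually sorted in (so all but the last attempt per object are failed attempts), the error-rate computation can be sketched as follows (our reading of the description, not the authors' code):

```python
def error_rate(attempts_per_object):
    """Relative error rate of a target: failed attempts divided by the total
    number of attempts, summed over the target's objects. Assumes each
    object ends with exactly one successful attempt."""
    failed = sum(a - 1 for a in attempts_per_object)  # all but the final attempt failed
    total = sum(attempts_per_object)
    return failed / total

# Three objects sorted with 1, 1, and 2 attempts -> 1 failed of 4 total attempts
rate = error_rate([1, 1, 2])  # 0.25
```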
Manipulation Time. On average, the subjects needed less time to manipulate the big targets than the small targets, as expected (see Fig. 4). For big targets we found a significant difference between the conditions (asymT, big: F(4,84) = 2.51, p = 0.048; χ²(4) = 9.64, p = 0.047). However, a pairwise comparison could not show which conditions differed. For small targets we only found a significant difference for symT (χ²(4) = 16.2, p = 0.003) between smf and the conditions of manu (p < 0.05).
Number of Attempts. The subjects needed fewer attempts for the big targets than for the small targets. The number of attempts differed significantly between conditions for small targets (asymT: χ²(4) = 27.44; symT: χ²(4) = 19.02; both p ≤ 0.001) but not for big targets. A pairwise comparison identified differences for small targets between condition smf and both conditions of manu (asymT and symT), as well as between smf and auto-lmf (asymT, p < 0.005) or auto-emf (symT, p < 0.05).
Error Rate. The lowest error rate was achieved with manual switching for big and small targets (see Fig. 4). As expected, the big targets achieved a low error rate with the static mapping function, whereas the small targets produced a higher error rate. For each condition the error rate differed significantly between big and small targets (all p < 0.005 for asymT and all p < 0.001 for symT, Wilcoxon signed-rank test). Furthermore, the error rate of small targets differed significantly between smf and manu for both techniques (asymT: p = 0.027; symT: p = 0.01) and between smf and auto-lmf for asymT (p = 0.002).
H2. We partially confirmed H2. By means of big/small targets with different accuracy we showed that accuracy affects user performance. We found a difference in the required attempts between the target types, whereby the big targets were solved with fewer attempts. As expected, the error rate of big targets was not affected by interaction scaling. In contrast, the error rate of small targets was positively affected by manual switching with the symmetric technique. For the asymmetric technique, the error rate of small targets was positively affected by interaction scaling with manual switching, and by automatic switching with the linear mapping factor. Since the exponential mapping factor decreases quickly when stepping forwards, clutching was sometimes required for scaling/rotation tasks.
We found interesting results regarding the maximum number of attempts (≤ 2) and a low error rate (≤ 20 %); see Table 2. In total, with asymT the subjects solved 60 % of the small targets with a maximum of two attempts using interaction scaling, in comparison to 45 % of small targets without interaction scaling. With symT, the participants solved more than 52 % of the small targets with one or two attempts using interaction scaling and 49 % without interaction scaling. With symT only about 30 % of the small targets were solved with a low error rate, with or without interaction scaling, except for manu-emf. In particular, the participants had problems solving the small rotated square. With asymT over 38 % of the small targets were solved with a low error rate using interaction scaling, while only 26 % were solved without interaction scaling. One exception was condition manu-emf, where half of the small targets were solved with a low error rate.
H1. We partially confirmed H1. Our findings show that participants required shorter manipulation times, fewer attempts, and a lower error rate for object manipulation with both distance-adjusted mapping functions combined with the manual switching of precision levels. In addition, the subjects performed the tasks more efficiently with automatic switching and the linear mapping factor using the asymmetric technique. Especially the required high accuracy of ±1 mm benefited from interaction scaling. Nevertheless, the coarse accuracy of ±2.5 mm yielded good performance with and without interaction scaling.

Physiological Effort
The motor effort (i.e., movements of fingers and arms) differed between the manipulation techniques due to the interaction metaphors used. For the symmetric technique, the number of manipulation operations was equal to the number of object selections, because the tasks are performed simultaneously. For the asymmetric technique, the number of manipulation operations was calculated from the count of object selections and the count of SR-mode activations. The total button events (presses of the switch, selection, and manipulation buttons) were counted for each trial.
Workload. In general, the participants assessed the overall workload of both manipulation techniques as roughly the same. However, the workload of auto was rated higher than that of manu. Frustration and effort were rated highest for smf. The participants reported a higher temporal/physical demand and frustration for symT, in particular with auto. Due to the simultaneous manipulation tasks of symT, more physical demand was required, which increased the frustration level (e.g., readjustment of scaling during positioning). In contrast, the mental demand, performance, and effort were rated higher for asymT. Here, the switching of interaction mode required additional mental demand and effort but reduced the frustration level (e.g., positioning is preserved during scaling). From the participants' point of view, the manual switching of precision levels required no additional physical/mental demand or effort. Based on these results, the subjective workload rating was calculated using the scales of physical demand, mental demand, effort, and frustration.
H3. In general, the physiological effort was higher for condition smf than for auto or manu (see Fig. 3). However, a significant effect was only found for asymT (F(4,84) = 3.98, p = 0.005) between smf and manu (p < 0.01) and between smf and auto-lmf (p < 0.05). Thus, H3 was only confirmed for the asymmetric technique.
The workload of manual switching was rated lower by the subjects than that of automatic switching or no switching. However, the number of button events was similar for automatic and no switching. Furthermore, we could not find a significant difference in physiological effort between the manipulation techniques. We assume that the higher number of button events does not affect the physiological effort of the asymmetric technique.

User Performance and Preferences
User Performance. There was a significant effect of condition on user performance (χ²(4) = 19.82 for asymT and χ²(4) = 20.18 for symT, both p < 0.001) between smf and manu (see Fig. 3). For symT there was an additional significant difference between smf and auto-emf (all p < 0.05). We found no effect of manipulation technique or switching method. In general, user performance was better with manu than with auto at asymT. This difference is due to the poorer efficiency/effectiveness results and the increased physiological effort of auto. At symT, auto also obtained, on average, poorer efficiency/effectiveness results than manu; however, the physiological-effort values were roughly the same.
Preferences. In general, manual switching was preferred by the participants (68 % at asymT and 77 % at symT), whereas automatic switching was preferred by 7 subjects at asymT and 5 subjects at symT. Only 3 subjects reported liking both switching methods for the symmetric and asymmetric techniques. Regarding the manipulation techniques, asymT was preferred by 10 subjects, symT by 9 subjects, and both by 3 subjects. The presentation order of the techniques (experimental day 1 vs. day 2) had no impact on the participants' preferences.
The preference of switching method depends on the manipulation technique, whereas the preference of manipulation technique depends on the individual. Many participants perceived the symmetric interaction technique as intuitive, i.e., natural, easy to handle, and less mentally demanding. Nevertheless, the simultaneous manipulation operations complicate precise scaling and positioning at the same time. Furthermore, some users reported that the asymmetric interaction technique became even easier for them after a training period: the separation of the interaction tasks enabled more precise manipulation, and they perceived this technique as less physically demanding in the long term. Consequently, automatic switching is preferred rather with the symmetric technique than with the asymmetric technique.

Physical Navigation
We observed three behavior patterns of physical navigation (see Fig. 5). First, some subjects used the entire physical interaction space and manipulated the objects sequentially by stepping forwards/backwards (vertical movements). Second, some subjects preferred a constant interaction distance to the display and only stepped forwards when an attempt failed (horizontal movements). Third, some subjects followed a strategy of manipulating all objects from a distance first and performing fine adjustments at close-up range afterwards (tactical movements). With the asymmetric technique and automatic switching, participants using vertical movements performed the tasks more efficiently, whereas participants using horizontal movements attained better performance with manual switching. With the symmetric technique, participants using vertical movements performed the tasks more efficiently and effectively with both manual and automatic switching, whereas the tactical movements benefited from the static mapping function.
Using the users' physical navigation profiles, we also identified three interaction ranges in which the participants interacted frequently (i.e., close range: udd < 0.9 m; close-middle range: 0.9 m ≤ udd ≤ 1.5 m; middle-distant range: 1.5 m < udd < 2.0 m). Far distances (udd > 2.0 m) were used rarely. About half of the participants interacted in the close-middle range (12 subjects), whereas the remaining participants preferred the close range (3 subjects) or the middle-distant range (7 subjects) at asymT, and the close or middle-distant range (5 subjects each) at symT. We observed that subjects in the close-middle range were more efficient with automatic switching, whereas manual switching was more suitable for the other preferred ranges. Here, clutching could be reduced by using coarse-grained precision (absolute mapping) at close range. However, we could not find a relation between user performance and movements or preferred distance.
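The three interaction ranges can be expressed as a simple classifier. The distance thresholds below are the ones identified in the navigation profiles; the function name and range labels are illustrative:

```python
def classify_interaction_range(udd: float) -> str:
    """Map a user-display distance udd (in metres) to one of the
    interaction ranges identified in the navigation profiles."""
    if udd < 0.9:
        return "close"           # udd < 0.9 m
    if udd <= 1.5:
        return "close-middle"    # 0.9 m <= udd <= 1.5 m
    if udd < 2.0:
        return "middle-distant"  # 1.5 m < udd < 2.0 m
    return "far"                 # rarely used (udd > 2.0 m)
```

Such a classifier could, for instance, drive the choice of switching method per range (automatic in the close-middle range, manual otherwise).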

Discussion and Conclusion
We evaluated how different accuracies affect user performance on distance-aware bimanual manipulation techniques. To consider user preference during object manipulation, we compared a direct mapping approach with distance-adapted mapping approaches that use an automatic task-based or a manual switching method of precision levels. Table 3 outlines the main findings of our user study.
In summary, user performance is increased by using interaction scaling in LHRD environments. The results have shown that users are able to effectively solve fundamental manipulation tasks with distance-aware interaction techniques. In fact, interaction scaling improved the performance significantly if a high accuracy (2 mm) was required, but it had no effect at lower accuracy (5 mm). The participants were able to manipulate the objects more accurately with the asymmetric technique than with the symmetric technique. Most of the participants preferred a manual switching of precision levels; however, half of the subjects solved the tasks equally well with automatic switching.
Basically, the best performance results for the asymmetric technique were provided by the automatic switching method with a linear mapping factor, while the manual switching method with a linear or exponential mapping factor provided the best performance results for both techniques. As the manual switching of precision levels often required a minor adjustment after the coarse adjustment, a smaller (exponential) mapping factor was more efficient. With automatic switching of precision levels a continuous adjustment was performed, so a larger (linear) mapping factor was appropriate for effective motor movements (less clutching).
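The contrast between the two mapping factors can be sketched as follows. The concrete formulas, constants, and function names are illustrative assumptions, not the implementation evaluated in the study; the only property taken from the study is that the exponential factor stays smaller (finer) than the linear one near the display:

```python
import math

def mapping_factor(udd: float, d_max: float = 2.0, kind: str = "linear") -> float:
    """Distance-adjusted control-display gain in [0, 1]: small near the
    display (precise input), large at a distance (less clutching)."""
    t = min(max(udd / d_max, 0.0), 1.0)  # normalised user-display distance
    if kind == "linear":
        return t
    # Exponential variant: grows more slowly near the display, so a few
    # steps closer already yield a much finer gain than the linear factor.
    return (math.exp(t) - 1.0) / (math.e - 1.0)

def scaled_displacement(hand_delta: float, udd: float, kind: str = "linear") -> float:
    """A hand displacement is scaled by the factor before being applied."""
    return hand_delta * mapping_factor(udd, kind=kind)
```

At mid distance (udd = 1.0 m) the exponential factor (≈ 0.38) is smaller than the linear one (0.5), matching the observation that it suits the minor fine adjustments after manual switching.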
Based on our findings, we recommend considering the user's preferences and navigation behavior in distance-aware interaction, such as the user's preferred distance and walking movements.

Table 3. Summary of main findings.

Efficiency/Effectiveness
• Improvement of efficiency/effectiveness:
  - Interaction scaling with automatic/manual switching (asymmetric technique)
  - Interaction scaling with manual switching (symmetric technique)
  - Accurate object manipulation with the distance-aware asymmetric technique

Physiological Effort
• The workload of manual switching was rated lower despite a significantly higher number of button events (both techniques)
• The higher number of button events had no effect with the asymmetric technique

User Performance
• High accuracy (2 mm) benefited from interaction scaling
• Improvement of user performance:
  - Manual switching with linear/exponential mapping factor (both techniques)
  - Automatic switching with linear mapping factor (asymmetric technique)

Preferences
• Interaction scaling with manual switching was preferred
• The preference of manipulation technique depends on the individual; the switching preference depends on the manipulation technique

Physical Navigation
• Observed behavior patterns (vertical, horizontal, and tactical):
  - The distance-aware symmetric technique was efficient with vertical movements
  - The asymmetric technique was efficient with vertical movements (automatic switching) or horizontal movements (manual switching)
• Automatic switching was efficient in the close-middle range; manual switching was suitable for the other ranges

In principle, the distance-adjusted mapping factor is appropriate, but its calculation should not be the same over all interaction distances. For instance, at close range the possibility of coarse precision has to be provided by manual switching of precision levels. In contrast, if the user prefers to interact at a distance, then an exponential mapping function should let the user reach a high precision level with a few steps closer to the large display.
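This distance-dependent recommendation could be sketched as a small policy function. This is our reading of the recommendation, not part of the evaluated system; the function name, return labels, and the linear default at close range are hypothetical:

```python
def precision_policy(udd: float, manual_coarse: bool = False) -> str:
    """Illustrative distance-dependent precision policy: coarse (absolute)
    mapping is available on manual request at close range, while distant
    interaction uses an exponential distance-adjusted factor so that a few
    steps closer already reach a high precision level."""
    if udd < 0.9 and manual_coarse:
        return "absolute"           # coarse-grained 1:1 mapping on request
    if udd < 0.9:
        return "fine-linear"        # assumed default at close range
    return "fine-exponential"       # distant interaction
```

A semi-automatic system could call such a policy whenever the tracked user-display distance changes, while still letting the user override it.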
The study has shown that physical navigation is used differently by different users; e.g., their interaction distances vary. Our results agree with Kopper et al. [20] that users mainly interact from a constant distance during manipulation tasks (horizontal movements), but also with Ball et al. [3] that users prefer to walk during navigation tasks (vertical movements). To define the interaction scaling, we recommend considering various parameters, such as the parameters of the physical interaction space (e.g., input device, dimensions of the interaction space), objective characteristics of the user (e.g., body size, physiological limitations), and individual subjective characteristics (e.g., preference of bimanual technique, preferred interaction distance).
The idea of interaction scaling with task-based automatic switching between coarse and precise interaction is only partly suitable in LHRD environments. We recommend using a semi-automatic distance-aware approach to assist users: the user controls the precision of the interaction, and at the user's request the system performs an adjustment automatically.
In our study we identified behavior patterns of physical navigation; however, these patterns were partially inaccurate. In future work we plan to investigate the user's physical navigation during object manipulation on an LHRD in more detail. Afterwards, we aim to generate user profiles that improve distance-aware interaction by considering individual properties.