
System/Designer User Goals
1. Avoid biases
2. Information and analysis coverage
3. Efficiency
4. Detect if user needs help/reassurance
5. Understand how users use a system
6. Storytelling
7. Delivering different data (in different ways) for different types of users
8. User happiness ("the system gets me")

Most obvious is (4) detecting if a user needs help or reassurance; if the stress is due to issues with understanding the system or the data, the system may wish to reduce the rate of information flow to a simpler level, or perhaps pop up some tips for how to continue with the analysis. Reducing the rate of information flow therefore also affects (2) information and analysis coverage, while redirecting the analysis path affects (1) avoiding biases and (7) delivering data for different types of users (as a stressed user is different from a non-stressed user).
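The escalation described above could be sketched as a simple rule-based policy. This is a hypothetical illustration only: the stress signal, thresholds, and intervention names are assumptions made up for the example, not part of any existing system.

```python
# Hypothetical sketch of an intervention policy for goal (4), detecting
# whether a user needs help or reassurance. All thresholds and action
# names below are illustrative assumptions.

def choose_intervention(stress_level, understands_system):
    """Map an estimated stress level (0.0-1.0) to an intervention."""
    if stress_level < 0.3:
        return "none"                      # user is fine; do not interrupt
    if not understands_system:
        return "show_usage_tips"           # confusion about the system itself
    if stress_level < 0.7:
        return "reduce_information_rate"   # simplify the flow (affects goal 2)
    return "suggest_alternate_path"        # redirect analysis (affects goals 1 and 7)

print(choose_intervention(0.5, True))  # reduce_information_rate
```

A real system would replace the hand-set thresholds with a learned model of the user, but the same tiered structure (do nothing, hint, simplify, redirect) applies.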

These interventions in response to user behaviors could be handled by either the front-end or the back-end of a system. There is, of course, a tradeoff inherent in these intervention options:

Front-end: There could be different levels of visibility for the intervention (a spectrum from subtle hints to locking out some system functionality), but each necessitates an interruption to the user's workflow.

Back-end: Low risk and no obvious interruption to the user.

We also discussed how different user models can be classified and characterized. We decided that there are three types of predictive user models:
1. Understand intent: These user models determine the interests of a user based on their interactions.
2. Predict future user actions: These user models predict future interactions for users during their analysis process (e.g., to reduce system response time by preprocessing future analysis steps, or to suggest analysis routes to a user).
3. Classify user characteristics: These user models can assist with post-hoc analysis of system behavior for future versions (e.g., learn what types of users most often use the system so that menus and toolbars can be organized for ease of access to common features).
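The second model type (predicting future user actions) can be illustrated with a minimal sketch: a first-order Markov model over logged interaction events. The event names and log format here are assumptions for the example only, not a prescribed interface.

```python
from collections import Counter, defaultdict

class NextActionModel:
    """Toy first-order Markov predictor over user interaction events."""

    def __init__(self):
        # transitions[prev][next] = how often `next` followed `prev`
        self.transitions = defaultdict(Counter)

    def train(self, interaction_log):
        """interaction_log: a list of action names in the order they occurred."""
        for prev, nxt in zip(interaction_log, interaction_log[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, last_action):
        """Return the most frequently observed next action, or None if unseen."""
        counts = self.transitions.get(last_action)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

model = NextActionModel()
model.train(["search", "filter", "zoom", "search", "filter", "annotate"])
print(model.predict("search"))  # filter
```

A system could use such a prediction to preprocess the likely next analysis step before the user requests it, which is exactly the response-time motivation noted above.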

Using Semantic Interaction to Model Characteristics

Following our discussion on possible characteristics and system goals that might be included in future systems, we turned our attention to how a system could obtain this information. The example of giving a user a pre-survey to understand the user is useful but also trivial and potentially misleading. Our discussion instead focused on whether the Semantic Interaction paradigm, intended to infer user intent, could be adapted to learn user characteristics. For example:

Semantic Interaction:
1. Capture an interaction
2. Interpret intent
3. Update the model

Modeling Characteristics:
1. Capture an interaction
2. Interpret characteristics
3. Update the model

Open questions included: the struggle to balance instruction vs. freedom to explore your interfaces; the observation that capturing low-level parameters is much more tractable than predicting the user's intent; and whether we can specify how many levels of intent we want, or just low-level and high-level.

Participants: Sara Alspaugh (Splunk Inc.)
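The adapted loop (capture an interaction, interpret characteristics, update the model) can be sketched in a few lines. This is a toy illustration under stated assumptions: the interaction fields, the dwell-time heuristic, and the characteristic labels are all invented for the example.

```python
# Minimal sketch of the capture -> interpret -> update loop, with a toy
# "characteristics" model in place of intent inference. The dwell-time
# heuristic and labels below are illustrative assumptions only.

def interpret_characteristics(interaction, profile):
    """Update a running user profile from one captured interaction."""
    # Step 2: interpret characteristics from low-level parameters,
    # which is more tractable than inferring high-level intent.
    if interaction["dwell_time_s"] > 10:
        profile["deliberate_count"] = profile.get("deliberate_count", 0) + 1
    else:
        profile["rapid_count"] = profile.get("rapid_count", 0) + 1
    # Step 3: update the model (here, a simple label over the counts).
    deliberate = profile.get("deliberate_count", 0)
    rapid = profile.get("rapid_count", 0)
    profile["style"] = "methodical" if deliberate >= rapid else "exploratory"
    return profile

profile = {}
for event in [{"dwell_time_s": 2}, {"dwell_time_s": 15}, {"dwell_time_s": 1}]:
    profile = interpret_characteristics(event, profile)  # step 1: capture
print(profile["style"])  # exploratory
```

The structural point is that only the interpretation step changes between the two loops; capture and update are shared, which is what makes the adaptation plausible.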