Towards a Computational Model of Information Trust

Information has been an essential element in the development of collaborative and cooperative models. From decision making to the attainment of varying goals, people have been relatively adept at making judgments about the trustworthiness of information, based on knowledge and understanding of a normative model of information. However, recent events, for example in elections and referenda, have stretched people's ability to measure the veracity and trustworthiness of information online. The result has been an erosion of trust in online information, its source, its value, and the ability to objectively determine the trustworthiness of a piece of information, a situation made more complex by social networks, since social media have made the spread of (potentially untrustworthy) information easier and faster. We believe this has exacerbated the need to assist humans in judging the trustworthiness of information. We have begun working on a social cognitive construct: a trust model for information. In this paper we outline the problems, sketch the beginnings of our trust model, and highlight future work.


Introduction
Information does not exist in a vacuum; how it is perceived and used is influenced by a number of social, cultural, and historical factors. Moreover, information is of no value, or not worth the investment of time and money (for example in making business decisions, or decisions in elections), if it is not relevant, does not have the right amount of detail, cannot be easily stored in a way that allows effortless access, or cannot be easily understood by the end user [17,29,13].
But we have a problem, not necessarily new but increasingly challenging: we are seeing 'information only subsistence,' which responds to an issue by simply offering more information [8]. This has resulted in a tendency to share information even when its validity cannot be vouched for, or when the person sharing it does not believe it. Regardless of the validity of information, people still share it because it serves a narrative, a means to manipulate rather than to inform, a source of social influence [4]. This situation is aptly demonstrated by the recent political and business climate in the West, which has added relatively new lexicons like "fake news" and "alternative facts." The consequences of deceptive and misleading information can be far-reaching for governments, citizens, business institutions, data professionals, and designers. It can create an atmosphere of mistrust, distrust, confusion, or panic. It can influence decisions and damage reputations. Information agents or brokers may find it difficult to use information, or may seek alternative and less reliable sources of information because of the air of uncertainty. The result is an erosion of trust in information, and potentially a fragmentation and polarisation of societies.
What is needed is a way to measure the veracity and trustworthiness of information. As a first step, we propose an information trustworthiness model based on computational trust [16].
The paper is organized as follows: in Section 2 we examine related work, before presenting our proposed model in Section 3 and worked examples in Section 4. We conclude with limitations, and ongoing and future work, in Section 5.

Information is Social
Information, and by extension misinformation, has a social undertone, because it is seen as an observation of individuals or groups formed through cultural and social observance; as Tuominen and Savolainen put it, it is "a communicative construct produced in a social context" [33]. To understand what constitutes information, it is imperative to explore information behavior in a social context; that is, the conversations among a social group that individuals or groups use as an index to construct some semblance of reality [8]. People are by nature drawn to misinformation and tend to share such information even when belief in it is nonexistent, partly because of the promise it holds and partly because of psychological predisposition. As social beings we adopt a collective approach to order, collaboration, cooperation, and knowledge [8,13,29]. Hence the need to look at the social side of trust in tackling the social undertone associated with misinformation.

So, Why Use Trust?
Technological advances have made communication seamless, and connectivity and information sharing an essential part of the proper functioning of many systems. Technology trends require openness and interaction between systems, which are continuously increasing in number, complexity, and functionality, giving rise to management, privacy, and security challenges [15].
Trust management has evolved over the years, with computational trust models focusing on different application and propagation areas. Computational trust and its related associations have drawn from social, cultural, psychological, and many other aspects of relationships, trying to model the best of these relationships computationally. Trust as a computational concept is essential in understanding the thought process behind choice, options, and decision making in human-computer interactions, especially in situations where there is a measure of risk [16].
Privacy and security are at the vanguard of many information systems; compromise and abuse of these systems bring about distrust. Trust research extends over a wide range of concepts and computational environments, focusing not just on trust formalization and management but also on trust's siblings: mistrust, distrust, regret, and forgiveness [16].

Information Behavior and Trustworthiness
Various information behavior models suggest a normative model of information as true, complete, valid, able to be relied on as correct, and from a trusted source [13]. For instance, census data from Statistics Canada (or indeed any governmental statistics agency) can be regarded as valid and from a trusted source, and can be reliably used for planning purposes [27,34]. As an economic tool, such data should carry more trusted weight than information sourced from third parties or social media platforms [13,29,27,6]. Other normative information behavior prescribes trusted information as timely, in the sense that it should be from a precise time period [13,29]; for example, when analyzing census data for planning and developmental purposes it is paramount to look at current or the most recent figures.
Other factors that add value and trustworthiness to information include its accuracy, consistency, and completeness. Despite the best efforts of information scientists on the nature of information [13], and work on information literacy behaviour, misinformation and disinformation still permeate social networks [17,13]; social media platforms like Twitter and Facebook have helped in the spread of inaccurate information [13,29].
Computational trust [21,11] is important in understanding the thought process with regard to choice, options, and decision making in human and computer interactions, especially in situations where there is a measure of risk [16,12] (which is to say, all trusting decisions [22]).
There is a need for inclusive and context-aware information literacy behavior [13]. Our goal is to incorporate the characteristics of information (reliability, validity, and importance) into a trust model. Depending on the context, the model will also factor in the reputation of a source, the value of the information, its provenance, and cues to credibility and deception. The aim is to enable agents to make judgments and situational decisions about the trustworthiness of information.

Significant Prior Research
The problem of trust in information has been around for a while. In many early civilizations, the information conveyed by an emissary on behalf of an authoritative principal could be trusted completely, because any omission would leave the conveyor at the mercy of his principal. Such measures are well beyond borderline extreme; in contemporary times, worries about information trust, quality, and problems associated with information are usually addressed by offering more information, an approach which supports the narrow-focus design paradigm. This situation has only been exacerbated as the speed of information has outpaced the speed of human travel; information has taken on a life of its own, constantly evolving with technology, and is now at a stage where the information lifecycle is becoming independent of human intervention [8].
People are adept at dealing with conflicting information, to an extent, based on judgment derived from knowledge, either what was known before or the most current knowledge [17]. Humans are social. The information DNA (InfoDNA) architecture takes advantage of this trait by integrating social knowledge into distinct parts of information, bridging the divide between what agents and what people can do with information, and in doing so providing important added metainformation. Marsh's InfoDNA paradigm follows the information-as-agent paradigm of ACORN [19], where agents deliver data as well as metadata relating to the information. The InfoDNA architecture added a society's estimate of trustworthiness, to an extent: a dual trust rating and a ranking system geared towards people, arguably the first of its kind to be interested in the value of information based on feedback even when it is contrary, using trust to foster collaboration and cooperation [17,19].
Other approaches have looked at the problem of misinformation computationally, with regard to trust. Much research on information trustworthiness does not put a great deal of emphasis on the factors that lead to hoaxes, misinformation, and deviant information behavior. Silverman [29] looks at the problem from a journalistic perspective (the role not just news organizations but also technology companies play in driving the narrative in a bid to boost traffic and social engagement) as well as at possible solutions. Among the 'bad practices' identified is the tendency of news sites to offer little or no rudimentary proof of claims, instead relating such claims to other media reports which in turn did the same thing. The origin of a transmitted assertion is thus buried among chains of links, and on proper examination the original story might have originated from an unreliable entity or social media platform, whose initial goal might have been to gain traction (clicks) by getting mainstream media outlets to refer to their content.
Information, like technology, does not occur in a vacuum, and is often socially influenced or constructed. The authors of [8] highlight this and propose the adoption not just of balance and perspective in making sense of information, but of breadth and vision, to feature the cues that lie at the periphery of information.
The world of information is complex, and this is exacerbated by technology. Clues and cues adopted to restrain deviant information behavior are insufficient and lacking in some instances. Hence the need for a trust model that not only considers the physical environment which precipitates information, but also incorporates the social context of information.

Information Trust: The Model
It is widely assumed that technology, and to a greater extent information, occurs in a vacuum [8]. Often overlooked is the milieu (tradition, environment, and judgment) embedded in information, which acts as an important variable in the successful dissemination of information [13,29,8]. The success of information as an integral part of social relations can be attributed to the wide-ranging support of strong communities and institutions [8]. The human factor in alternate information behavior can be overstated, and squarely blamed for the recent upsurge in misinformation, but often overlooked is the inescapable intertwining of information and individuals as part of a copious social matrix. Often enough, proffered solutions assume that the best approach to tackling misinformation, and by extension the challenges of technology, lies in more technology, a Moore's-law approach [8,10], forgetting or deliberately neglecting the need for balance, perspective, and clarity by failing to look beyond the edge, at context, social cognition, and resources [13,29,8]. That is, focusing on the pebble and the ripples but not on the lake [8].
The social context [8] surrounding information is important in understanding information behavior, and should be factored into the design process of information technology solutions. But this is often neglected; the environment, human habits, and judgment play an important role in the successful acceptance of a solution. Information is inevitably embedded in social relations; information does not work unless supported by viable communities and institutions [8,13,29]. In as much as the social context of information supports the formation of a shared disposition, a community of trust, there has to be a way for information agents to express their comfort level based on trust ratings. Here lies the concept of Device Comfort, introduced in earlier work [18,30,20], a paradigm that aims to empower the device to better reason about its owner and the environment by factoring in context and relationship. The focus here is on the device-human relationship, which is aptly represented in device customization, from selecting screen savers to ring tones and even the use of various third-party applications. The methodology builds on an augmented notion of trust between the device and its owner, to better enable the device to advise, encourage, and potentially act for its owner in everyday interactions [17,16,18,20,30], from information management (in this context, how to relate a trust rating on a piece of information by expressing comfort or discomfort) to personal security.
There is usually a larger context to the different information delivery methods; as social beings we look for meaning and understanding in clues and cues, gaining insight from unexpressed meaning [8]. Unfortunately, the contemporary information paradigm is so narrowly construed that it leaves us with very little insight. Hence the need for an information model that takes into consideration the resources at the edge of information, not limited to physical constraints, but embracing the social context of information, the institutions and communities that shape human societies, and our understanding of the challenges of deviant information behaviour: not getting caught up in the noise, and thus avoiding addressing the problem by simply offering more [8,13,29,10].
In using mathematical notation to formalize a social cognitive construct like information behavior and trust, we are trying to allow for tractability, capturing the cost and benefit of information: the cost to an organization or entity of acting or making decisions based on misleading data is considerable, taking account of the effort and resources, emotional and physical, that go into such decisions, and it can be simulated mathematically. Since many resources go into the decision-making process, great opportunities are expected, and when they are not met the cost is high; hence trust is affected [21,16,3]. Our approach is not new; there is a long-established tradition of trying to formalize social concepts using mathematical methods. George David Birkhoff [7,26], an eminent mathematician of the early 20th century, formalized aesthetic measure, which was built upon by Max Bense and Abraham Moles [5,24] in developing information-theoretic aesthetics. Scha and Bod took it further by postulating the integration of other ideas from psychology and computational linguistics to form a foundation for the development of robust formal models [28]. Our model builds on elements of Birkhoff's model [7], drawing parallels by integrating aspects of context, author, and observer (in this case, recipient) in information. We see the value of information stemming from a symmetry of the elements that help shape information, the social context: environment, tradition, communities, background, history, shared knowledge, social resources, institutions. These items are not irrelevant; they provide valuable balance and perspective. The amount of content matters too: deviant information behavior is often characterized by a lack of crucial information, and by a combination of information characteristics, importance, and utility [8,29,13,6]. Information does not happen spontaneously on its own; it is a process of selection and reflection, made and shaped by a number of factors, a weaving and shaping process in concordance with space and priorities, in the context of the medium and its audience, which requires harmony [23]. Drawing a parallel with Birkhoff's aesthetic measure M = O/C [7], this harmony (H) is introduced as the quotient of the order (o) of the amount and quality of content, and the number of elements that help shape the information, its complexity (c):

H = o/c
Information wants to be free, having a life of its own and being used for a variety of purposes, from business decisions to storage and retrieval. It is paramount that information is useful and helps in the decision-making process. Society has the challenge of speaking for itself about its credibility, as does information, and the response to distrust tends to follow Moore's law: more information in an attempt to address the problem. This is ironic, since we do not add to our own standing by averring to our trustworthiness; we let our character speak for us. The qualities of good information should likewise speak for themselves [8,13,29]. X represents the characteristics of useful information: currency, relevance, authority, accuracy, and purpose, a measure often referred to as the CRAAP test, developed by librarians at CSU for evaluating web resources [1]. A description of these characteristics is in order, to highlight their importance.
-Currency: places value on the timeliness of the information: when it was published, and any updates or revisions. If there are links or references to the information, their credentials and significance (shares, likes, and reposts in the case of social media platforms) matter; these chains add context and detail to the story or information.
-Relevance: relates the information to the recipient's needs; it relays the intended audience and the significance of the information to the recipient's topic of interest or questions. Is the level of information suitable for the needs? Is the source reliable, and compared to other sources is there confidence in its use?
-Authority: looks at the information source, the author or publisher, their reputation and credentials, any affiliations, qualifications, and contact information, and whether there exist any revelations about the author or source.
-Accuracy: deals with the correctness of the information content: the evidence supporting it, reviews, references, where it comes from, and independent corroboration, either from an independent source or from personal knowledge; also any obvious biases, and the tone and structure of the content.
-Purpose: looks at why the information exists, its justification: to inform, teach, or persuade. Are its intentions obvious and objective? An opinion? Fact, or propaganda? And are there biases based on a worldview, political leaning, or an institutional or personal view?
Each characteristic is scored on a scale of [-1, +1], with -1 denoting distrust and +1 trust. Each information object is graded on a trust continuum, with the average signifying the trust rating. While many of the CRAAP elements were designed with online media in mind, they hold true for a cross-section of information media.
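The scoring rule just described can be sketched in a few lines of Python. This is a minimal illustration of the per-characteristic scoring and averaging stated above, not part of the model's formal definition; the function and variable names, and the example scores, are our own.

```python
# A minimal sketch of the CRAAP-based scoring described in the text:
# each characteristic is scored in [-1, +1] and the average gives the
# trust rating on the continuum.

CHARACTERISTICS = ("currency", "relevance", "authority", "accuracy", "purpose")

def craap_score(scores: dict) -> float:
    """Average the per-characteristic scores, each in [-1, +1].

    -1 denotes distrust, +1 trust; the mean places the information
    object on the trust continuum.
    """
    for name in CHARACTERISTICS:
        value = scores[name]
        if not -1.0 <= value <= 1.0:
            raise ValueError(f"{name} score {value} outside [-1, +1]")
    return sum(scores[name] for name in CHARACTERISTICS) / len(CHARACTERISTICS)

# Hypothetical example: a well-sourced but slightly dated article.
example = {"currency": 0.2, "relevance": 0.8, "authority": 0.6,
           "accuracy": 0.7, "purpose": 0.7}
print(round(craap_score(example), 3))  # 0.6
```

A score near 0 would signal the uncertainty state described later in the model, where there is not enough information to decide either way.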
Knowledge is an important parameter in the decision-making process; it is not enough to know how, one must also know when to apply what is known. We introduce the quantity Knowledge (K), a combination of tacit and explicit knowledge: K = Kt + Ke. Explicit knowledge (Ke), the knowledge that can be drawn upon, is not enough on its own; knowing when to use this knowledge, tacit knowledge (Kt), is equally essential. Value is further enhanced by a combination of these two knowledge domains. The thinking is not to treat knowledge as a collection of discrete parts, but as a mosaic made up of a blend of different elements, all contributing to create the image [32,25,9].
Knowledge flows with ease in an ecology, a community of shared interest; it produces a synergy of collective wisdom and experience, a narrative of a sort, a sequential presentation of causes, effects, and events. Individuals and groups give different forms of narratives (a scientist in experiments, researchers in their research, an economist in models), all important aspects of learning and education [8,2], with the aim of delivering information and principles which apply to different situations, times, and places. It is not the common narrative per se that brings people together to form an ecology of collaborative information sharing, but collective interpretation, a basis for elucidation. Herein we introduce a network of shared interest, an Ecology (E), into the model: a knowledge base of representative insights, a collective pool of knowledge and insight which allows the sharing of knowledge; the concept of a world view of related information that is consistent and around the same topic [8,2]. It is worth noting that knowledge resides less in structures than in people [8], and it is harder to isolate than information; learning, like knowledge, is much more than information, or search and retrieval, as exemplified when expertise is lost. We learn by repetition, and practice shapes assimilation; hence the need to pull from a collaborative pool of learning and knowledge, not simply to get information but to allow for sagacity and to use the information to predicate divergent information behavior:

E = h + v

The cluster or Ecology (E) comprises a horizontal view (h) and a vertical view (v). The horizontal view, a community of practice, draws on the collective pool of knowledge from a community or an ecology of complementary practice; it is bound by proximity and by similar ethics and culture, but it inhibits the flow of knowledge outside the cluster. To address this pitfall, we combine it with a vertical view of clusters that are not bound to these communities by geography, ethics, or a shared practice, allowing them to share knowledge which might otherwise have remained bound within these structures by process. The goal is to capture the interchange required to preserve the resilient, informal relations needed to allow the flow of knowledge. Lastly, a time factor (t) is introduced to preserve and deliver information across space and time. It is vital, since information structures society and trends dazzle (as lexicons like "fake news" currently do), in order not to lose sight of what is essential and to preserve the value of information for reuse. Time also frames the way information is presented and much more, giving context to how we read and interpret, where we read, meaning, and importance.
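The two additive quantities above can be sketched directly. This is a trivial but exact rendering of K = Kt + Ke and E = h + v as stated in the text; the argument names and the example figures are our own illustrative choices.

```python
# The knowledge and ecology quantities are simple sums in the model.

def knowledge(tacit: float, explicit: float) -> float:
    """K = Kt + Ke: tacit (knowing when) plus explicit (knowing how)."""
    return tacit + explicit

def ecology(horizontal: float, vertical: float) -> float:
    """E = h + v: community-of-practice view plus cross-cluster view."""
    return horizontal + vertical

# Illustrative values only: an agent with balanced knowledge and a
# modest ecology score.
print(knowledge(0.4, 0.6), ecology(0.05, 0.05))  # 1.0 0.1
```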
For simplicity, a metric that takes values in [-1, +1] is used, where -1 represents trust in the negative (no confidence, distrust), 0 depicts a state of uncertainty or ignorance (not enough information to decide to trust an information object), and +1 a point on the continuum where we have enough to trust.
To estimate the value of information, we have X = x_1 + x_2 + ... + x_n, the sum of the characteristic scores of the information: currency, relevance, authority, accuracy, and purpose. The values of Importance (I) and Utility (U) in context are added to the information characteristics.
For balance and perspective, the H factor, the elements that help mold information, is factored in as a product, giving the value of information V = (X + I + U)H. Reputation is R = [(s + δL)/K]E, with δL signifying captured changes as a result of the links the information has passed through. Trust in information, T(i), is then seen as the quotient of the sum of value and reputation with time, for fixity or immutability.
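The formulas above can be sketched as plain functions. This is our reading of the composition; in particular, taking the value as V = (X + I + U) · H and the reputation bracket as a division by K is an interpretation where the text is ambiguous, and the function names are illustrative rather than the paper's notation.

```python
# A sketch of the trust computation, under our reading of the formulas:
#   H = o/c                    (harmony: order over complexity)
#   V = (X + I + U) * H        (value, balanced by harmony)
#   R = ((s + dL) / K) * E     (reputation, adjusted by knowledge and ecology)
#   T(i) = (V + R) / t         (trust in the information)

def harmony(order: float, complexity: float) -> float:
    """H = o/c, echoing Birkhoff's aesthetic measure M = O/C."""
    return order / complexity

def information_value(x_sum: float, importance: float, utility: float,
                      h: float) -> float:
    """V = (X + I + U) * H, with X the summed characteristic scores."""
    return (x_sum + importance + utility) * h

def reputation(s: float, delta_l: float, k: float, e: float) -> float:
    """R = ((s + dL) / K) * E; dL captures changes accrued over links."""
    return ((s + delta_l) / k) * e

def information_trust(value: float, rep: float, t: float) -> float:
    """T(i) = (V + R) / t."""
    return (value + rep) / t

# With s = 0.6, dL = 0, K = 1, E = 0.1 (figures used in the scenario):
print(round(reputation(0.6, 0.0, 1.0, 0.1), 2))  # 0.06
```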
T(i) = (V + R)/t

Scenario

In this section, the model is put through a use-case example taking into consideration context, environment, applications, and possibly the "personalities" of agents; this is vital, considering that in cooperative, collaborative, and decision-making processes a lot of factors come into play. Alice and Bob are high-frequency traders; they both have agents that help manage their busy lives, including personal and professional matters. Alice's and Bob's agents belong to the same trusted circle. Bob is away on a remote retreat somewhere in the Amazon. Alice's agent comes across an unverified report suggesting that a product launch at an upcoming tech company is about to be canceled due to a security flaw in their groundbreaking product. Alice's agent does not have a mechanism to verify the viability of this report and passes the information to Bob's agent, since they both belong to the same trusted circle and Bob has an interest in this company. Bob's agent acts on this information and sells short, hoping to avert any loss from the expected gloom. The report turns out to be false; the company's stock rebounds and Bob's portfolio is left at a loss. Bob's agent gets hold of Bob and explains the situation, which leaves a bitter taste. The cost could be higher if some other agent in the circle decided to let this information out into the open, which could potentially have broader economic reverberations. The trust rating for Alice's agent is reduced, and the need to upgrade the information trustworthiness mechanisms of both agents is realized. Alice's agent could have avoided this scenario if the value it assigned to information had been based on a socially aware trust model, using an objective state of the information, its source, reputation, behavior, and the social context surrounding it.
We have two agents A and B. After receiving the information, A will require more insight before deciding to act; the trust threshold to respond positively or negatively depends on some factors, including the importance and the utility attached to this scenario, which will be considerably high based on the cost-benefit ratio. It is safe to say the benefit and importance attributed to information directly affect the pressure to act. In a different context, say agent A passes on information regarding an ongoing sale of a needed household item at a particular outlet; agent B can decide not to purchase this item from the proposed store. Though the item may be needed, and the value placed on this information is high considering all available parameters, the importance in this context does not require the agent to act. In our trading context, B's threshold is high; A has provided information without content, hence, factoring in all the elements connected to the model, the metric attributed to V(i) will be low. The only component that works in A's favor is its membership of B's trusted circle. Though A's reputation may presently be high based on previous behavior, there is little content regarding the information, either from A or from other similar links represented by δL.
Because of this lack of content, the value attributed to X = x_1 + x_2 + ... + x_n (the elements constituting the quality of information, like justification and purpose) will be low, invariably having a negative impact on the Harmony factor H = o/c because of a lack of balance and perspective. Depending on circumstances, the weights attributed to some elements vary; B might decide to revise its threshold to invest based on positive metrics assigned to the cluster (E = h + v) and knowledge (K = Kt + Ke) values, because it has tried to glean insight from experience, looking at ideas drawn from the collective pool of insight within B's trusted circle, while not neglecting related communities who might share joint membership, or agents who interact and are linked together by complementary practice but separated by proximity, ethics, and perhaps culture. Agent A might likewise adjust its threshold based on its updated knowledge of the situation.

Mathematical Analysis
Importance (I) and Utility (U) play an essential part in trust decisions; they are a measure of the cost, benefit, and risk involved in a decision, and they vary depending on context. Agents can adjust their decisions based on these values; though trust might be low, the risk of not acting could be high. For simplicity, we have chosen values in [-1, +1]. For the first scenario, T(i) = (V + R)/t. For Alice and Bob, to calculate trust in the information, we first look at the values for the characteristics of information, giving V = 1.24, and then the reputation:

R = [(0.6 + 0)/1] × 0.1 = 0.06

T(i) = (1.24 + 0.06)/1 = 1.3

Time (t) is held constant. Considering that the values of I and U are higher than the trust value in the information, and taking into consideration its lack of content, Bob's agent will be better off ignoring the information from Alice's agent. We have kept the model simple to aid understanding and make it easy to relate to, in a bid to help people think more, to have a second thought, before sharing information.

The concept of trust in information is not novel; there have been several studies on the idea of reliability in information and its source, cutting across disciplines [29,8,13,4,6]. Recent events in social, economic, and political spheres have given rise to continued interest, with proposed solutions ranging from experiments and technical solutions to policy, law, journalism, and other social science endeavors. The inference drawn so far from this work suggests that technology tends to outpace policy and law [14,31], a trend no different in the fake news situation. There are various arguments on where the focus should lie in tackling the challenges raised [14,13,29,31]. In the course of our research these questions, of whether policy, law, or technology should be at the forefront of efforts to address the situation, have also come to our attention.
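Returning to the numerical example above, the arithmetic can be checked directly. V = 1.24 is taken as given from the characteristic scores; reading the bracketed reputation term as a division by K = 1 is our assumption, since the stated result of 0.06 requires it.

```python
# Re-running the Alice-and-Bob figures from the analysis above.
# V = 1.24 is taken as given; s, dL, K, E reflect our reading of the
# worked numbers (K = 1 is assumed, as the stated result requires).
V = 1.24                          # value of the information
s, dL, K, E = 0.6, 0.0, 1.0, 0.1  # source score, link delta, knowledge, ecology
R = ((s + dL) / K) * E            # reputation
t = 1.0                           # time, held constant
T = (V + R) / t                   # trust in the information
print(round(R, 2), round(T, 2))   # 0.06 1.3
```

Since I and U for the trading scenario exceed this trust value, the decision rule in the text says Bob's agent should not act on the information.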
This work presents an overview of the trustworthiness of information and its source from different perspectives, and proposes an information trust model based on a multidisciplinary approach. We believe there is no one perfect solution, and a multidisciplinary approach encompasses the strengths of the various disciplines. The model in this work is based on such an idea, presenting information trustworthiness as a social phenomenon which requires a social context in tackling some of the ills associated with deviant information behavior. Our approach is not novel in that we set about trust in information in a computational environment, its propagation and representation [21,16], with the focus on enabling agents to interact with, or factor in, the "noise" and background of information in making decisions, with trust playing an integral role. Further refinement is needed to incorporate decision support and justification through comfort, AMI interfaces, information sharing, and management systems.
So far we have focused on the following areas:
-The nature of information
-Information characteristics
-Information behavior and literacy
-The information ecosystem and its anomalies (misinformation, disinformation, rumors, fake news...)
-The history and dynamics of rumors
-Trust
-Device Comfort
-Information theory
In the course of our research, we have looked at the nature of information, information behavior, and literacy. The inference drawn indicates that the problem of misinformation, disinformation, hoaxes, rumors, and unverified claims is a persistent challenge which appeals to our social cognitive existence, and there are few signs that it will go away simply by highlighting the problem. It is a narrative that requires a multidisciplinary approach, because the contributing factors discussed earlier are not driven by technology alone; there are sociological, psychological, and economic undertones [13,29]. Modelling these factors is not straightforward because of the inherent biases among groups in society. Another particularly exciting challenge is the communication of information trust metrics to non-artificial agents in a way that will enhance the value and trustworthiness of information, an essential property in an increasingly interconnected and data-driven world. The next phase of the research is model implementation using game theory, a well-recognized methodology often employed in many scientific as well as social science disciplines for performing research, experimenting with agents, and refining strategies. The simplicity of the Optional Prisoner's Dilemma makes it an ideal tool for estimating the value, cost, and benefit of the model, as well as for measuring the performance of the agents and the society. Finally, model evaluation will gauge performance, compare results, and explore the possibilities of incorporating the model into agents on IoT and media platforms, towards the goal of achieving a verifiable and trustworthy information model.

Table 1.
Notations and Value