Author_Institution :
Fac. of Sci., Univ. of Ontario Inst. of Technol., Oshawa, ON, Canada
Abstract :
Trust and privacy are ancient social concepts. Work on formalizing trust dates back to the mid-1990s, while work on formalizing privacy is in its infancy. The two concepts share a number of similarities, including the use of information type and sensitivity to inform actions, the relationship between the communicating parties, and the context or purpose of communication. There are also key differences. Privacy, unlike trust, is legislated; in Canada, a number of regulations, directives, and policies accompany the legislation. Trust, on the other hand, is the Wild West: almost anything goes. Early attempts at formalizing privacy have been largely restricted to P3P initiatives and other policy developments. Not only have these been largely ignored by the user community, but because of their limited scope of application they also appear to fail to actually enable privacy protection. Early trust models have taken a different approach: using a natural-science approach and artificial agents, trust is circumscribed, simple and, most importantly, repeatable. Given the failed attempts at formalizing privacy through policy, lessons from trust can be applied to advance the computational notion of privacy protection. This paper takes the work on formalizing privacy in a much-needed new direction by examining the potential of an appropriate and applicable framework for privacy based on extant trust formalizations. It proposes that a formalization for privacy can be based on trust, outlines the types of privacy, examines privacy-based decision-making, and explores the applicability of agents as appropriate representations of people in the computational environment.