Author :
Chan, Hoi ; Segal, Alla ; Arnold, Bill ; Whalley, Ian
Author_Institution :
IBM Thomas J. Watson Res. Center, Hawthorne, NY, USA
Abstract :
Policy-based systems are becoming increasingly common; the emerging areas of autonomic (Horn; Kephart and Chess, 2003) and on demand (Kephart and Walsh, 2004) computing are accelerating the adoption of such systems. As the requirements on policy-based systems become more complex, traditional approaches to their implementation, such as relying entirely on simple "if [condition] then [actions]" rules, become insufficient. New approaches to the design and implementation of policy-based systems have emerged, including goal policies (Kephart and Walsh, 2004; Bandara et al., 2004), utility functions (Kephart and Walsh, 2004), data mining, reinforcement learning, and planning. Unfortunately, these new approaches do nothing to reduce administrators' skepticism towards policy-based automation - how is an administrator to believe that a policy-based system will help his systems perform better? Unless policy-based systems are trusted at least as much as traditional systems, their acceptance is unlikely to increase. In this report, we describe an approach by which a policy-based system can win the trust of its users, and can continuously adjust itself to make better decisions based on the users' preferences.
Keywords :
data mining; learning (artificial intelligence); planning (artificial intelligence); security of data; autonomic computing; on demand computing; policy system; policy-based automation; reinforcement learning; utility function; Acceleration; Data mining; Energy management; Humans; Learning; Multiagent systems; Power system management; Software agents; Software systems; User interfaces