Abstract:
Software project managers have limited project resources. Requests for security improvements must compete with other requests, such as for new tools, more staff, or additional testing. Deciding how and whether to invest in cybersecurity protection requires knowing the answer to at least two questions: What is the likelihood of an attack, and what are its likely consequences? Security analysts understand a system's vulnerability to potential cyberattacks fairly well, but to date, research on the economic consequences of cyberattacks has been limited, dealing primarily with microanalyses of attacks' direct impacts on a particular organization. Many managers recognize the significant potential of a cyberattack's effects to cascade from one computer or business system to another, but there have been no significant efforts to develop a methodology to account for both direct and indirect costs. Without such a methodology, project managers and their organizations are hard pressed to make informed decisions about how much to invest in cybersecurity and how to ensure that security resources are used effectively. In this article, we explore how others have sought answers to our two questions. We describe the data available to inform decisions about investing in cybersecurity and look at research models of the trade-offs between investment and protection. The framework we present can help project managers find appropriate models with credible data so that they can make effective security decisions.
Keywords:
computer crime; computer security; costs; cyberattack economic consequences; cybersecurity; cybersecurity protection; data security; decision making; economics; investment; models; project management; protection; resource management; security decision making; software development management; software project management; software reliability; system vulnerability; testing