Authors:
Rotella, Pete; Chulani, Sunita; Pradhan, Subrata
Abstract:
Summary form only given. Before a major feature release is made available to customers, it is important to anticipate whether the release will be of lower quality than its predecessor. Our research group has developed models that use development and test times, resource levels, code added, and bugs found and fixed (or not fixed) to predict whether a new feature release will achieve a key quality goal: to be of better quality than its predecessor release (a minimal illustrative sketch of this kind of prediction appears below). If the release quality prediction models, developed early in the development-branch integration phase, indicate a likely upcoming quality problem in the field, a second set of predictive models ("playbook" models) is then developed and used by our team to identify development or test practices in need of improvement. These playbook models are key components of what we call "quality playbooks," which are designed to address several objectives:

- Identify "levers" that positively influence feature release quality. Levers are in-process engineering metrics that are associated with specific development or test processes/practices and measure their adoption and effectiveness.
- Where possible, identify levers that can be invoked early in the lifecycle, so that the development and test teams can improve deficient practices and remediate the release currently under development. If early levers cannot be identified but later-lifecycle levers can, deficient practices can be changed only to improve the quality of future successor releases.
- Determine the potential quality impact of the changes suggested by the profile of significant levers; development teams are unlikely to act on low-impact levers.
- Determine the resource and schedule investments needed to change and implement practices: training, disruption, additional engineering time, etc.
- Using the impact and investment calculations, identify which practices to change, either for the current release or only for subsequent releases, and develop a prioritization/ROI scheme to provide planning guidance to development and test teams (see the second sketch below).
- Identify the specific practice changes needed, or new practices to adopt.
- Design and plan pilot programs to test the models, including the impact and investment components.

Using this "playbook" approach, our team has developed models for 31 major feature releases resident on 11 different hardware platforms. These models have identified six narrowly defined classes of metrics that include both actionable levers and "indicator" metrics that correlate well with release quality. (Indicator metrics also correlate well but are less specifically actionable.) The models for these six classes of metrics (and their associated practices) include strong levers and strong indicators for all releases and platforms examined thus far. Impact and investment results are also described in this paper, as are pilot programs that have tested the validity of the modeling and business calculation results. Two additional large-scale pilots of the "playbook" approach are under way, and these are also described.
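The abstract does not name the modeling technique used for release quality prediction. As an illustration only, the following minimal sketch shows the kind of binary prediction described above, using logistic regression over the in-process inputs the abstract lists; the choice of classifier and all feature names and numbers are assumptions, not content from the paper.

    # Minimal sketch (assumed technique: logistic regression; the paper does
    # not name its classifier). Features mirror the abstract's inputs:
    # development/test times, resource levels, code added, and bug counts.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical per-release metrics, one row per prior feature release:
    # [dev_weeks, test_weeks, engineers, kloc_added, bugs_found, bugs_unfixed]
    X = np.array([
        [40, 12, 85, 210,  950, 40],
        [36, 10, 70, 150,  700, 25],
        [48, 14, 95, 300, 1400, 90],
        [30,  8, 60, 120,  500, 15],
    ])
    # Label: 1 if the release beat its predecessor's field quality, else 0.
    y = np.array([1, 1, 0, 1])

    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Score an in-flight release early in branch integration.
    candidate = np.array([[44, 11, 80, 260, 1200, 70]])
    p_better = model.predict_proba(candidate)[0, 1]
    print(f"P(better quality than predecessor) = {p_better:.2f}")

In practice such a model would be trained on many historical releases per platform; four rows appear here only to keep the sketch self-contained.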
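The prioritization/ROI step can be sketched in the same spirit: rank candidate practice changes by estimated quality impact per unit of investment. The lever names, impact figures, and scoring rule below are illustrative assumptions, not the paper's actual scheme.

    # Hypothetical ROI-style prioritization of lever-driven practice changes.
    # All lever names and numbers are illustrative assumptions.
    levers = [
        # (practice change, est. field-defect reduction %, engineer-weeks)
        ("increase unit-test coverage",      8.0, 12.0),
        ("earlier static-analysis gating",   5.0,  4.0),
        ("stricter code-review turnaround",  3.0,  6.0),
        ("added regression-test automation", 9.0, 20.0),
    ]

    # ROI proxy: estimated impact per engineer-week of investment.
    ranked = sorted(levers, key=lambda l: l[1] / l[2], reverse=True)
    for name, impact, invest in ranked:
        print(f"{name:35s} impact={impact:4.1f}%  "
              f"invest={invest:4.1f} ew  roi={impact / invest:.2f}")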
Keywords:
economic indicators; investment; prediction theory; quality management; resource allocation; scheduling; actionable levers; bugs; business calculation; development branches integration phase; development identification; future successor releases quality; hardware platforms; in-process engineering metrics; indicator metrics; investment calculation identification; investment components; key quality goal; large-scale pilots; modeling validity; potential quality impact; predecessor release improvement; predictive models; prioritization-ROI scheme; quality playbook; quality prediction models; resource investments; resource levels; schedule investments; significant lever profile; test practices; test processes; test teams; test times; Abstracts; Computer bugs; Current measurement; Investments; Predictive models; Schedules