DocumentCode
3514622
Title
Automated Generation and Assessment of Autonomous Systems Test Cases
Author
Barltrop, Kevin J. ; Friberg, Kenneth H. ; Horvath, Gregory A.
Author_Institution
Jet Propulsion Lab., California Inst. of Technol., Pasadena, CA
fYear
2008
fDate
1-8 March 2008
Firstpage
1
Lastpage
10
Abstract
Verification and validation testing of autonomous spacecraft routinely culminates in the exploration of anomalous or faulted mission-like scenarios. Prioritizing which scenarios to develop usually comes down to focusing on the most vulnerable areas and ensuring the best return on investment of test time. Rules-of-thumb strategies often come into play, such as injecting applicable anomalies prior to, during, and after system state changes, or creating cases that ensure good safety-net algorithm coverage. Although experience and judgment in test selection can lead to high levels of confidence about the majority of a system's autonomy, it's likely that important test cases are overlooked. One method to fill in potential test coverage gaps is to automatically generate and execute test cases using algorithms that ensure desirable properties about the coverage, for example, generating cases for all possible fault monitors and across all state change boundaries. Of course, the scope of coverage is determined by the test environment capabilities, where a faster-than-real-time, high-fidelity, software-only simulation would allow the broadest coverage. Even real-time systems that can be replicated and run in parallel, and that have reliable set-up and operations features, provide an excellent resource for automated testing. Making detailed predictions for the outcome of such tests can be difficult, and when algorithmic means are employed to produce hundreds or even thousands of cases, generating predictions individually is impractical, and generating predictions with tools requires executable models of the design and environment that themselves require a complete test program. Therefore, evaluating the results of a large number of mission scenario tests poses special challenges. A good approach to address this problem is to automatically score the results based on a range of metrics. Although the specific means of scoring depends highly on the application, the use of formal scoring metrics has high value in identifying and prioritizing anomalies, and in presenting an overall picture of the state of the test program. In this paper we present a case study based on automatic generation and assessment of faulted test runs for the Dawn mission, and discuss its role in optimizing the allocation of resources for completing the test program.
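As a minimal illustration of the kind of coverage-driven generation and metric-based scoring the abstract describes (not taken from the paper; all monitor, boundary, and metric names below are hypothetical), a cross-product test case generator paired with a simple scorer might look like this:

```python
from itertools import product
from dataclasses import dataclass

# Hypothetical enumerations; the real sets would come from the flight
# software design (fault monitors, state-change boundaries, etc.).
FAULT_MONITORS = ["battery_undervoltage", "thruster_overtemp", "star_tracker_loss"]
STATE_BOUNDARIES = ["pre_burn", "burn_start", "burn_end", "safing_entry"]
INJECTION_PHASES = ["before", "during", "after"]

@dataclass
class TestCase:
    monitor: str    # fault monitor exercised by the injected anomaly
    boundary: str   # system state change the injection is aligned with
    phase: str      # injection timing relative to that boundary

def generate_cases():
    """Enumerate every monitor x boundary x timing combination so that
    coverage does not depend on tester judgment alone."""
    return [TestCase(m, b, p)
            for m, b, p in product(FAULT_MONITORS, STATE_BOUNDARIES, INJECTION_PHASES)]

def score_run(telemetry, case):
    """Toy scoring: each metric contributes to a single figure of merit
    used to rank runs for review.  Real metrics would be mission-specific
    (e.g. detection latency, correct response selection, safing achieved)."""
    score = 0
    score += 2 if telemetry.get("fault_detected") else 0
    score += 2 if telemetry.get("correct_response") else 0
    score += 1 if telemetry.get("reached_safe_state") else 0
    return score

if __name__ == "__main__":
    cases = generate_cases()
    print(f"{len(cases)} generated cases, e.g. {cases[0]}")
```

Low-scoring runs would then be surfaced first for manual analysis, which is the prioritization role the scoring metrics play in the test program.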
Keywords
aerospace testing; automatic testing; space vehicles; Dawn mission; automated autonomous systems test case generation; automated testing; faulted mission-like scenarios; formal scoring metrics; rules-of-thumb strategies; safety-net algorithm coverage; software-only simulation; validation testing; verification testing; Automatic testing; Investments; Laboratories; Orbital robotics; Power system protection; Propulsion; Software testing; Space technology; Space vehicles; System testing;
fLanguage
English
Publisher
ieee
Conference_Title
Aerospace Conference, 2008 IEEE
Conference_Location
Big Sky, MT
ISSN
1095-323X
Print_ISBN
978-1-4244-1487-1
Electronic_ISBN
1095-323X
Type
conf
DOI
10.1109/AERO.2008.4526484
Filename
4526484
Link To Document