Automatically evaluating the efficiency of search-based test data generation for relational database schemas

Kinneer, Cody and Kapfhammer, Gregory M. and Wright, Chris J. and McMinn, Phil

Proceedings of the 27th International Conference on Software Engineering and Knowledge Engineering, 2015

Abstract

The characterization of an algorithm’s worst-case time complexity is useful because it succinctly captures how its runtime will grow as the input size becomes arbitrarily large. However, for certain algorithms — such as those performing search-based test data generation — a theoretical analysis to determine worst-case time complexity is difficult to generalize and thus not often reported in the literature. This paper introduces a framework that empirically determines an algorithm’s worst-case time complexity by doubling the size of the input and observing the change in runtime. Since the relational database is a centerpiece of modern software and the database’s schema is frequently untested, we apply the doubling technique to the domain of data generation for relational database schemas, a field where worst-case time complexities are often unknown. In addition to demonstrating the feasibility of suggesting the worst-case runtimes of the chosen algorithms and configurations, the results of our study reveal performance trade-offs in testing strategies for relational database schemas.
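The doubling technique described in the abstract can be sketched as follows. This is a minimal illustration only, not the paper's ExpOse tool; the function names and parameters are hypothetical. The idea is to run an algorithm on inputs of size n, 2n, 4n, and so on, recording the ratio of successive runtimes: a ratio near 2 suggests linear growth, near 4 suggests quadratic, and in general log2 of the ratio approximates the exponent of a polynomial worst case.

```python
import time

def doubling_experiment(algorithm, make_input, start_n=1000, rounds=5):
    """Estimate an algorithm's order of growth empirically by doubling
    the input size each round and recording the runtime ratio between
    successive rounds.  Returns a list of (n, elapsed, ratio) tuples;
    the ratio is None for the first round."""
    results = []
    n = start_n
    prev_elapsed = None
    for _ in range(rounds):
        data = make_input(n)
        t0 = time.perf_counter()
        algorithm(data)
        elapsed = time.perf_counter() - t0
        ratio = elapsed / prev_elapsed if prev_elapsed else None
        results.append((n, elapsed, ratio))
        prev_elapsed = elapsed
        n *= 2  # double the input size for the next round
    return results
```

For example, `doubling_experiment(sorted, lambda n: list(range(n, 0, -1)))` would time Python's built-in sort on reverse-ordered lists of doubling size. In practice (as the paper notes for search-based test data generation), runtimes are noisy, so several repetitions per input size and a robust summary statistic are advisable.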

Resources

Paper

Presentation

gkapfham/seke2015-paper

kinneerc/ExpOse

Reference

@inproceedings{Kinneer2015,
  author = {Kinneer, Cody and Kapfhammer, Gregory M. and Wright, Chris J. and McMinn, Phil},
  title = {Automatically evaluating the efficiency of search-based test data generation for relational database schemas},
  booktitle = {Proceedings of the 27th International Conference on Software Engineering and Knowledge Engineering},
  year = {2015},
  paper = {https://github.com/gkapfham/seke2015-paper},
  tool = {https://github.com/kinneerc/ExpOse},
  presented = {true}
}