# ExpOse: Inferring worst-case time complexity by automatic empirical study

### Kinneer, Cody and Kapfhammer, Gregory M. and Wright, Chris J. and McMinn, Phil

### Proceedings of the 27th International Conference on Software Engineering and Knowledge Engineering, 2015

### Abstract

A useful characterization of an algorithm’s efficiency, the worst-case time complexity gives an upper
bound on how an increase in the size of the input, denoted n, increases the execution time of the
algorithm, f(n). This relationship is often expressed in the "big-Oh" notation, where f(n) is
O(g(n)) means that the time increases by no more than on the order of g(n). Since the worst-case complexity
of an algorithm is evident when n is large, one approach for determining the big-Oh complexity of an
algorithm is to conduct a doubling experiment with increasingly larger input sizes. By measuring the
time needed to run the algorithm on inputs of size n and 2n, the algorithm’s order of growth can be
determined. This paper introduces expOse, a tool to derive an "EXPerimental big-Oh" for supporting
"Scalability Evaluation" — expOse infers an algorithm’s big-Oh order of growth by conducting a
doubling experiment automatically.
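The doubling experiment described in the abstract can be sketched in a few lines of Python. This is an illustrative sketch of the general technique, not expOse's actual implementation or API; the function name `doubling_experiment` and its parameters are hypothetical. The idea is that if f(n) grows like n^b, then doubling n multiplies the running time by about 2^b, so log2 of the ratio of successive timings estimates the exponent b.

```python
import math
import time

def doubling_experiment(algorithm, make_input, start_n=200, rounds=5):
    """Estimate the polynomial exponent b in f(n) ~ n^b by timing the
    algorithm on inputs of size n, 2n, 4n, ... and averaging log2 of
    the ratios of successive running times."""
    times = []
    n = start_n
    for _ in range(rounds):
        data = make_input(n)
        start = time.perf_counter()
        algorithm(data)
        times.append(time.perf_counter() - start)
        n *= 2
    # A doubling ratio near 2 suggests O(n), near 4 suggests O(n^2), etc.
    ratios = [t2 / t1 for t1, t2 in zip(times, times[1:])]
    return sum(math.log2(r) for r in ratios) / len(ratios)
```

For example, timing a deliberately quadratic routine such as `lambda d: sum(x * y for x in d for y in d)` on inputs produced by `lambda n: list(range(n))` should yield an estimate near 2. In practice, timing noise and constant or logarithmic factors blur the estimate at small input sizes, which is one reason a tool like expOse automates the experiment across many doublings.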

### Resources

- Paper: gkapfham/seke2015-tool-paper
- Tool: kinneerc/ExpOse

### Reference

```bibtex
@inproceedings{Kinneer2015a,
  author = {Kinneer, Cody and Kapfhammer, Gregory M. and Wright, Chris J. and McMinn, Phil},
  title = {ExpOse: Inferring worst-case time complexity by automatic empirical study},
  booktitle = {Proceedings of the 27th International Conference on Software Engineering and Knowledge Engineering},
  year = {2015},
  paper = {https://github.com/gkapfham/seke2015-tool-paper},
  tool = {https://github.com/kinneerc/ExpOse}
}
```