Why Not Just Backtest?


  • Greg Samsa, Department of Biostatistics and Bioinformatics, Duke University, 11084 Hock Plaza, Durham, NC 27710, USA




Keywords: Artificial intelligence, Backtesting, Big data, Predictive modeling


Backtesting involves applying an investment strategy or predictive model to historical data in order to assess its performance. Here, we apply general statistical principles to the question of whether, when, and why backtesting is likely to be successful. Our use case is the JPMorgan Equity Premium Income ETF (JEPI), an exchange-traded fund whose investment strategy is advertised as being supported by artificial intelligence, including backtesting. One problem with backtesting stock returns is that suitable external datasets for assessing model performance are difficult to identify. Another is that the modeling process can lack a clear separation between the training and test samples. Neither of these sources of bias is fixed by using big data analytics. An implication of these difficulties is that the performance of AI-derived predictive models is likely to be overestimated, even in the presence of backtesting. Moreover, backtesting will not fully assess how well a predictive model generalizes to other circumstances. The results of backtesting should therefore not be taken at face value, and modelers are encouraged to state explicitly what must be assumed for an investment strategy to continue to perform.
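The train/test separation problem described above can be illustrated with a minimal, hypothetical sketch (not the paper's model or JEPI's actual strategy): when a strategy is selected because it backtested best on the full history, its in-sample performance is inflated relative to performance on data the selection step never saw, even when the underlying returns are pure noise.

```python
import random

# Hypothetical illustration of selection bias in backtesting.
# Returns are pure noise (no real edge), yet picking the best of
# many random signals on the FULL history yields an apparent edge.
random.seed(0)

N = 1000
returns = [random.gauss(0, 0.01) for _ in range(N)]  # noise-only "history"

def strategy_return(signal, rets):
    """Mean return over the periods where the strategy is in the market."""
    picked = [r for s, r in zip(signal, rets) if s]
    return sum(picked) / len(picked) if picked else 0.0

# Generate many random long/flat signals and keep the one that
# backtests best on the same data used to judge it (train = test).
signals = [[random.random() < 0.5 for _ in range(N)] for _ in range(200)]
best = max(signals, key=lambda s: strategy_return(s, returns))

in_sample = strategy_return(best, returns)

# Honest check: fresh noise that the selection step never touched.
fresh = [random.gauss(0, 0.01) for _ in range(10_000)]
out_of_sample = strategy_return(best, fresh)

print(f"in-sample: {in_sample:.5f}, out-of-sample: {out_of_sample:.5f}")
```

Because the "best" signal was chosen on the evaluation data itself, its in-sample mean return is reliably positive while its out-of-sample mean return hovers near zero, which is the overestimation the abstract warns about.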




How to Cite

Samsa, G. (2023). Why not just backtest? Archives of Business Research, 11(5), 72–79. https://doi.org/10.14738/abr.115.14677
