But how should an asset manager actually use these high-potential tools? Again, there are various possible models. A portfolio manager may simply read a report generated with data and AI and then make decisions accordingly; equally, a data signal may feed directly into an algorithmic trading model itself.
Models may be developed in-house, or they may be taken from a provider (such as KPMG Lighthouse). Or there may be a hybrid approach – working in partnership to develop and enhance a model that can then be applied across a portfolio.
Today’s models are becoming increasingly sophisticated, drawing on a range of technologies such as AI, machine learning (ML) and natural language processing (NLP). Nor do they stand still – alongside established ML and NLP approaches, advanced variants now make use of techniques such as neural networks.
Whether it’s screening investments and forecasting M&A events ahead of time; monitoring investments and forecasting default risks before they come to pass; or using news analytics and other external monitoring to pick up potential risk and reputational issues – data and AI techniques provide a powerful tool.
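To make the news-analytics idea concrete, here is a deliberately simple sketch of screening headlines for risk signals. It is purely illustrative: real news analytics uses trained NLP models rather than a keyword list, and the risk terms and headlines below are invented for the example.

```python
# Hypothetical, simplified stand-in for NLP-based news monitoring.
# Production systems would use trained language models, not keyword matching.
RISK_TERMS = {"lawsuit", "probe", "default", "downgrade", "recall"}

def flag_headlines(headlines):
    """Return the headlines containing any risk term (case-insensitive)."""
    flagged = []
    for headline in headlines:
        words = set(headline.lower().split())
        if words & RISK_TERMS:  # any overlap with the risk vocabulary
            flagged.append(headline)
    return flagged

# Invented example headlines for illustration only.
news = [
    "Acme Corp reports record quarterly revenue",
    "Regulators open probe into Acme Corp accounting",
    "Acme Corp bond rating downgrade expected",
]
flagged = flag_headlines(news)
print(flagged)  # the two headlines mentioning "probe" and "downgrade"
```

In practice the flagged items would feed a reputational-risk dashboard or alerting workflow rather than a simple print statement.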
For example, a default predictor that we have developed at KPMG Lighthouse ingests about 20 million data points spanning the previous ten years, covering all public companies across North America. The model can be reviewed, validated and tuned across hundreds of iterations to achieve optimal performance. The output includes explanatory detail too – not just highlighting the investments at highest risk, but listing the factors behind that assessment.
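The KPMG Lighthouse model’s internals are not public, but the general pattern of a risk score that also explains itself can be sketched with a toy linear model. The feature names and weights below are assumptions for illustration only; the point is that each factor’s contribution to the score can be surfaced alongside the score itself.

```python
# Toy sketch of an explainable default-risk score (not the actual model).
import math

# Illustrative weights, invented for this example:
# a positive weight raises default risk, a negative weight lowers it.
WEIGHTS = {
    "debt_to_equity": 0.8,
    "interest_coverage": -0.6,
    "revenue_growth": -0.4,
    "negative_news_mentions": 0.5,
}

def default_risk(features):
    """Return (probability, factor contributions ranked by magnitude)."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-score))  # logistic squash to [0, 1]
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return probability, ranked

# Invented inputs for one hypothetical company.
prob, factors = default_risk({
    "debt_to_equity": 2.5,
    "interest_coverage": 1.2,
    "revenue_growth": -0.1,
    "negative_news_mentions": 3.0,
})
print(f"default probability: {prob:.2f}")
for name, contribution in factors:
    print(f"  {name}: {contribution:+.2f}")
```

This is the shape of the explanatory output the text describes: not just “this investment is high risk”, but the ranked factors driving that conclusion, which a portfolio manager can then interrogate.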
It is important to remember, however, that these tools are intended to augment human judgment, not replace it. They provide insights that would otherwise be difficult (or impossible) for a human being to capture – but ultimately, final decisions remain with the portfolio manager.