Machine learning is a subset of artificial intelligence (the science of developing machines to perform human tasks using advanced analytics and algorithms). It is iterative in nature: machines develop responses to data, and those responses are fed back into the system, allowing the machine to produce increasingly sophisticated outputs. Although the data used is typically internal historical records, increasing access to larger quantities of data via data trusts (data banks that link data producers and consumers) provides valuable opportunities to produce more accurate forecasts.
Marketing: analysis of internet activity, previous campaign responses and app usage to forecast the effectiveness of future campaigns.
Algorithmic trading: previously, banks used algorithms to make transaction decisions based on shares reaching set purchase/sale prices. Machine learning will enable banks to make more informed decisions by taking market behaviours, strategies and trade predictions into consideration.
Customer experience: machine learning can be used to improve the responsiveness of chatbots, thereby improving the customer experience. An example is the Erica system, implemented by an American universal bank.
Fraud prevention: predictive analytics can be used to analyse large quantities of data to highlight and prevent fraudulent transactions.
Risk Management: identification of market trends that could affect the risk of investments.
Currently, machine learning has a robust and growing presence within the banking industry. However, it remains relatively underused within the finance function, and hence represents a valuable resource to leverage.
Machine learning forecasting delivers highly accurate, predictive forecasts, enabled by the iterative nature of machine learning, in which information is continually fed back into the system. Combining historical information with new data (both internal and external) is what makes this level of accuracy possible.
A large bank was trying to develop a forecasting product for its clients in response to the inaccuracy of its existing in-house model.
KPMG built a prototype Minimum Viable Product (MVP) and provided recommendations and support during the build and implementation.
The solution was developed by KPMG in the US. Its centrepiece was an automated analytics pipeline, used to help the client scale products and improve forecasting.
The pipeline provided scalability across data cleansing, exploratory data analysis, model evaluation, and forecast generation. In addition, various time series models were created and compared, and the best was chosen to produce the forecast for a given horizon.
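The model-evaluation step can be pictured as a simple bake-off: fit several candidate forecasters on a training window, score each on a held-out horizon, and keep the winner. The sketch below is purely illustrative; the models, data and function names are invented for the example and do not reflect KPMG's actual pipeline.

```python
def naive_forecast(history, horizon):
    # Repeat the last observed value over the horizon.
    return [history[-1]] * horizon

def moving_average_forecast(history, horizon, window=3):
    # Flat forecast at the mean of the last `window` observations.
    return [sum(history[-window:]) / window] * horizon

def ses_forecast(history, horizon, alpha=0.5):
    # Simple exponential smoothing: flat forecast at the final smoothed level.
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return [level] * horizon

def mape(actual, predicted):
    # Mean absolute percentage error, in percent.
    return 100 * sum(abs(a - p) / abs(a) for a, p in zip(actual, predicted)) / len(actual)

def select_best_model(series, horizon, models):
    # Hold out the last `horizon` points, score each candidate, keep the lowest MAPE.
    train, test = series[:-horizon], series[-horizon:]
    scores = {name: mape(test, fit(train, horizon)) for name, fit in models.items()}
    return min(scores, key=scores.get), scores

# Illustrative daily balances (made up for this sketch).
series = [100, 102, 101, 105, 107, 110, 112, 111, 115, 118]
candidates = {
    "naive": naive_forecast,
    "moving_average": moving_average_forecast,
    "ses": ses_forecast,
}
best, scores = select_best_model(series, horizon=3, models=candidates)
```

In a production pipeline the candidate set would include far richer models, but the selection logic (holdout, per-horizon scoring, lowest error wins) stays the same shape.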
KPMG also analysed relevant external signals to uncover further forecasting opportunities.
KPMG produced a robust, intelligent forecast with a high average accuracy of approximately 90 percent over a 30-day horizon and 96 percent over a 7-day horizon.
In addition, KPMG demonstrated how external signals can be valuable when forecasting over mid- to long-term horizons. As evidence, models using the KPMG signals repository improved accuracy by more than 4 percentage points at 30 days and more than 8 percentage points at 12 months, compared with models that did not use the platform.
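The mechanism behind that lift can be illustrated with a toy experiment: when a series is partly driven by an observable external driver, a model that regresses on that driver beats a baseline that ignores it. Everything below is synthetic and hypothetical (the "demand index" signal, the data, the accuracy figures); it shows the principle, not KPMG's signals repository.

```python
def simple_ols(xs, ys):
    # Closed-form simple linear regression: y ≈ a + b * x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def accuracy(actual, predicted):
    # Forecast accuracy as 100 minus mean absolute percentage error.
    n = len(actual)
    return 100 - 100 * sum(abs(a - p) / abs(a) for a, p in zip(actual, predicted)) / n

# Synthetic cash-flow series driven by an external signal (a hypothetical
# demand index), plus a small deterministic disturbance.
signal = [float(i % 5) for i in range(40)]
series = [50 + 4 * s + (1 if i % 2 else -1) for i, s in enumerate(signal)]

train, test = 30, 10
a, b = simple_ols(signal[:train], series[:train])

# Baseline: flat forecast at the training mean (ignores the signal).
mean_fc = [sum(series[:train]) / train] * test
# Signal-aware forecast: regression on the external driver.
signal_fc = [a + b * s for s in signal[train:]]

acc_base = accuracy(series[train:], mean_fc)
acc_signal = accuracy(series[train:], signal_fc)
```

On this contrived data the signal-aware model is several accuracy points ahead of the baseline, mirroring (in miniature) why external signals matter most as the horizon grows and internal history alone loses predictive power.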