The power of data: a CFO perspective (part 2)

In part 1, we explored the evolving role of data in shaping financial decision-making and the imperative for CFOs to harness its intelligence. We discussed the challenges of relying solely on spreadsheets, the importance of understanding data sources, and the need for robust data governance. Alex Meakin, Dufrain’s Chief Financial Officer, now continues with reporting and prediction.


So…much…reporting…


Reporting and analysis is obviously a key part of the job for any Finance team, with a multitude of recurring and ad-hoc reporting required on financial and non-financial performance. For most CFOs, the ultimate aim of reporting is ‘I want to know what I want to know, when I want to know it and how I want to see it’.

The demands of reporting have also evolved significantly over time. Years ago, producing a monthly P&L against a budget prepared once a year, delivered a few weeks after month end, was the de facto expectation of the Finance team.

Nowadays, speed, insight and accuracy are king (or queen) when it comes to reporting and achieving that strived-for competitive edge. A variety of stakeholders (including the CFO) will want to understand the very essence of what the business is doing right and where it needs to improve immediately… if not yesterday.

Investing the time to structure your data in a way that enables you to meet these demands is more often than not going to lead to a competitive edge. It’s self-evident that those that make better, more informed decisions faster than others have a greater chance of success.

These days, though, reporting often isn’t a static concept. A view of a business’s performance and characteristics is likely to have a shelf life only as long as it takes to extract actionable insights from it. Beyond that, it simply becomes content filler: quantity of reporting over quality of reporting.

Reporting needs to be flexible and constantly evolve, looking at the business through different lenses and from different angles, identifying the most pertinent commercial points that management need to be cognisant of at that particular point in time. 

Reporting also needs to be engaging and tell a simple, clear story. We’re all busy, after all, and reporting that requires significant brain power to ascertain the message has probably fallen at the first hurdle.

Finally, you really don’t want to invest significant overhead in producing and preparing reporting. You want analysts to analyse trends and derive meaning, not spend large amounts of time preparing reports, which is just a drag on your profitability.

Visualisation tools such as Power BI or Tableau come into their own here. In addition to being great data exploration tools, they also allow you to create centrally hosted dashboards that multiple users can access, producing a variety of filtered and pivoted views of your structured data in a professional and engaging manner.

The best thing about these products is the time to value. They are incredibly intuitive to use, once you’ve learned the fundamentals, and the content produced can rapidly evolve into something that produces insight far beyond what is possible with more traditional tools.
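To make the idea of “filtered and pivoted views of structured data” concrete, here is a minimal sketch in Python using pandas, rather than Power BI or Tableau themselves, with entirely invented figures:

```python
import pandas as pd

# Invented transaction-level data of the kind a dashboard would sit on top of.
data = pd.DataFrame({
    "month":  ["Jan"] * 4 + ["Feb"] * 4,
    "region": ["UK", "UK", "EU", "EU"] * 2,
    "type":   ["actual", "budget"] * 4,
    "amount": [120, 110, 95, 100, 130, 125, 102, 100],
})

# One "view" of the structured data: amounts pivoted by month, split by
# region and actual vs budget. A dashboard would offer many such views
# interactively via filters and slicers.
view = data.pivot_table(index="month", columns=["region", "type"],
                        values="amount", aggfunc="sum")
print(view)
```

In Power BI or Tableau, the equivalent view would be built visually and published to a shared workspace, but the underlying principle (one structured dataset, many views) is the same.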

All of this is achievable; however, it clearly depends on getting the right infrastructure in place, using the right tools and having the right people to use them correctly.


I want to predict the future…


As already mentioned, the previous norm for businesses was to produce an annual business plan that would (in many instances) take months to complete, pulling in multiple stakeholders from across various business functions. Once this behemoth of a process was concluded, output analysis would be produced that was usually (sadly) out of date by the time it appeared. Quick fixes (fudges) would then be applied to the model to try to crowbar it into something that resembled current reality.

The business would then track against this Frankenstein’s monster throughout the year. Save for the most prescient (lucky) of us, or those working in particularly benign and predictable environments, real-world events would prove our carefully constructed predictions wrong within a few months of the financial year starting.

Clearly, the process as outlined is deficient when set against the needs of businesses today and the rapidly changing commercial environments they operate in. Businesses will often be in a constant re-forecasting cycle, course-correcting for things that didn’t pan out as expected or factors that are ultimately outside the business’s control.

Whilst Excel is often the go-to solution for modelling and financial forecasting, and you can create some very intricate models within it, it starts to hit the buffers when set against the demands of forecasting today.

The demands of business forecasting

These days, the demands of business forecasting encompass the need to:

  • Constantly evolve assumptions and outputs based on changes in real-world conditions and past experience
  • Produce multiple ‘what-if’ scenarios, showing a range of sensitivities 
  • Pull assumptions and historic data points from a wide variety of data sources
  • Have secure and reliable relationships between the variables within the forecast model that reflect the dynamics of the business and commercial environment
  • Allow multi-user input, possibly across multiple geographies
  • Have version control and an audit history to understand what (and maybe who) changed underlying assumptions and formulas within the model
  • Incorporate insights from predictive machine learning & AI models into the wider business modelling (more on this shortly).

Often the complexity of business relationships means that you need a solution that can handle multiple dimensions, as opposed to the rather 2D relationships that are achievable in Excel.
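As a rough illustration of what “multiple dimensions” means in practice, here is a sketch in pandas (not any particular planning product); the scenario, region and product names and the figures are all made up:

```python
import pandas as pd

# Hypothetical four-dimensional planning structure
# (scenario x region x product x month) -- the kind of shape that quickly
# becomes unmanageable as linked 2D sheets.
index = pd.MultiIndex.from_product(
    [
        ["base", "upside", "downside"],                     # scenario
        ["UK", "EU"],                                       # region
        ["product_a", "product_b"],                         # product
        pd.period_range("2025-01", periods=3, freq="M"),    # month
    ],
    names=["scenario", "region", "product", "month"],
)
forecast = pd.DataFrame({"revenue": range(len(index))}, index=index)

# Any dimension can be sliced or aggregated on demand, e.g. total revenue
# by scenario and region:
print(forecast.groupby(["scenario", "region"])["revenue"].sum().unstack("region"))
```

Dedicated planning tools hold exactly this kind of dimensional structure natively, with business rules, security and workflow layered on top.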

There are several products on the market today that perform this function really well, especially when it comes to defining business relationships in a multi-dimensional context, and these would always be my go-to choices for anything beyond simple business modelling and planning these days.

When these business models sit on top of structured data sources that can pipe real-time assumptions into them, the outputs become potentially extremely powerful in terms of the business’s ability to make real-time decisions that generate the greatest return.


AI & machine learning


One of the hottest areas in data currently is data science, machine learning and AI. Companies are pouring billions into research and development in this area in search of new products that will give them a competitive edge in a rapidly evolving landscape. 

Personally, I am a massive fan of all things machine learning and AI and have spent considerable time looking at the theory, technology and potential applications. I believe it has potentially massive benefits for us as a society. However, there is also clearly a lot of hype, both positive and negative, associated with this technology, the most extreme of which I struggle to subscribe to. At one end, I don’t believe it will be the end of humanity; at the other, the cynic in me suspects that some of the investment in this area is driven not by a well-thought-through commercial strategy but by a feeling of FOMO.

From a technology perspective this sits at the other end of the spectrum from Excel; however, from an investment perspective the same principle applies: is this the most appropriate solution for a given problem? For things like customer segmentation, recommendation engines or optimising pricing strategies it can work extremely well, although it has some key dependencies.

For a machine learning or AI model to learn successfully, it generally requires a large volume of clean data from which to derive meaningful relationships between the various features. Having all your data structured in a centralised repository, and knowing that it is accurate and clean, is a key dependency for most machine learning and AI models to work successfully.

At a basic level, most of these models learn by iteratively minimising a loss function, i.e. reducing the difference between the model’s predicted output and the actual output, given a set of inputs. Whilst these models can be quite good at overcoming noise in the input data, they can also be quite sensitive to under-fitting relationships within the data (i.e. not sufficiently modelling the underlying relationships) or over-fitting them (i.e. modelling very specific relationships within the training data that don’t really exist in the wider dataset, so the model performs poorly in a general sense). However, done well, with the right level of expertise and good-quality input data, these models can unlock insights that would not be possible to obtain otherwise.
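As a toy illustration of loss minimisation and over/under-fitting, the sketch below fits polynomials of increasing complexity to some synthetic noisy data, then compares the loss on the data used for fitting against the loss on held-out data (all numbers are invented for the example):

```python
import numpy as np

# Synthetic data: a smooth underlying relationship plus random noise.
rng = np.random.default_rng(seed=42)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, size=x.size)

# Simple holdout split: every other point is kept back for validation.
x_train, y_train = x[::2], y[::2]
x_test, y_test = x[1::2], y[1::2]

def mse(predicted, observed):
    """The loss function being minimised: mean squared error."""
    return np.mean((predicted - observed) ** 2)

for degree in (1, 3, 12):
    coeffs = np.polyfit(x_train, y_train, degree)   # least-squares fit
    train_loss = mse(np.polyval(coeffs, x_train), y_train)
    test_loss = mse(np.polyval(coeffs, x_test), y_test)
    print(f"degree {degree:2d}: train loss {train_loss:.3f}, "
          f"test loss {test_loss:.3f}")

# Typical pattern: degree 1 under-fits (high loss everywhere), degree 3
# captures the underlying relationship, degree 12 over-fits (very low
# training loss but a worse held-out loss).
```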

From a forecasting perspective, my general experience is that time series forecasting models and the like (i.e. forecasting well into the future) are currently less developed than other applications of the technology. Whilst applications like large language models use predictive algorithms to generate text, when it comes to performing financial or business forecasts there is a risk of extrapolation error, i.e. the further out you go, the less accurate (or potentially more extreme) the predictions become.
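A toy sketch of extrapolation error: fit a simple linear trend to a couple of years of invented monthly history generated from a growth curve that levels off, then watch the forecast error grow with the horizon:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def underlying(t):
    """The 'real' process: growth that flattens off over time."""
    return 100 * (1 - np.exp(-t / 18))

# Two years of invented monthly history = underlying process + noise.
months = np.arange(24)
history = underlying(months) + rng.normal(0, 2, size=months.size)

# Fit a simple linear trend to the history.
slope, intercept = np.polyfit(months, history, 1)

# Compare forecasts at increasing horizons with the underlying process.
for horizon in (1, 6, 18):
    t = months[-1] + horizon
    forecast = slope * t + intercept
    print(f"{horizon:2d} months out: forecast {forecast:6.1f}, "
          f"underlying {underlying(t):6.1f}, "
          f"error {forecast - underlying(t):+6.1f}")

# The further out the horizon, the larger the error: the fitted trend
# keeps climbing while the real process flattens off.
```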

Extrapolation error in time series forecasting is not unique to data science models; clearly, all forecasting suffers from it. It’s more a case of managing user expectations: the output from these models simply provides a view of the future, and it’s unlikely ever to be the magic answer that tells you exactly how the future will pan out. You must view it in context, apply judgement and consider other relevant data and analysis when determining how much weight to apply to the model’s output in commercial decisions.

It’s clearly an area with a significant amount of potential; however, the value derived from these techniques depends heavily on the volume and quality of the data fed in, and they are not the best solution for every given problem. Again, it goes back to selecting the right tool for the job.


Wrapping up…


I’ve covered a number of areas across parts 1 and 2, and this has very much been a whistle-stop run through some of the topics, at no great depth. There is considerably more detail that could be explored in each area (and I haven’t even covered topics like process automation), but that’s for another day and another blog.

The range of things to consider could quickly become quite overwhelming, but that’s one of the key points. Do I think that, as a CFO, you need to do all of this? If it’s appropriate for your business, then maybe, yes.

But it won’t be the right answer for every company. There isn’t a one-size-fits-all approach when it comes to adopting a data strategy. You can mix and match different elements, investing more in some areas and less in others.

The key thing is knowing what will give you the biggest bang for your buck in the context of your budget. Getting trustworthy advice on what’s appropriate for your specific situation is essential, which is why choosing the right partner is so important.


Speak to the data pioneers today

If you want to explore more of what you can do with your data, whatever your stage of data maturity, then we have experts who can give you the right guidance to get you where you need to be. Get in touch with our team who will be happy to help.

Thanks for reading!