Send us your questions!
Hello all and welcome to a new regular feature from the ICAEW Excel Community where we endeavour to get 'Your Questions Answered'. In this feature, we will address the Excel questions and worries you send in, providing solutions and guidance on best practice in using Excel. Your community needs your questions. Please send as many as you can to us at:
The question
Is there a limit to the number of drivers you can have before overcomplicating the model? What would you do where there are too many? I'm thinking in terms of OPEX drivers.
Answer
I would like to draw upon a quotation attributed to Einstein: "Make everything as simple as possible, but not simpler". This principle applies very aptly to financial modelling, and I interpret it to mean simplifying things, but not to the point of losing value, ie, there's no sense in having detail, sophistication or complexity in a model which doesn't add anything for stakeholders. It's important to remember that the required level of detail will vary according to the use case, so an operational model will require more detail than a strategic model, for example.
When faced with a client or stakeholder who is aiming to add 'too much' detail, my first instinct would be to confirm the rationale with them – whether the detail adds value, has a material impact on outputs, or whether something could be simplified. If the detail is required, then there are ways of dealing with it.
In particular, when it comes to scenario managers there are several techniques which I would use to make them more manageable:
- Default values – I would set up scenario 1 as my "base case". The user selects an active scenario. In the left-hand column, which picks up values from the active scenario, I would write a formula which falls back to the base case value (ie, from the scenario 1 column) if the active scenario's column is blank on that row. This means that OPEX drivers, for example, which are unchanged across scenarios can be entered and maintained in the base case column only. That makes it easier for these common values to stay consistent (as the base case is used in most instances) and easier to see when one of the other scenarios has an alternative value.
- Outline Grouping – The use of outline grouping can hide groups of rows by OPEX section. A good tip here is to click on the Data tab of the ribbon, and then on the small icon in the lower right corner of the Outline section to call up the Outline settings. Once here, untick the "Summary rows below detail" box. You can now set up a section heading row and group/hide the rows below that heading and the [+] button in the margin will be adjacent to the section heading row.
- Nested Scenarios – This was covered in the webinar, but the basic idea is that some inputs (eg, time series) could be entered on a different page. They could be laid out as one row per scenario, or even as an input block per scenario with the blocks running down or across the page. Then there would be a row (or block) for the chosen scenario, which picks up data from one of the other rows (or blocks) using a CHOOSE function, with the index argument driven by the selection on the main scenario manager.
- Hyperlinks – The required OPEX drivers may be very numerous and arranging them into a single column per scenario could make the page quite tall. The use of a 'contents' list at the top and then hyperlinks to the relevant sections will make navigating the page much easier.
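The default-value technique above can be sketched in code. This is a minimal Python illustration of the fallback logic (the driver names and figures are hypothetical); in the model itself this would be an Excel formula on each row, testing whether the active scenario's cell is blank and otherwise taking the base case value.

```python
def active_value(base, override):
    """Fallback logic: use the active scenario's value if entered, else the base case."""
    return override if override is not None else base

# Hypothetical OPEX drivers: a full base case, plus sparse overrides for
# the active scenario (only the changed driver is entered).
base_case = {"rent": 100, "salaries": 250, "utilities": 40}
active_scenario = {"salaries": 275}

resolved = {k: active_value(v, active_scenario.get(k)) for k, v in base_case.items()}
print(resolved)  # {'rent': 100, 'salaries': 275, 'utilities': 40}
```

Only the overridden driver differs from the base case, which mirrors the benefit described above: common values live in one column, and genuine scenario differences stand out.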
In summary, Plan A should be to keep things as simple as possible, but where the project justifies it then more inputs can be handled and applying the techniques above will make them a lot more manageable!
The question
Is Excel still your go-to application for model building, especially for larger more complex modelling? If not, what are the best alternatives?
Answer
For me, yes, it's my go-to for 3-statement modelling in particular. Some of the so-called forecasting software on the market is really only good for budgeting a P&L – and most of it does a very good job of that – but it usually lacks the functionality to generate a properly integrated balance sheet and cash flow. Sometimes these solutions can work hand in hand with Excel, where the budgeting software outputs a base case P&L and Excel is then used to calculate the balance sheet and cash flow; however, some P&L charges are determined by the balance sheet (eg, depreciation, interest on loans etc) so it's not a perfect solution.
Excel has a lot of capability to handle data via Power Pivot and Power Query; however, it is still a spreadsheet application at the end of the day, and it's probably not the place to attempt hardcore database work. For models which require a large volume of transactional data, some form of dedicated data-processing software may be better suited to the pre-processing, with a summary output then imported as inputs to an Excel model.
I think the key is using the right tool for the job. Excel is great for bespoke calculations – it's ubiquitous, files can be shared quickly and easily, and most people have a basic understanding of how it works. For quick analysis, it's great; for more in-depth and complex models, it's still great in my opinion, but it does require the model builder to apply financial modelling best practice in order to minimise the risk of errors. The Financial Modelling Code sets out further best practice.
The question
Would you typically do any stochastic analysis on these (eg, a Monte Carlo scenario) looking across all scenarios?
Answer
A quick recap for those not familiar with the term – a Monte Carlo model is one where each input is defined not as a single hardcoded value but as a probability distribution (in effect, a random number whose parameters shape the values it produces). For example, rather than entering revenue at, say, £100m, you might define it as a normal distribution with a mean of £100m and a standard deviation of £10m. Each time the model is recalculated, a new value for revenue is generated according to the probability distribution – ie, in this example it will always be near £100m but will vary each time. Usually, multiple variables are driven by probabilities, so each recalculation produces different results as all of those inputs change. A macro, or sometimes an Excel plug-in, is used to recalculate the model hundreds or thousands of times, recording the results each time so that an overall average or 'expected value' can be calculated.
This approach is particularly helpful for valuation models, where the value of a company depends on how multiple factors develop – usually factors outside the company's control, eg, interest rates, level of sales etc – which can be represented as probability distributions. The overall valuation can then be calculated as the average across hundreds or thousands of trials, reflecting all those external factors which can't be controlled.
This is probably less relevant to the models which would use the type of scenario manager showcased in the webinar, as those are more likely to be strategic planning models where the inputs controlled by the scenario manager tend to be within the company's control. The purpose of a strategic model is to help management make the optimal choice – choice of project, choice of lender etc – and these are mutually exclusive decisions. So there is little value to management in calculating an average position; rather, they benefit from running the scenarios discretely and comparing the options.
Hopefully, that makes sense – the main point is that Monte Carlo analysis is best when you want to evaluate the effect of external factors, whereas the scenario manager in a strategic model is best when you want to compare discrete, mutually exclusive options which management controls.
The question
What are your thoughts on incorporating macros into models?
Answer
I think macros have their place, but for clarity I usually want them to do 'the minimum', with all calculations done on the sheet and only mechanical steps handled by the macro. This ensures that users of the model can easily understand what it does without having to review and understand the macro code. These are my four favourite uses for macros in a financial model:
- Copy and Paste – By defining a copy range and a paste range, a very short macro can be used to copy values from one area to the other. This is particularly useful for breaking circularities in things like corporation tax payments and debt sculpting. Note that care should be taken to re-run the macro whenever the model is updated, otherwise the pasted values will be stale.
- Goal Seek – Using goal seek to optimise an input in a model – flexing a revenue or cost input to achieve a required IRR, for example – can be very helpful. It can be made more user friendly by specifying the target cell (output of calculations), the goal value (the user entered target in a cell), and the cell to change all within a macro and connecting that to a button. This way a user needs only enter the target value and press the macro button.
- Batch Calculations – The idea here is to cycle through a table of input values with one set of inputs per row, perform some calculation using that one row of inputs, and save off the results before calculating for the next line of inputs. A simple example would be a home builder with a table of input data, eg, one row per building site, including start date, end date, average price per house, costs etc. The model is required to calculate the results of all the sites and store them somewhere. Rather than duplicating calculation blocks for all sites (which could lead to an overly large file), it may be most efficient to have a single calculation sheet which can calculate one site at a time. The sheet would be set up with a control cell into which a specific site code is entered. The sheet would look that site code up in the input data table, call up the corresponding values for that site, and then compute the financial results of that one site. A macro can then cycle through each of the possible site codes one at a time and store the results of each site in an output table which can be summarised later in the model.
- Freeze Scenarios (for when a data table is not enough) – Similar to the batch calculations method above, a macro could cycle through each scenario by changing the value in the control cell, calculate the results of the model (eg, certain KPIs from the dashboard) and then store them in a table or series of tables so that the scenario results can be compared.
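The batch-calculation pattern above can be sketched in Python. This is an illustration of the logic only (the site codes, figures and a simple houses-times-margin calculation are all hypothetical): one function stands in for the single calculation sheet keyed off the control cell, and a loop plays the role of the macro, cycling the control cell through each site code and storing the result.

```python
# Hypothetical input table: one row of inputs per building site.
site_inputs = {
    "SITE01": {"houses": 40, "price": 300.0, "cost": 220.0},
    "SITE02": {"houses": 25, "price": 350.0, "cost": 260.0},
}

def calculate_site(code):
    """Stands in for the calculation sheet: given the control cell's site code,
    look the inputs up and compute that one site's result (here, total margin)."""
    row = site_inputs[code]
    return row["houses"] * (row["price"] - row["cost"])

# The 'macro': cycle the control cell through every site code and
# store each result in an output table for later summarisation.
output_table = {code: calculate_site(code) for code in site_inputs}
print(output_table)  # {'SITE01': 3200.0, 'SITE02': 2250.0}
```

The Freeze Scenarios technique is the same loop with scenario numbers in place of site codes and dashboard KPIs in place of the site result.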
In conclusion, my advice would be: keep it simple! In all four cases above, I'm using the macro to do something a human could do – eg, change an input cell, copy/paste some values, then move on to the next input value. By keeping the macro's role minimal, the model is easier for users to understand and easier for the author to maintain.