We seem to have learned to live with spreadsheets, despite material errors apparently lurking in around 90% of them. Are there any lessons to be learnt from our ability to adapt to fallible spreadsheets that can help us deal with online misinformation?
Unexpected
Anyway, the substance of the argument related to an article I had written with the rather lurid title: “It’s not the catastrophic, career threatening, errors in Excel spreadsheets that really worry me, it’s Frankenstein…” In the article, I had tried to examine the apparent paradox between the oft-quoted statistic that 90% of all known spreadsheets contain material errors, and the fact that the use of spreadsheets remains so prevalent in business. My proposed explanation was that spreadsheets were often just one part of a combined process, stitching together a technological application and human judgement. So, although a spreadsheet might contain one or more material errors, its results are interpreted by someone who is aware of its limitations, and many errors therefore do not end up adversely affecting the decisions that are based, in part at least, on the spreadsheet. Of course, the danger comes when spreadsheets are relied upon directly without this element of human interpretation. This could also help explain why so many spreadsheet errors come to the fore when their principal users have left an organisation or even just gone on holiday.
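To give a purely hypothetical illustration of the kind of material error that human interpretation tends to catch, imagine a monthly summary whose total is calculated with a formula along the lines of:

=SUM(B2:B13)

If a fourteenth month is later added in row 14 but the range is never extended, the total silently omits the newest figure. The analyst who built the model has a feel for what the answer should be, notices that the total looks low and investigates; a colleague covering while they are on holiday is far more likely to take the figure at face value. (The formula and scenario here are invented for illustration, not drawn from any real case.)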
Do we need more misinformation?
To return to the main subject of the post, it is possible that there is a parallel between addressing the apparently ubiquitous errors in spreadsheets and countering misinformation on social media. Recent, and not so recent, events have demonstrated the potential influence of misinformation spread on social media. The natural response of many governments has been to demand more moderation of content and to issue threats of prosecution. The worry is that neither of these approaches seems likely to solve the problem. Given the current business model of many of the social media platforms, it’s a bit like politely asking drug dealers to make their drugs less addictive. Even with the will and massively increased resources, moderation is always likely to be imperfect and too late. The situation is bad enough as it is, but it could be even worse if the unimaginable happened and one or more of the platforms fell into the hands of a hostile nation state or an individual with their own political agenda.
If moderation really is doomed to fail, then pretending that it could work might only make the situation worse. Users could end up placing more reliance on social media content because they assume that there is some sort of adequate monitoring and moderation. The answer could be to make social media content seem less, rather than more, reliable. Instead of bolstering people’s belief in social media by holding out the false hope of effective moderation, governments could take the opposite approach: while still holding the platforms responsible for any criminality caused or amplified by their content, they could publicly acknowledge the impossibility of comprehensive moderation and try to increase general awareness of the unreliability of social media content.
Much has been said about the dangers of AI amplifying the quantity of false content and making it more convincing and more targeted. There is a chance that it will have the opposite effect, and that the sheer volume, and apparent realism, of AI-generated misinformation will destroy the credibility of all social media content.
We have had about 40 years to learn to live with spreadsheets, despite around 90% of them apparently containing material errors. By recognising their potential failings, with help from organisations like EuSpRIG, we have evolved our approach to become more sceptical about spreadsheet results and found a way to keep them useful and, in many cases, seemingly irreplaceable. This is not to suggest that serious spreadsheet-related errors don’t still occur – reading through the EuSpRIG spreadsheet ‘horror stories’ shows just how dangerous they can be.
Social media is a comparatively recent introduction, and it is possible that speeding up our evolution towards heightened scepticism might be a more effective tactic to combat the pernicious effects of misinformation than a doomed attempt to persuade the social media companies to improve the reliability of the content they disseminate.
As I mentioned in the ‘Frankenstein’ article, if something we based important decisions on proved to be misleading 90% of the time, the theory of evolution would ensure that we eventually stopped relying on it. Perhaps the same logic applies to social media.