AI: too good to be true when writing complex R&D reports?
Artificial Intelligence (AI) has become a permanent feature of today’s technological landscape, but unlike other tools that make our lives a little easier, AI has a mind of its own. As we come to rely more and more on this complicated and unpredictable assistant, it is essential that we understand what it can and can’t do. I asked ChatGPT-4o what it thought about AI writing factually accurate technical reports:
“The potential for errors, lack of new research, privacy concerns, accountability issues, and biases means that human experts need to review and supplement AI-generated content.”
We recently reviewed two reports, completed by another party, and it was evident from the outset that they had been written by AI. Not only was the content untrue, but the reports also predicted what the projects’ problems might be, rather than describing what they actually were!
The AI recommends getting experts to review the report, but most of us miss the other subtle messages lurking in this answer, and that is what makes working with artificial intelligence so difficult. Let’s hunt these invisible messages down.
As part of a gregarious and talkative species, we rely on body language, tone of voice, writing style and other subliminal clues to attribute meaning to the words we hear and read. When it comes to the highly technical and/or scientific uncertainties in an R&D tax credit report, we really need to view and assess most of the projects that are going to be written about to gain an accurate representation of the work involved. When the words are a little ambiguous, our brains instinctively fill in the details and apply corrections because, based on our shared experiences, we can interpret what the other person is trying to say.
When we talk to a machine, we naturally apply the same principles. Our minds interpret the professional style of an AI as competence on the part of the author, but there is no author. The confident delivery seems to confirm the accuracy of every fact, yet the AI itself admits that it lies. Our brains become befuddled by the hypnotic cadence of the delivery, and we just know that our new friend is infinitely patient, deeply understanding and supremely intelligent, but it isn’t.
Therefore, the honest answer should be “No, you should not use AI to write complex R&D technical reports.”
Why not? …
Because AI-generated reports may be convincingly inaccurate. In a scientific article released in May 2024, an organisation that supports subsistence agriculture in Africa lamented how AI had been misleading farmers into planting crops at the wrong time while encouraging them to use toxic and ineffective pesticides. Famine was imminent.
The consequences of relying on artificially fabricated information can be unexpected and profound. The taxpayer in a recent UK tribunal case cited convincing legal precedent in a capital gains tax dispute. The information was detailed and referenced real people and courts, but the tribunal noticed a small anomaly: an AI had made it all up. The judgement expressed deep concern about the potential reputational damage to courts, judges, and the legal system as a whole. The taxpayer claimed they had no idea that the information was wrong.
As the role of AI continues to expand in our daily lives, we need to be vigilant of the dangers hidden in AI’s seductive promises. ChatGPT-4o says we “need to review and supplement AI-generated content.” I wonder how much training it took for the AI to learn how to convince us, on the one hand, that we need to use AI-generated content, while on the other making it clear that any mistakes are our fault. That sounds suspiciously like a perfect business model, and we are the gullible consumers. Perhaps the next time you need a technical report, remember that even the AI admits you should bring in the professionals.
Momentum are experts in R&D Tax Credits, where technical and financial analysis is carried out by highly skilled, competent professionals… not bots!