The Risks of Relying on AI for Safety Advice: A Cautionary Tale About Overhead Line Clearances

I was pretty sure of the vertical clearances required between overhead lines and goalposts at a work site this week, but I thought I’d double-check with ChatGPT, you know, just to feel comfortable, like getting a reassuring hand on the shoulder.

What I didn’t expect was to have my confidence undermined by a series of incorrect responses!

In the conversation that followed, I quickly realised that ChatGPT’s answers weren’t just off; they were potentially dangerous. It became clear that when it comes to safety-related matters, especially those involving electricity, confidently delivered but incorrect responses can have serious consequences.

So, I opened up my laptop and started typing into ChatGPT. Below is my actual conversation, spelling mistakes and all:

Hmmmmm, I thought, I’m not sure that’s quite right – so I checked again:

Okay, that seems reasonable; an honest mistake, we all make them. So I’ll set my goalposts for 1m clearance. That’s pretty clear now, but maybe I’ll just check something else:

Hang on, so ChatGPT just ‘made up’ a 1m clearance distance from the live OHL and passed it off to me as the truth. That’s a bit worrying!

Still, it all looks okay now, and it’s given me some extra advice: I should have a 3m horizontal safety clearance and a 6m vertical clearance. That’s handy, or is it:

With my confidence shot to pieces and a trip back to GS6 and TS43-8 on the cards, I thought I’d get ChatGPT to work out where it had gone wrong itself, and it is nothing if not honest when it comes to a bit of self-reflection. Here’s what it said:

1. Misunderstanding the Recommended Height for Goalposts

In the initial response, I suggested that the recommended height for goalposts under live overhead electricity cables in the UK should be the same as the minimum clearance height for vehicle access—5.2 metres. This statement was misleading for several reasons:

  • Misinterpretation of Regulations: The response incorrectly implied that the goalpost height could match the clearance height of the overhead line, which would not ensure any safety buffer. The minimum clearance of 5.2 metres is for the lowest point of the overhead line to allow vehicle passage, not the height for any structures underneath it. This misunderstanding arose from a lack of contextual awareness and a failure to differentiate between line clearance requirements and structural height limitations. (There’s a rough sketch of this check just after this list.)
  • Algorithmic Limitation: The language model synthesised information based on general safety clearance data without accounting for specific practical applications, such as the installation of goalposts. This is a common limitation in algorithms that lack domain-specific knowledge and practical understanding of regulatory contexts.
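
To make that distinction concrete, here’s a rough sketch of the kind of check being described: the proposed structure height has to sit a buffer below the lowest point of the line, rather than matching the line’s own clearance figure. The function name and every number here are mine and purely illustrative; the actual buffer comes from GS6 / TS43-8 and a competent person for the specific line and voltage, not from this snippet (and certainly not from ChatGPT).

```python
# Purely illustrative, not safety guidance: the figures and the
# required_buffer_m value are placeholders, not values from GS6 / TS43-8.

def goalpost_height_ok(line_height_m: float,
                       goalpost_height_m: float,
                       required_buffer_m: float) -> bool:
    """True if the proposed goalpost height leaves the required buffer
    below the lowest point of the overhead line."""
    return goalpost_height_m <= line_height_m - required_buffer_m

# The error described above: treating the 5.2m vehicle-access clearance
# figure as a goalpost height leaves no buffer below the line at all.
print(goalpost_height_ok(line_height_m=5.2, goalpost_height_m=5.2,
                         required_buffer_m=1.0))   # False - no safety margin
print(goalpost_height_ok(line_height_m=5.2, goalpost_height_m=4.0,
                         required_buffer_m=1.0))   # True - buffer maintained
```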

2. Incorrect Application of a Hypothetical Example

In a subsequent response, I gave a hypothetical example suggesting that goalposts should be kept at least 1 metre below the line. This was not based on any official guidance but was rather an attempt to convey the general principle of maintaining clearance:

  • Presentation of Hypothetical as Fact: The error here was presenting an illustrative example as if it were a prescribed regulation, which could easily be misinterpreted as an official standard.
  • Overconfidence in Information: This mistake was compounded by the confidence with which the information was presented. Algorithms like this often generate responses with a high degree of certainty, even when the information lacks accuracy or context. This can lead to users being misled if they are not already familiar with the subject matter.

3. Misrepresentation of GS6 Guidelines

The discussion also incorrectly characterised the GS6 guidelines, stating that there was a 6-metre vertical clearance requirement when in fact GS6 specifies a horizontal clearance of 6 metres to avoid accidental contact with overhead lines:

  • Confusion Between Horizontal and Vertical Guidelines: This confusion arose from the algorithm’s failure to distinguish between different types of clearance — vertical versus horizontal. In safety-critical applications, such distinctions are vital, as they determine safe operating distances to prevent accidents.
  • Algorithmic Generalisation: The model generalised clearance distances without distinguishing the specific direction and context of these guidelines, demonstrating how language models may blur the nuanced details necessary for accurate application in specialised fields.

The Dangers of Misinformation in Safety-Critical Contexts

These inaccuracies illustrate a broader issue: the potential risks of relying on language models for safety-critical information. When dealing with subjects where human safety is at stake, inaccuracies and overgeneralisation can have severe consequences, potentially leading to life-threatening situations if acted upon without verification by a competent person.

  • Risk of Overconfidence: One of the inherent risks in using language models for technical or safety-critical advice is their tendency to deliver responses with a high level of apparent confidence, regardless of the underlying accuracy. This can be misleading, especially for users who may not have the expertise to critically assess the information provided.
  • The Need for Expert Review: These errors underscore the necessity for expert oversight and review when using language models for safety-related topics. While such models can be useful for general writing assistance, spelling and grammar checks, or proofreading, they should not be relied upon for decision-making in contexts where accuracy is paramount.

Language models like ChatGPT have their benefits. They’re incredibly useful for tasks like spelling and grammar checks, editing, and even helping to draft documents. These tools can streamline communication, make writing more efficient, and catch errors that might otherwise slip through.

However, my experience highlights the dangers of relying on AI for safety-critical advice.

I’m raising this issue because we’re seeing more evidence of language models being used in daily communications and even technical documents. While they have their place, they should never be seen as a replacement for a competent professional, especially in situations where accuracy and safety are paramount.

AI can provide a quick reference, but it’s no substitute for the knowledge and judgement that come from training and experience. In safety-critical fields, always ensure that information is reviewed and validated by an expert to avoid potentially dangerous mistakes.

After all, you wouldn’t trust a spellchecker to build a bridge, so why trust an AI with safety?