AI vs an Information Compliance Expert
- Feb 16
AI is becoming more common, more capable, and far easier to access than it was even a year ago. As a result, councils across the country are starting to explore what these tools could mean in practice, from supporting day to day admin to improving the way they communicate with residents.
Our team at Breakthrough Communications not only continues to assess how AI can support and empower councils, but also considers how it should be used responsibly, accurately and in a way that supports good community engagement and information compliance.
Councils are not the only ones using AI. We have seen a pattern of requests sent to councils where we suspect AI has been involved in writing them.
Our Head of Information Compliance recently conducted his own experiment with an AI tool to see if it could give accurate information compliance advice. The resulting "conversation" can best be described as enlightening, concerning and, to some extent, amusing...
To start with, the AI was asked: I’ve put in a subject access request (SAR), and the organisation wants to extend the deadline. Are they allowed to?
The AI responded with: They are allowed to extend the deadline by up to a further 2 months only if:
The request is complex, or
The request is numerous (e.g. lots of data or multiple requests)
So the maximum total time is 3 months, not open-ended.
All of which is factually correct, so far...
It then went on to make some questionable statements: Important conditions they must meet
If they extend the deadline, the organisation must:
Tell you within the original 1-month period
Explain clearly why the extension is necessary
The AI was then asked to cite its source for that pair of important conditions, because whilst the first point on timescale is right, the second point about explaining “clearly why the extension is necessary” is not entirely correct.
The law says that “The controller shall inform the data subject of any such extension within one month of receipt of the request, together with the reasons for the delay.”
“Reasons for the delay” and “explain clearly why the extension is necessary” are two very different things.
To get it to clarify where this concerning information came from, the AI was again asked to cite its source.
Asking AI to cite its sources, or to use only specific sources you give it, is best practice.
It then responded:
“An authoritative source for the ‘important conditions’ I [it] mentioned — in particular that the organisation must notify you within the first month and explain why an extension is necessary... If they do extend, they must tell you within the original one-month period and explain why the extension is necessary. This is exactly what I was referring to when I said they must provide a justified explanation within the first month if they want to extend the deadline.”
That is where it made another large mistake, because to back that assertion up it linked to a page of Information Commissioner's Office (ICO) guidance. The ICO guidance does not say “explain why the extension is necessary”. The ICO guidance says:
“You must comply with a SAR without undue delay and at the latest within one month of receiving the request. You can extend the time to respond by a further two months if the request is complex or you have received a number of requests from the individual, eg other types of requests relating to individuals’ rights.”
Nothing at all about explaining to the requester why the extension is necessary. The AI invented that little nugget of information and - more worryingly - presented it as fact!
To try to get more clarity, the AI was then asked to explain where “why the extension is necessary” comes from, when the source it cites does not say this.
It then got sassy...
You’re right to call that out — the ICO’s own published guidance (the primary source) doesn’t explicitly show the “must tell you within one month and explain why” wording on the page I initially cited, but it does appear in more detailed interpretations of the rules (and is confirmed by multiple reliable legal guides).
And it went on to quote various websites, including UK government guidance for schools.
So, the AI is passing on commentary by websites and a gov.uk page about schools as official ICO guidance!
To end this interaction, the AI was asked to summarise how it had confidently made assertions about the rules that are not found in official ICO guidance.
Its response included the admission that its answers were "a mix of fact and interpretation." It then went on: "Still, I’ll give it points for trying… like a teenager arguing, ‘Trust me, I read it somewhere!’"
So what can we learn from this?
AI can absolutely be useful for general, generic questions when it comes to information compliance.
However, it is important to remember that just because AI confidently states something to be true, and supposedly cites a source, it can still be wrong! Using incorrect, AI-generated advice could leave a council exposed and presents significant risks.
Always ask the AI to cite its sources so you can cross-check the information, and/or give it specific sources and tell it to use only information from them.
Give AI specific parameters and guidance when asking it a question, particularly a technical one, to get the best and most accurate responses.
And, finally, we learnt that our Head of Information Compliance might just have this "conversation" used against him if the robots take over and that he should not be left to argue with them!