AI-generated complaints: responding without losing control


A growing problem for solicitors

Complaints have always been part of legal practice, and there are clear rules and guidance on how they should be addressed and progressed. Now, however, the very nature of the complaints firms receive is changing as a result of AI-generated complaints.

Over the years, solicitors have become used to receiving complaints that are angry, emotional or poorly expressed, and have responded to them in a fair and proportionate way. Increasingly, however, they are receiving long, unfocused, vague and often aggressive complaints, many of which have clearly been generated using artificial intelligence tools. In many cases these complaints are not about a failing in service at all: they are being used to pressure firms into reducing their fees rather than having to deal with the sheer length of the document.

These complaints can run to many pages, recycle generic regulatory language, make sweeping allegations with little factual connection to the retainer and, in some cases, even include the AI’s own prompts or drafting instructions that were never meant to be seen by the firm.

For many practices, this has become an operational and regulatory headache. Complaints teams are spending disproportionate amounts of time trying to decipher what is actually being complained about. Fee earners feel pressured to respond to every sentence for fear of regulatory criticism, and firms worry that any attempt to push back will be seen as dismissive or non-compliant.

Why AI-generated complaints are different

Traditional complaints, even when badly written or emotionally charged, usually contain a recognisable core. The client has experienced something, formed a view about it, and is trying to articulate that view. AI-generated complaints often lack that anchor. They tend to be generic, formulaic and detached from the specific facts of the matter.

They may cite regulatory provisions, cases or guidance that sound impressive but have little or no relevance. They may repeat the same allegation in multiple ways, contradict themselves, or mix service complaints, cost complaints and regulatory accusations into a single incoherent narrative. This makes them far harder to assess fairly and proportionately.

What are firms actually required to do?

None of this removes a firm’s obligation to take complaints seriously. However, it does raise an important question about proportionality. To what extent is it reasonable to expect solicitors to engage line-by-line with thousands of words of machine-generated text in order to extract potential issues?

The regulatory framework does not require unlimited effort. The SRA Code of Conduct requires firms to have an effective complaints process and to deal with complaints promptly and fairly. The Legal Ombudsman’s guidance focuses on understanding the complaint and helping clients clarify their concerns where necessary. The Provision of Services Regulations 2009 require best efforts to resolve disputes but expressly exclude vexatious complaints.

What none of these require is exhaustive engagement with incoherent, repetitive and sometimes abusive material.

Proportionality and fairness

Faced with an AI-generated complaint, firms often fall into one of two traps. Some over-engage, producing lengthy defensive responses that attempt to rebut every point. Others disengage entirely, risking criticism for not taking the complaint seriously.

A more defensible approach is structured and proportionate. Firms are entitled to distil a complaint into its substantive issues and respond to those issues rather than to every sentence. They are also entitled to explain that generic, repetitive or irrelevant material has not been addressed separately.

This approach aligns with regulatory expectations and protects firms from unnecessary exposure.

Asking for clarification is not obstruction

Where a complaint is genuinely unclear, firms are entitled to ask the complainant to clarify what they are actually complaining about before a meaningful investigation can take place. This is not obstruction. It is often the only way to ensure fairness on both sides.

Investigating a complaint without knowing what the complaint is risks misunderstanding the client’s concerns and creating avoidable regulatory risk. Timescales can legitimately be paused while clarification is sought, provided this is communicated clearly and recorded properly.

Dealing with fictitious authorities and “AI hallucinations”

AI-generated complaints frequently include fictitious cases, misquoted legislation or guidance taken wildly out of context. This can be intimidating, particularly for junior complaints handlers, but firms are not required to research or rebut hallucinated authorities.

The complaints process is not an academic exercise. The relevant question is whether the firm’s service fell below reasonable standards in the context of the retainer. Where authorities are clearly irrelevant or incorrect, it is sufficient to say so and to focus on the actual facts and professional obligations that apply.

Tone, aggression and setting boundaries

Many AI-generated complaints are unnecessarily aggressive or threatening, often reflecting the way the AI prompt was framed rather than the client’s true intentions. Firms are not required to tolerate abusive correspondence.

It is legitimate to set boundaries around tone, volume and repetition, and to explain that further correspondence must be focused and relevant if it is to receive a response. Doing so is not a breach of complaints obligations; it is part of managing the process safely and fairly.

The risk of over-engagement

Over-engaging with AI-generated complaints carries its own risks. Lengthy responses can legitimise irrelevant points, create new lines of dispute and increase the likelihood of escalation.

What regulators and ombudsmen look for is not volume, but reasonableness. Firms should be able to demonstrate that they identified the substantive issues, considered them properly, explained their conclusions clearly and signposted escalation routes where appropriate.

Financial pressure and tactical complaints

Firms are increasingly seeing AI-generated complaints used as a tactical tool, particularly to exert pressure for fee reductions or fee waivers. AI makes it easy to produce documents designed to overwhelm rather than to inform.

Motivation does not remove a firm’s obligation to consider genuine service issues, but nor does it require firms to allow the complaints process to be used indefinitely as leverage. Where matters have been addressed, boundaries can and should be set.

Vexatious complaints: substance over labels

The concept of vexatiousness should be used carefully. Regulators care far more about evidence and proportionality than labels. Firms should focus on behaviour, repetition, lack of substance and unreasonable demands, and record their reasoning neutrally.

A robust approach means setting limits, not being dismissive.

Data protection complaints and AI-generated content

AI-generated complaints often include generic references to GDPR breaches without identifying any specific data issue. The existence of the Data (Use and Access) Act 2025 does not change the basic principle that a complaint must identify an intelligible concern.

Firms should distinguish between service complaints and genuine data protection complaints, ask for clarification where needed, and focus on actual data processing rather than generic assertions. Spurious data allegations do not require exhaustive responses.

Documentation and defensibility

Internal documentation is critical. Firms should be able to evidence how complaints were assessed, why clarification was sought, how irrelevant material was handled and why the complaint was closed.

If challenged, the question will not be whether the firm responded to every word, but whether it acted reasonably, transparently and in good faith.

What firms should be doing now

Firms that manage AI-generated complaints best treat this as a systems issue rather than an individual frustration. They update complaints policies to allow summarisation of excessively long complaints, train staff to recognise AI-generated features, and use clear templates to request clarification and set boundaries.

Above all, they remain calm, structured and confident.

Final thoughts

AI-generated complaints are not going away. Firms that try to respond by endurance alone will struggle. Firms that respond with clarity, proportionality and confidence will be far better placed to protect themselves while treating clients fairly.

Complaints handling has never been about appeasing the loudest voice. It has always been about fairness, transparency and reasonableness. Those principles matter more than ever in an age where anyone can generate thousands of words at the click of a button.

Infolegal is assisting its subscribers with this issue by providing guidance on how to handle such complaints, how to make staff aware of the problems they raise and, most importantly, how to respond in a reasonable and measured way. The Infolegal InfoHub, for example, contains draft responses and a method for recording vexatious complaints that supports the transparency of the firm's processes.
