
Use of artificial intelligence (AI) at TPR

FOI reference: FOI-398

Date: 17 July 2025


Request

"I am writing to request information on usage of artificial intelligence (AI) within TPR. Specifically, I would like to know:

a) How is TPR utilising AI to deliver its key aims as a regulator and also to help with the government's aim of boosting growth and the economy?

b) What risks has TPR identified in its usage of AI, and how is it mitigating against these?

c) Is TPR using any LLMs for its activities (such as ChatGPT or Microsoft CoPilot), and how is it managing the risk of using these?

d) Is TPR using AI, LLMs or automation in any areas not mentioned above, and how is it managing the risk of using these?"

On 28 August 2025, we requested further clarification regarding question d). Your response on 2 October 2025 clarified the following:

“I am looking for information about any automation involving AI, such as using AI workflows, chatbots, administration processes, data collection, generating content. It would be great to also have some examples of these please. And for the other questions, I would like the AI/LLMs to be referred to jointly…"

Response

I confirm that we hold some of the information you have requested.

Point A

I can confirm that we do not hold the information you have requested in point A. This is because, at present, we do not record the use of AI systems against the specific criteria outlined in your request.

Point B

TPR has identified a number of general risks associated with the use (and non-use) of AI technologies. These risks fall into five key categories, which are outlined below:

  1. Adoption of AI without clear guardrails:
    • Inappropriate data handling.
    • Regulatory violations.
    • Financial and reputational impacts.
    • Legal and ethical impacts.
  2. Not adopting AI:
    • Disadvantage compared to other departments and partner organisations.
    • Inefficiencies.
    • Reputational risks.
  3. Lack of guidance on AI implementation in the pensions industry:
    • Market disparities.
    • Reduced value for savers.
    • Increased cybersecurity risks.
  4. High energy consumption and supply chain dependencies:
    • Environmental impact.
  5. Over-reliance on AI:
    • Erosion of internal capabilities.
    • Reduced capacity to address nuanced regulatory issues.

Given the rapidly developing nature of this technology, TPR remains at the early stages of assessing and implementing it. This includes, where possible, developing controls to mitigate the risks above.

TPR is aligning with the Government’s AI Playbook and the principles outlined within it.

Point C

TPR has authorised the limited and controlled use of the following LLMs to support some of its activities:

Microsoft 365 Copilot and Copilot Chat (LLM)

Risk area: Access control
Mitigation measures:
  • Access review procedure developed and to be implemented to minimise the risk of inappropriate access
  • Guidance provided on what to do if presented with something unexpected

Risk area: Classification and labelling
Mitigation measures:
  • User guidance updated to reflect the need to review source material
  • AI policy to be added to the Terms of Service section of Azure
  • Training session provided and route for feedback clarified

Risk area: Governance
Mitigation measures:
  • Action taken by AI Specialist to escalate the need for responsibility to be clearly assigned and processes developed
  • Not a high or critical risk during the trial period, but will require resolution in the near term prior to wider rollout (risk to be added to Target for tracking)
  • Data Governance team to update the AI policy to reflect the trial and learnings

Risk area: Configuration
Mitigation measures:
  • Configurations to be managed by the Tech team and, as agreed with the AI Specialist, grounding to be disabled (reducing the risk of data moving outside of the TPR estate)

Risk area: Potential for discrimination
Mitigation measures:
  • Data team to explore likelihood of risk and manage with People team support as appropriate

Risk area: False positives from Purview flags; no calibration of Purview Sensitive Information Types (SITs)
Mitigation measures:
  • Purpose-based validity rating of information

Risk area: Derived data risk
Mitigation measures:
  • Systematic access control
  • Sensitivity/role-based use case
  • Labelling

Risk area: Propagated errors in code
Mitigation measures:
  • Training: understanding how to apply prompts, with caution around use of incorrect data and hallucinations

Risk area: Password exposure
Mitigation measures:
  • Implement a password manager

You should note that we are currently trialling the use of M365 Copilot with a small subset of our workforce to further develop our understanding of the risks and opportunities these tools present.

Point D

TPR has authorised the limited and controlled use of the following AI-based tools (excluding the LLMs listed in point C) to support some of its activities:

No tool-specific risks or mitigation measures were identified for any of the tools below (see the note that follows):

  • Machine Learning – “AutoCal”
  • Natural Language Processing – “Web Feedback”
  • Natural Language Processing – “Event Scanning Tool”
  • Natural Language Processing – “Scam Website Detector”
  • Machine Learning – “LDI Resilience”
  • Natural Language Processing – “CSAT Thematic Coding Model”

Please note that these AI-based applications were not considered to present any significant risks unique to the individual tool (ie outside of the general risks in point B). Therefore, no risks or associated mitigations are listed.
