Leave us your email address and we'll send you all the new jobs according to your preferences.

Cybersecurity Researcher

Posted 2 days 6 hours ago by AI Security Institute

£65,000 - £145,000 Annual
Permanent
Full Time
Research Jobs
London, United Kingdom
Job Description
The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We're in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally.

We're here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.

About the AI Security Institute The UK AI Security Institute is the world's largest and best-funded team dedicated to understanding the capabilities and impacts of advanced AI and developing practical risk mitigations. We're in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally.

About the Team The Cyber and Autonomous Systems Team (CAST) is looking to research and map the evolving frontier of AI capabilities and propensities to inform critical security decisions that reduce loss-of-control risks from frontier AI. We focus on preventing harms from high impact cybersecurity capabilities and highly capable autonomous AI systems.

About the Role This is a cybersecurity engineer position focused on building environments and challenges to benchmark the cyber capabilities of AI systems. You'll design cyber ranges, CTF style tasks, and evaluation infrastructure that allows us to rigorously measure how well frontier AI models perform on real world cybersecurity tasks.

This work belongs inside UK government because understanding AI cyber capabilities is critical to national security, and robust empirical testing requires coordination across government, industry, and international partners to inform policy decisions on AI safety.

You'll work closely with research engineers, infrastructure engineers, and machine learning researchers across AISI. As a small, fast moving team building first of its kind evaluation infrastructure, you'll be able to influence research directions, own whole pieces of work, and bring your ideas to the table.

Core Responsibilities
  • Evaluation Design & Development (60%)
    • Design cyber ranges and CTF style challenges for automatically grading AI system performance on cybersecurity tasks
    • Build agentic scaffolding to evaluate frontier models, equipping them with tools such as network packet capture utilities, penetration testing frameworks, and reverse engineering/disassembly tools
    • Design metrics and interpret results of cyber capability evaluations
  • Work alongside other engineers to ensure evaluation environments are robust and scalable
  • Research & Communication (10%)
    • Write reports, research papers and blog posts to share findings with stakeholders
    • Keep up-to-date with related research taking place in other organisations
    • Contribute to AISI's broader understanding of AI cyber risks
Example Projects
  • Onboard and integrate new cyber ranges into our evaluation pipeline
  • Conduct agent research to improve the cyber capabilities of our agents
  • Improve grading and scoring methodologies for automated evaluation tasks
  • Integrate defensive telemetry and simulated users into ranges to increase their realism
  • Collaborate with government partners on joint research publications
Impact Your work will directly shape the UK government's understanding of AI cyber capabilities, inform safety standards for frontier AI systems, and contribute to the global effort to develop rigorous evaluation methodologies. The evaluations you build will help determine how advanced AI systems are assessed before deployment.

What we are looking for We're flexible on the exact profile and expect successful candidates will meet many (but not necessarily all) of the criteria below.
  • Strong Python skills with experience writing scripts for automation or security tooling
  • Proven experience in at least one of the following areas of cybersecurity red team ing:
    • Penetration testing
    • Cyber range design
    • Competing in or designing CTFs
    • Developing automated security testing tools
    • Bug bounties, vulnerability research, or exploit discovery and patching
  • Strong interest in helping improve the safety of AI systems
Preferred
  • Familiarity with virtualisation technologies such as Proxmox VE and infrastructure as code approaches to enable reproducible test environments to be rapidly spun up for testing
  • Ability to communicate the outcomes of cybersecurity research to a range of technical and non technical audiences
  • Familiarity with cybersecurity tools such as network packet capture utilities, penetration testing frameworks, and reverse engineering/disassembly tools
  • Active in the cybersecurity community with a track record of keeping up to date with new research
  • Previous experience building or measuring the impact of automation tools on cyber red team ing workflows
Example backgrounds
  • Penetration tester with 1+ years experience; has designed CTF challenges or cyber ranges; strong Python skills; interested in AI safety
  • Content engineer at a cybersecurity training platform; experienced in building vulnerable machines, CTF challenges, and automated deployment infrastructure
  • Security researcher with experience in vulnerability research or bug bounties; familiar with penetration testing frameworks and reverse engineering tools; has communicated findings to mixed audiences
Core requirements
  • This is a full time role.
  • You should be able to join us for at least 24 months.
  • You should be able work from our office in London (Whitehall) for several days each week, but we provide flexibility for remote work.
  • We would like candidates to be able to start in Q2 2026.
What We Offer Impact you couldn't have anywhere else
  • Incredibly talented, mission driven and supportive colleagues.
  • Direct influence on how frontier AI is governed and deployed globally.
  • Work with the Prime Minister's AI Advisor and leading AI companies.
  • Opportunity to shape the first & best resourced public interest research team focused on AI security.
Resources & Access
  • Pre release access to multiple frontier models and ample compute.
  • Extensive operational support so you can focus on research and ship quickly.
  • Work with experts across national security, policy, AI research and adjacent sciences.
  • If you're talented and driven, you'll own important problems early.
  • 5 days off learning and development, annual stipends for learning and development and funding for conferences and external collaborations.
  • Freedom to pursue research bets without product pressure.
  • Opportunities to publish and collaborate externally.
Life & Family
  • Modern central London office (cafes, food court, gym) or option to work in similar government offices in Birmingham, Cardiff, Darlington, Edinburgh, Salford or Bristol.
  • Hybrid working, flexibility for occasional remote work abroad and stipends for work from home equipment.
  • At least 25 days' annual leave, 8 public holidays, extra team wide breaks and 3 days off for volunteering.
  • Generous paid parental leave (36 weeks of UK statutory leave shared between parents + 3 extra paid weeks + option for additional unpaid time).
  • On top of your salary, we contribute 28.97% of your base salary to your pension.
  • Discounts and benefits for cycling to work, donations and retail/gyms.
These benefits apply to direct employees. Benefits may differ for individuals joining through other employment arrangements such as secondments.

Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000 made up of a base salary plus a technical allowance (take home salary = base + technical allowance). An additional 28.97% employer pension contribution is paid on the base salary.

This role sits outside of the DDaT pay framework given the scope of this role requires in depth technical expertise in frontier AI safety, robustness and advanced AI architectures.

Selection Process In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.

The interview process may vary candidate to candidate, however, you should expect a typical process to include some technical proficiency tests, discussions with a cross section of our team at AISI (including non technical staff), conversations with your team lead. The process will culminate in a conversation with members of the senior team here at AISI.

Candidates should expect to go through some or all of the following stages once an application has been submitted:
  • Initial interview
  • Technical take home test
  • Second interview and review of take home test
  • Third interview
  • Final interview with members of the senior team
Additional Information The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives the details from participating government organisations of civil servants who have been dismissed . click apply for full job details
Email this Job