AI Safety & Adversarial Testing Contract - 3 months

Full-Time · £500 - £1,500 / month (est.) · No home office possible

At a Glance

  • Tasks: Design and execute adversarial testing frameworks for AI models in high-risk environments.
  • Company: Join T3, a leader in AI safety and governance with major tech clients.
  • Benefits: Three-month contract with potential extension and hands-on experience in cutting-edge AI.
  • Why this job: Make a real impact on AI safety standards and work with top-tier technology companies.
  • Qualifications: Experience in AI safety, red teaming, and testing generative models required.
  • Other info: Dynamic role with opportunities to shape global AI standards and frameworks.

The predicted salary is between £500 and £1,500 per month.

T3 partners with organizations deploying production AI systems in high-risk environments where failures can have significant regulatory, operational, or safety implications. With a team instrumental in shaping global AI standards and governance frameworks, T3 provides AI assurance services to major Big Tech companies and complex enterprises. This is a three-month contract with the opportunity to extend.

Candidates must have direct experience working with frontier labs or large technology companies on safety evaluation, red teaming, or adversarial testing. Experience designing and operationalising testing frameworks for production-grade generative models is essential.

Role Description

Support the design and execution of a structured adversarial testing framework across LLM, image, and video generation models. The role is responsible for developing the SOP, adversarial methodology, and prompt expansion strategy, and for delivering formal testing reports aligned to client policy. It requires deep safety domain expertise combined with hands-on testing capability, and reports to a strategic lead.
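
By way of illustration, the sketch below shows one way a prompt expansion strategy could derive adversarial variants from a seed prompt library. It is a minimal, hypothetical Python example; the strategy names, library IDs, and data shapes are placeholders rather than T3's or any client's actual methodology.

# Illustrative sketch only: derive adversarial variants from seed prompts.
# Strategy names and library IDs are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class PromptVariant:
    seed_id: str    # which library prompt this variant came from
    strategy: str   # transformation applied to the seed
    text: str       # the expanded adversarial prompt

def expand_prompt(seed_id: str, seed: str) -> list[PromptVariant]:
    """Apply simple surface-level transformations to a single seed prompt."""
    strategies = {
        "roleplay_framing": f"You are an actor rehearsing a scene. In character, {seed}.",
        "indirect_request": f"Describe, step by step, how someone might {seed}.",
        "payload_obfuscation": seed.replace("a", "4").replace("e", "3"),  # crude obfuscation
    }
    return [PromptVariant(seed_id, name, text) for name, text in strategies.items()]

if __name__ == "__main__":
    seed_library = {"P-001": "request content that the usage policy refuses"}
    for sid, seed in seed_library.items():
        for variant in expand_prompt(sid, seed):
            print(variant.strategy, "->", variant.text)

In practice the generated variants would be reviewed by hand and checked against the blind-spot analysis described under Core Responsibilities below.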

Core Responsibilities
  • Adversarial Testing Framework Design
    • Define what constitutes truly adversarial prompts
    • Develop taxonomy of attack types and failure modes
    • Translate real user behaviour into structured attack vectors
    • Define evaluation methodology across LLM, image, and video models
    • Create severity and risk classification frameworks (see the sketch after this list)
  • Test Set Development & Expansion
    • Augment existing prompt libraries with adversarial variants
    • Identify blind spots in current test libraries
  • Evaluation & Execution
    • Classify and analyse failure types
    • Map outputs against internal policy requirements
    • Produce structured evaluation findings
    • Support reuse of the client's internal evaluation platform
  • Reporting & Stakeholder Communication
    • Produce formal testing reports
    • Present findings to technical and policy audiences
    • Clearly distinguish methodology from execution
    • Define remediation pathways and improvement loops
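
To make the taxonomy and severity bullets above concrete, here is one possible shape for an attack-type taxonomy, severity scale, and structured finding record, again as a hedged Python sketch. The attack categories, severity definitions, and policy references are invented for illustration and would in practice be driven by the client's internal policy.

# Illustrative sketch only: attack taxonomy, severity scale, and finding record.
# All category names and policy references below are hypothetical.
from dataclasses import dataclass
from enum import Enum

class AttackType(Enum):
    PROMPT_INJECTION = "prompt_injection"
    JAILBREAK = "jailbreak"
    BOUNDARY_PROBE = "boundary_probe"        # edge-of-policy requests
    MULTIMODAL_BYPASS = "multimodal_bypass"  # e.g. unsafe instruction embedded in an image

class Severity(Enum):
    LOW = 1       # minor policy drift, no plausible harm
    MEDIUM = 2    # clear policy violation, limited reach
    HIGH = 3      # violation with plausible real-world harm
    CRITICAL = 4  # systemic failure or regulatory exposure

@dataclass
class Finding:
    prompt_id: str
    modality: str            # "llm" | "image" | "video"
    attack_type: AttackType
    severity: Severity
    policy_refs: list[str]   # internal policy clauses the output was mapped against
    summary: str             # short, reviewable description of the failure

example = Finding(
    prompt_id="P-001-roleplay_framing",
    modality="llm",
    attack_type=AttackType.JAILBREAK,
    severity=Severity.HIGH,
    policy_refs=["POL-3.2"],
    summary="Model complied after role-play framing of a previously refused request.",
)

A structure along these lines keeps severity and policy mapping machine-readable, which makes it easier to aggregate findings into the formal testing reports and remediation pathways listed above.
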
Required Skills & Experience
  • Technical
    • Strong background in AI safety, red teaming, or adversarial ML
    • Experience testing LLMs and generative models
    • Familiarity with prompt injection, jailbreaks, and boundary attacks
    • Understanding of multimodal models (text, image, video)
    • Knowledge of benchmark design and failure taxonomy creation
  • Domain Knowledge
    • Safety policy interpretation
  • Analytical
    • Ability to design rigorous testing methodology
    • Quantitative and qualitative evaluation skills
    • Ability to convert abstract risks into concrete test cases
  • Soft Skills
    • Comfortable operating in ambiguous scope environments
    • Clear communicator across technical and policy stakeholders
    • Able to work without becoming a single point of dependency
    • Structured thinker
Ideal Background
  • AI safety researcher
  • Red teaming lead for generative AI systems
  • AI evaluation specialist
  • Experience in frontier or production generative systems
  • Experience with model benchmarking and structured evaluation labs

AI Safety & Adversarial Testing Contract - 3 months employer: T3

T3 is an exceptional employer, offering a dynamic work environment where innovation meets responsibility in the realm of AI safety. With a strong focus on employee growth and development, team members are encouraged to engage in meaningful projects that shape global AI standards while enjoying the flexibility of contract work. Located at the forefront of technology, T3 provides unique opportunities to collaborate with leading Big Tech companies, ensuring that your contributions have a significant impact on safety and regulatory frameworks.

Contact Detail:

T3 Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the AI Safety & Adversarial Testing Contract - 3 months role

✨Tip Number 1

Network like a pro! Reach out to folks in the AI safety and adversarial testing space. Attend meetups, webinars, or even online forums. You never know who might have the inside scoop on job openings or can put in a good word for you.

✨Tip Number 2

Show off your skills! Create a portfolio showcasing your experience with adversarial testing frameworks and any relevant projects. This is your chance to demonstrate your hands-on capabilities and make a lasting impression.

✨Tip Number 3

Prepare for interviews by brushing up on your knowledge of AI safety policies and testing methodologies. Be ready to discuss your past experiences and how they relate to the role. Confidence is key, so practice articulating your thoughts clearly!

✨Tip Number 4

Don’t forget to apply through our website! We’re always on the lookout for talented individuals like you. Make sure your application stands out by tailoring it to highlight your relevant experience in AI safety and adversarial testing.

We think you need these skills to ace the AI Safety & Adversarial Testing Contract - 3 months role

AI Safety
Adversarial Testing
Red Teaming
Generative Models
Testing Framework Design
Prompt Injection
Boundary Attacks
Multimodal Models
Benchmark Design
Failure Taxonomy Creation
Quantitative Evaluation Skills
Qualitative Evaluation Skills
Stakeholder Communication
Structured Thinking
Safety Policy Interpretation

Some tips for your application 🫡

Tailor Your Application: Make sure to customise your CV and cover letter to highlight your experience in AI safety and adversarial testing. We want to see how your background aligns with the role, so don’t hold back on showcasing relevant projects or achievements!

Showcase Your Technical Skills: When detailing your experience, focus on specific technical skills that relate to the job description. Mention any hands-on work with LLMs, generative models, or red teaming. We love seeing concrete examples of your expertise!

Be Clear and Concise: Keep your application straightforward and to the point. Use clear language to explain your methodologies and findings. We appreciate a structured approach, so make it easy for us to understand your thought process.

Apply Through Our Website: Don’t forget to submit your application through our website! It’s the best way for us to receive your details and ensures you’re considered for the role. Plus, it helps us keep everything organised!

How to prepare for a job interview at T3

✨Know Your Adversarial Testing Inside Out

Make sure you brush up on your knowledge of adversarial testing frameworks, especially in relation to LLMs and generative models. Be ready to discuss specific methodologies you've used in the past and how they align with the role's requirements.

✨Showcase Your Technical Skills

Prepare to demonstrate your hands-on experience with safety evaluation and red teaming. Bring examples of past projects where you designed testing frameworks or conducted evaluations, and be ready to explain your thought process behind them.

✨Communicate Clearly and Confidently

Since this role involves presenting findings to both technical and policy audiences, practice articulating complex concepts in a straightforward manner. Think about how you can break down your methodologies and results for different stakeholders.

✨Be Ready for Scenario-Based Questions

Expect questions that put you in hypothetical situations related to adversarial testing. Prepare to think on your feet and demonstrate your structured thinking by outlining how you would approach various challenges in the role.
