Founding Research Engineer

Full-Time · £80,000 – £100,000 / year (est.) · No home office possible

At a Glance

  • Tasks: Join us in building groundbreaking AI models and tackle complex research challenges.
  • Company: Be part of a pioneering tech company at the forefront of AI innovation.
  • Benefits: Enjoy private healthcare, wellness perks, and exciting team socials.
  • Other info: Access exclusive networking opportunities and events with industry leaders.
  • Why this job: Make a real impact in AI while collaborating with top talent in a vibrant community.
  • Qualifications: Experience in knowledge graphs, Rust, and a passion for dynamic data systems.

The predicted salary is between £80,000 and £100,000 per year.

What we're building

Frontier models now score above 170 on IQ tests. Reasoning is no longer the constraint on enterprise AI. Context is. The context layer sits between an enterprise's siloed data and the agents that need to act on it. Stuff the context window and you trade quality for cost and latency. Use naive RAG and retrieval breaks the moment the question gets interesting. Stand up a vanilla knowledge graph and you hit the harder problem underneath: someone has to design the ontology, and at enterprise scale (hundreds of thousands of files, hundreds of gigabytes) no human can. This is what gates almost every enterprise AI deployment we've seen.

60x solves it. We've built AI Brain, a knowledge graph platform engineered backwards from the agentic retrieval problem. The thesis is dynamic ontology generation: the graph schema isn't authored by a user, it's generated by a multi-agent ingestion pipeline from the business logic of the data itself, and continuously enriched with secondary and tertiary derivatives. Pre-digested analysis lives in the graph, so retrieval is a lookup, not a reasoning loop.

We operate a Palantir model for workflows. The platform sits at the centre. Forward-deployed engineers wrap it around enterprise workflows we've already templated. Customisations are retained as IP and feed back into the platform. Same flywheel shape as Palantir, different domain.

We work with enterprises across multiple sectors, and a growing list of global consultancies are evaluating us against their internal GPT deployments. In the last two weeks we shipped a redesigned ingestion pipeline, primary entity extraction with auto-enrichment, and an end-to-end demo across 500 companies. That pace is the default.

This is a founding role. The parts of the platform you'll work on are the parts that decide whether the thesis holds.

The role

You'll work on the research-grade core of AI Brain alongside the CTO (exited robotics founder) and the senior engineering team. The open problems on the desk:

  • Dynamic ontology generation: The graph schema is generated, not authored. Structure emerges from the business logic of the data, with analytical insight pre-computed and stored rather than recomputed on every query. Open work: Hierarchical ontology, moving from a flat conceptual space to one with inheritance, without breaking source provenance. Per-tenant configuration that a forward-deployed engineer can tune without touching the runtime. The eval question underneath all of it: how do we measure whether a generated ontology is good?
  • Primary entity consolidation: When a single real-world entity (a company, a person, a product) appears across hundreds of documents under different names, the graph has to recognise it as one thing. We do this through a multi-stage consolidation pipeline that combines fuzzy matching, heuristics, and agent-driven tiebreaking against authoritative external sources where the domain demands it. Provenance back to the source is preserved end-to-end. Open work: Edge-case dedup where the same entity appears under different names in different contexts. The right boundary between consolidation, enrichment, and update as separable concerns. Determining attributes at the entity level rather than re-deriving them per chunk.
  • Our own temporal graph database (Rust): Existing graph stores don't carry the temporal model we need, so we're building our own in Rust. Time becomes a property of every node, edge, and attribute, and any retrieval can be run as of any point in history. The commercial story this opens up: a graph that doesn't only produce decisions today, but backtests its own reasoning against historical state to prove the system would have caught the right answers when it mattered. That's what justifies the platform license, and it isn't feasible on the existing stack without compromising the model. You'll be central to the design and build of the replacement. This is the deepest research-and-systems problem on the roadmap and the most consequential piece of IP we'll ship in the next twelve months.
  • Benchmark and white paper: Existing large-context retrieval benchmarks are saturated. Frontier models score 100%, which means they no longer differentiate between systems that are good at enterprise retrieval and systems that aren't. We need a new one. Designing it, running it, and publishing the white paper is on the roadmap. Releasing the benchmark itself, separately from our results on it, is part of the strategy.
  • Frontier work we’ll explore soon: Open ideas from research conversations: alternative embedding geometries for deep hierarchies, community-detection approaches to retrieval, graph-internal continuous-monitoring patterns as an alternative to scheduled jobs, encoder-based privacy primitives that would unblock several enterprise sales cycles. You'll have a hand in picking what we commit to.

You'll also contribute to hiring, provide technical input on client engagements where it matters, and co-author the white paper.
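The fuzzy-matching tier of the consolidation pipeline described above can be sketched roughly like this. It is illustrative only, using nothing but the standard library; the real pipeline is multi-stage with heuristics and agent-driven tiebreaking, and every name and threshold here is hypothetical.

```python
from difflib import SequenceMatcher

def normalise(name: str) -> str:
    """Lowercase and strip common corporate suffixes before comparison."""
    name = name.lower().strip()
    for suffix in (" ltd", " limited", " inc", " inc.", " plc", " gmbh"):
        if name.endswith(suffix):
            name = name[: -len(suffix)]
    return name.strip(" .,")

def same_entity(a: str, b: str, threshold: float = 0.85) -> bool:
    """Fuzzy tier only: flags candidate pairs for later heuristic/agent stages."""
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio() >= threshold

# Greedily cluster mentions against the first matching canonical form,
# keeping every source mention so provenance survives consolidation.
mentions = ["Acme Ltd", "ACME Limited", "Acme Holdings", "Globex Inc"]
canonical: dict[str, list[str]] = {}
for m in mentions:
    for key in canonical:
        if same_entity(m, key):
            canonical[key].append(m)
            break
    else:
        canonical[m] = [m]
```

The interesting open work starts where a sketch like this stops: same-name-different-entity collisions, and deciding attributes once at the entity level instead of per chunk.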

Our stack

  • Agents: LangGraph with Pydantic-typed state, Claude via Vertex AI, Gemini for fast tagging
  • Graph and data: Postgres plus Apache AGE today, with a Rust temporal graph database in active development as the long-term replacement
  • Backend: FastAPI, Python 3.12, Pydantic everywhere
  • Frontend: Next.js (App Router), TypeScript, Tailwind, shadcn, Vercel
  • Infra: GCP across compute, storage, model serving, and key management
  • Tooling: pnpm, Husky commit hooks (lint, format, typecheck, test, agentic check), Linear, Claude Code as a daily driver

We're opinionated about code quality and we use AI coding agents hard. Founding-team velocity assumes it.

What we're looking for

  • Depth in at least one of: knowledge graphs and GraphRAG, retrieval systems, agent orchestration, or large-scale data ingestion
  • A track record of taking research papers or first-principles thinking through to working production systems. Published work, open-source contributions, or systems you can walk us through architecturally all count.
  • Rust experience. We don't expect you to have shipped a graph DB before, but we do expect you to have written real Rust on a systems-level problem (parsers, runtimes, storage engines, performance-critical services) and to have a view on where Rust earns its complexity and where it doesn't.
  • Strong Python, and enough TypeScript to ship product surface where it matters
  • The instinct to read someone else's PR, see three things to improve, and write the comment kindly
  • Taste. You can tell a clever solution from the right solution, and you'll push back on us when we conflate them.
  • Excitement about this problem space, not AI in the abstract: the context layer, GraphRAG, dynamic ontology generation, temporal data systems
  • You don't need a PhD. You do need to operate at that level on the problems we care about.

Beyond the role

The community: 60x sits at the centre of Unicorn Mafia, the invite-only builder community we run. Around 1,100 members and tightening: maths olympiad winners, hackathon regulars, and founders across London, San Francisco, New York, and Europe. The network gives you durable career capital. Most of our team meets future co-founders through it, and several alumni have spun out their own companies with the network already in place. Day one, you're in. Events are free and international trips are paid for: New York trips, hackathon weekends, rooftop parties where the other guests are co-founders of major AI companies. We hand-pick who sits in the office to keep talent density high. Engineers from outside 60x come in because the room is worth being in. Most companies fly people in to get access to a room like this. Yours is at your desk.

The lifestyle: We look after the team and we socialise together. Private healthcare and a wider wellness benefits package. Sauna and cold plunge sessions for recovery and team time. Team socials, dinners, off-sites, and the overflow from UM events. An environment for people who want to do the best work of their lives without burning out.

Founding Research Engineer employer: 60x

At 60x, we pride ourselves on being an exceptional employer, offering a dynamic work culture that fosters innovation and collaboration. As a Founding Research Engineer, you'll be at the forefront of cutting-edge AI technology, working alongside industry leaders in a vibrant community that values personal growth and professional development. With benefits like private healthcare, wellness packages, and access to exclusive networking events, you'll thrive in an environment designed for high achievers who want to make a meaningful impact.

Contact Detail:

60x Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land Founding Research Engineer

✨Tip Number 1

Network like a pro! Get out there and connect with people in the industry. Attend meetups, conferences, or even online webinars. You never know who might be looking for someone just like you!

✨Tip Number 2

Show off your skills! Create a portfolio or GitHub repository showcasing your projects. This is your chance to demonstrate your expertise in knowledge graphs, dynamic ontology generation, or whatever floats your boat.

✨Tip Number 3

Prepare for interviews by diving deep into the company’s tech stack and recent projects. Be ready to discuss how your experience aligns with their needs, especially around AI Brain and enterprise workflows.

✨Tip Number 4

Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you’re genuinely interested in being part of our team.

We think you need these skills to ace Founding Research Engineer

Knowledge Graphs
Graph Retrieval Systems
Dynamic Ontology Generation
Data Ingestion
Rust Programming
Python Programming
TypeScript
Analytical Insight
Entity Recognition
Fuzzy Matching
Heuristics
Benchmark Design
Research Paper Publication
Systems-Level Problem Solving
Collaboration and Communication

Some tips for your application 🫡

Show Your Passion: When you're writing your application, let your excitement for the role shine through! We want to see that you’re genuinely interested in the challenges we’re tackling, especially around dynamic ontology generation and knowledge graphs.

Tailor Your CV: Make sure your CV is tailored to highlight your relevant experience. Focus on projects or roles where you've tackled similar problems, like retrieval systems or large-scale data ingestion. We love seeing how your background aligns with what we're building!

Be Clear and Concise: Keep your application clear and to the point. We appreciate well-structured responses that get straight to the heart of your experience and skills. Avoid fluff – we want to know what you can bring to the table!

Apply Through Our Website: Don’t forget to apply through our website! It’s the best way for us to keep track of your application and ensure it gets the attention it deserves. Plus, it shows you’re serious about joining our team!

How to prepare for a job interview at 60x

✨Know Your Stuff

Make sure you have a solid understanding of knowledge graphs, dynamic ontology generation, and the specific technologies mentioned in the job description. Brush up on Rust and Python, and be ready to discuss your past projects that relate to these areas.

✨Show Your Problem-Solving Skills

Prepare to tackle open-ended questions about the challenges mentioned in the role. Think through how you would approach dynamic ontology generation or entity consolidation, and be ready to share your thought process and any relevant experiences.

✨Engage with the Team

This is a founding role, so demonstrate your excitement about working closely with the CTO and senior engineers. Ask insightful questions about their current projects and express your interest in contributing to the team’s goals and culture.

✨Highlight Your Research Experience

If you've published papers or contributed to open-source projects, make sure to bring them up. Discuss how your research has informed your practical work, especially in relation to enterprise AI and retrieval systems, as this will show your depth of knowledge.
