At a Glance
- Tasks: Own the data path and build a robust ingestion pipeline for health data.
- Company: Join Terra, a leader in health data connectivity and innovation.
- Benefits: Competitive salary, flexible work options, and opportunities for professional growth.
- Other info: Dynamic team culture with a focus on innovation and accountability.
- Why this job: Make a real impact on health data delivery and scalability.
- Qualifications: Experience in platform engineering and a passion for data systems.
The predicted salary is between £70,000 and £90,000 per year.
About The Role
Terra is the connective tissue between the world’s health data and the developers building on top of it. We ingest from 500+ upstream sources, normalize everything into a single schema, and deliver 8B+ events/year to 1,000+ developers and AI labs via webhooks, SQL, cloud storage, and queues. The platform is the product.

We’re building the upstream supplier connector—a new ingestion layer for data suppliers to push directly into Terra. AI labs are consuming at rates we didn’t plan for. The pipe needs to get wider, faster, and more reliable.

This role exists because the platform needs a dedicated owner. Someone who thinks end‑to‑end—from a Garmin syncing at 3 a.m. to a webhook landing 400 ms later. You’re not building features. You’re building the foundation.
What You’ll Own
- The full data path from source to destination.
- Ingestion Pipeline—How provider data enters Terra from cloud APIs (Garmin, Fitbit, Oura) and mobile SDKs (Apple Health, Health Connect). Raw data in, queued for normalization.
- Normalization Engine—Transforms heterogeneous provider payloads into Terra’s standardized models: Activity, Sleep, Daily, Body, Menstruation, Nutrition. The core IP that makes 500+ sources feel like one.
- Event Delivery—Webhooks, Postgres/MySQL, Supabase, S3, SQS, Kafka. At‑least‑once delivery, ordering, retries, dead‑letter queues. Data reaches destinations reliably.
- Provider Framework—How new Sources get onboarded. Web‑based (OAuth, polling, rate limiting) and mobile‑only (on‑device SDK, background sync). Adding provider #501 should be as fast as #5.
- Auth & Connections—Connect widget, custom UI flows, OAuth sessions, Terra User lifecycle, Reference ID mapping.
- Upstream Supplier Connector—New ingestion surface. Data suppliers push into Terra instead of us pulling. Ground‑up build.
- API Versioning—Backwards compatibility and deprecation across the /v2 surface.
- Observability—Monitoring and alerting across the full pipeline. You know when a provider degrades before our developers do.
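To give a flavour of what the normalization engine does, here is a minimal sketch of mapping heterogeneous provider payloads onto one standard model. The field names, units, and the `Activity` shape are illustrative assumptions, not Terra’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class Activity:
    """Simplified stand-in for a normalized Terra Activity model."""
    provider: str
    duration_seconds: int
    calories: float

# Hypothetical raw payload shapes; real provider schemas differ.
def normalize_garmin(raw: dict) -> Activity:
    return Activity(
        provider="garmin",
        duration_seconds=int(raw["durationInSeconds"]),
        calories=float(raw["activeKilocalories"]),
    )

def normalize_fitbit(raw: dict) -> Activity:
    return Activity(
        provider="fitbit",
        # Assumed here that this provider reports duration in milliseconds.
        duration_seconds=int(raw["duration"]) // 1000,
        calories=float(raw["calories"]),
    )

NORMALIZERS = {"garmin": normalize_garmin, "fitbit": normalize_fitbit}

def normalize(provider: str, raw: dict) -> Activity:
    """Dispatch a raw payload to its provider-specific normalizer."""
    return NORMALIZERS[provider](raw)
```

The point of the dispatch table is that onboarding provider #501 means writing one new normalizer function, not touching the pipeline: downstream consumers only ever see the standard model.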
What You’ll Build
- Scale event delivery to 10x—AI lab consumption is growing exponentially. Redesign the hot path without proportional cost increase.
- Ship the upstream supplier connector—Schema contracts, auth, validation, rate limiting. The interface for suppliers to push directly into Terra.
- Redesign normalization for schema evolution—Providers change APIs. Our models evolve. Downstream consumers can’t break.
- Multi‑destination fan‑out—One event, reliably delivered to webhooks, SQL, S3, and queues simultaneously. Independent retry logic per destination.
- Provider health dashboards—Real‑time visibility into every Source’s freshness, latency, error rates, and schema drift.
- Harden mobile SDK data path—Apple Health and Health Connect are fundamentally different from cloud APIs. On‑device processing, background sync constraints, OS‑level limits. Make it scale.
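As a rough illustration of the fan-out requirement above, here is a minimal sketch of one event delivered to several destinations with independent retry logic and a dead-letter list per exhausted destination. The destination callables and retry policy are hypothetical; a production version would use persistent queues and exponential backoff:

```python
import time

def deliver(event: dict, destinations: dict, max_retries: int = 3) -> dict:
    """Fan one event out to every destination. Each destination retries
    independently; deliveries that exhaust their retries are dead-lettered."""
    dead_letter = []
    for name, send in destinations.items():
        for attempt in range(max_retries):
            try:
                send(event)
                break  # delivered; move to the next destination
            except Exception:
                # Placeholder backoff; real code would sleep 2 ** attempt or similar.
                time.sleep(0)
        else:
            # All retries for this destination failed; park the event.
            dead_letter.append((name, event))
    return {"dead_letter": dead_letter}
```

The key property is isolation: a flapping S3 bucket burns its own retries and lands in its own dead-letter queue without delaying or dropping the same event’s webhook delivery.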
Who You Are
- Systems thinker — you see data flow and failure modes before you see features.
- Battle‑tested at scale — you've operated platforms processing billions of events. You've fixed production at 2 a.m.
- Observability‑first — monitoring and tracing are part of your design, not bolted on after.
- Opinionated where it matters — event delivery guarantees, schema evolution, idempotency, API versioning. Opinions earned by shipping.
- Fast — architecture whiteboard Monday, production deploy Friday.
- Default yes — hard problems make you lean in.
- Accountable — you own production, you respond to incidents, you care about uptime. Skin in the game.
Big Plus
You are an athlete. You train, you compete, you push limits — or at the very least, you are obsessed with quantifying your own data. The discipline, ambition, and courage it takes to show up every day and get better is the same energy we run on. If you understand the data because you live it, you’ll build a better product.
Senior Platform Engineer in London. Employer: Terra API
Contact Detail:
Terra API Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Platform Engineer role in London
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, attend meetups, and connect with people on LinkedIn. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio or GitHub repository showcasing your projects and contributions. This is your chance to demonstrate your expertise in building robust platforms and handling data flows.
✨Tip Number 3
Prepare for interviews by practising common technical questions and scenarios related to platform engineering. Think about how you would tackle real-world problems, like scaling event delivery or ensuring data integrity.
✨Tip Number 4
Apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you’re genuinely interested in being part of our team at Terra.
Some tips for your application 🫡
Show Your Passion for Data: When you're writing your application, let us see your enthusiasm for data and technology. Share any personal projects or experiences that highlight your love for working with data, especially if they relate to health or fitness!
Tailor Your Application: Make sure to customise your application to reflect the specific requirements of the Senior Platform Engineer role. Highlight your experience with ingestion pipelines, event delivery, and any relevant technologies like SQL or Kafka. We want to see how you fit into our world!
Be Clear and Concise: Keep your application straightforward and to the point. Use clear language to describe your past experiences and achievements. We appreciate brevity, but don’t forget to include enough detail to showcase your skills and impact!
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it shows you’re serious about joining the team at Terra!
How to prepare for a job interview at Terra API
✨Understand the Data Flow
Before your interview, make sure you grasp how data flows through platforms like Terra. Familiarise yourself with ingestion pipelines and normalisation processes. This will help you articulate your understanding of the role and demonstrate that you're a systems thinker.
✨Showcase Your Experience with Scale
Prepare to discuss your past experiences operating platforms that handle billions of events. Be ready to share specific examples of challenges you've faced and how you resolved them, especially in high-pressure situations. This will highlight your battle-tested skills.
✨Emphasise Observability in Design
Since observability is crucial for this role, come equipped with insights on how you've integrated monitoring and tracing into your previous projects. Discuss how you ensure reliability and performance, showing that you think about these aspects from the start.
✨Connect Personally with the Product
If you're an athlete or have a passion for quantifying data, share that connection during your interview. Explain how your personal experiences can inform your work at Terra. This not only shows your enthusiasm but also aligns with their culture of pushing limits.