At a Glance
- Tasks: Build a verification layer that checks real-world claims against trusted datasets.
- Company: Innovative startup focused on truth and data integrity.
- Benefits: Competitive salary, flexible hours, and opportunities for professional growth.
- Why this job: Be part of the Verification Era and make a real impact with your skills.
- Qualifications: Strong Python and SQL skills, experience with data pipelines, and a passion for accuracy.
- Other info: Collaborative team environment with exciting projects in climate data.
The predicted salary is between £43,200 and £72,000 per year.
About The Project
We're building a "reference truth layer" that verifies real-world claims against trusted datasets. The goal is to take a claim from a report and return a structured verification.
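For illustration, a structured verification for a simple climate claim might look like the Python dict below. The field names and every value in it are assumptions made for this posting (they mirror the adapter fields listed later), not a committed schema.

```python
# Hypothetical response for one verified claim: illustrative only.
verification = {
    "claim": "Mean July 2023 temperature in London was 19 C",
    "classification": "supported",  # vs. "contradicted" / "unverifiable"
    "matched_truth": {
        "variable": "t2m_monthly_mean",
        "geo_id": "station:london-heathrow",  # made-up identifier
        "time_window": ["2023-07-01", "2023-07-31"],
        "aggregation": "mean",
        "value": 18.7,  # made-up value
        "unit": "C",
    },
    "provenance": {
        "dataset_version": "v1.2",
        "retrieval_timestamp": "2025-01-15T12:00:00Z",
    },
}
```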
What difference will you make?
You will play a pivotal role in bringing the Verification Era to market: a world where every claim can be verified and everyone can learn from truthful information.
What are we looking for?
- Must-have:
  - Strong Python backend engineering
  - Solid SQL + schema design
  - Experience building ingestion pipelines
  - Comfort with time-series data
  - Geospatial experience (PostGIS, geo indexing, distance matching)
  - Familiarity with climate data formats such as CSV/NetCDF (optional for the pilot)
What will you be doing?
Responsibilities
- You will work with the founder and an ML/AI engineer to implement:
  - Ground Truth Product Registry
    - Define 3–5 climate truth products (datasets)
    - Create a standard adapter interface for each product, covering:
      - geo resolution (station/grid/country)
      - time resolution (daily/monthly)
      - provenance bundle (dataset version, retrieval timestamp, native IDs)
    - Each adapter returns truth rows with the fields:
      - variable
      - geo_id (station_id/grid_id/admin_id)
      - time window
      - aggregation
      - value + unit
      - dataset_version
  - Build a simple API endpoint (sketched below) that:
    - accepts a parsed claim object (from our ML/AI engineer)
    - retrieves matching truth rows deterministically
    - performs unit conversion + tolerance comparison
    - returns classification + matched truth + provenance
  - Cache geo lookups (city → nearest stations / grid cells)
  - Cache common conversions and time-parsing outputs
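To make the adapter interface and endpoint behaviour concrete, here is a minimal Python sketch. All names in it (TruthRow, TruthProductAdapter, nearest_geo_id, verify_claim) and the 0.5 °C tolerance are illustrative assumptions for this posting, not the project's actual API.

```python
# Minimal sketch of the pieces above; names and defaults are assumptions.
from dataclasses import dataclass
from functools import lru_cache
from typing import Protocol


@dataclass(frozen=True)
class TruthRow:
    variable: str                 # e.g. "t2m_monthly_mean"
    geo_id: str                   # station_id / grid_id / admin_id
    time_window: tuple[str, str]  # ISO dates, inclusive
    aggregation: str              # e.g. "mean"
    value: float
    unit: str                     # e.g. "C" or "F"
    dataset_version: str          # part of the provenance bundle


class TruthProductAdapter(Protocol):
    """Standard interface each climate truth product would implement."""

    def fetch(self, variable: str, geo_id: str,
              time_window: tuple[str, str], aggregation: str) -> list[TruthRow]:
        """Return matching truth rows in a deterministic order."""


@lru_cache(maxsize=4096)
def nearest_geo_id(place: str) -> str:
    """Cached place -> nearest station / grid cell lookup (stubbed)."""
    raise NotImplementedError  # a real version would hit a PostGIS index


def to_celsius(value: float, unit: str) -> float:
    """Toy unit normalisation; real code would cover more variables."""
    if unit == "C":
        return value
    if unit == "F":
        return (value - 32.0) * 5.0 / 9.0
    raise ValueError(f"unsupported unit: {unit}")


def verify_claim(claim: dict, adapter: TruthProductAdapter,
                 tolerance_c: float = 0.5) -> dict:
    """Match a parsed claim against truth rows and classify it."""
    geo_id = nearest_geo_id(claim["place"])
    rows = adapter.fetch(claim["variable"], geo_id,
                         claim["time_window"], claim["aggregation"])
    if not rows:
        return {"classification": "unverifiable", "matched_truth": None}
    truth = rows[0]
    delta = abs(to_celsius(claim["value"], claim["unit"])
                - to_celsius(truth.value, truth.unit))
    return {
        "classification": "supported" if delta <= tolerance_c else "contradicted",
        "matched_truth": truth,
        "provenance": {"dataset_version": truth.dataset_version},
    }
```

The lru_cache on nearest_geo_id stands in for the geo-lookup cache mentioned above; common unit conversions and time-parsing outputs could be memoised the same way.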
Expected deliverables (what "success" looks like)
Employer: TiiQu (Senior Data, London)
Contact Detail:
TiiQu Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Data role in London
✨ Tip Number 1
Network like a pro! Reach out to people in the industry, attend meetups, and connect with potential colleagues on LinkedIn. We all know that sometimes it's not just what you know, but who you know that can help you land that dream job.
✨ Tip Number 2
Show off your skills! Create a portfolio showcasing your Python projects, SQL queries, or any data pipelines you've built. We want to see what you can do, so make sure to highlight your best work when chatting with potential employers.
✨ Tip Number 3
Prepare for interviews by practising common technical questions related to backend engineering and data handling. We recommend doing mock interviews with friends or using online platforms to get comfortable with the format and types of questions you might face.
✨ Tip Number 4
Don't forget to apply through our website! It's the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who are proactive about their job search!
Some tips for your application 🫡
Show Off Your Skills: Make sure to highlight your strong Python backend engineering and SQL skills in your application. We want to see how your experience aligns with our needs, so don't hold back on showcasing your best projects!
Tailor Your Application: Take a moment to customise your application for the Senior Data role. Mention specific experiences that relate to building ingestion pipelines or working with time-series data. This shows us you're genuinely interested in the position!
Be Clear and Concise: When writing your application, keep it clear and to the point. We appreciate straightforward communication, so avoid jargon unless it's relevant to the role. Make it easy for us to see why you'd be a great fit!
Apply Through Our Website: Don't forget to apply through our website! It's the best way for us to receive your application and ensures you're considered for the role. Plus, it gives you a chance to explore more about what we do at StudySmarter.
How to prepare for a job interview at TiiQu
✨ Know Your Tech Stack
Make sure you're well-versed in Python and SQL, as these are crucial for the role. Brush up on your backend engineering skills and be ready to discuss your experience with schema design and ingestion pipelines. Having specific examples from past projects will really help you stand out.
✨ Understand the Project's Goals
Familiarise yourself with the concept of a 'reference truth layer' and how it applies to verifying claims against datasets. Be prepared to discuss how your skills can contribute to the Verification Era and why truthful information matters in today's world.
✨ Showcase Your Data Experience
If you have experience with time-series data or geospatial data (like PostGIS), make sure to highlight that. Discuss any relevant projects where you've worked with climate data formats, even if it's optional. This will demonstrate your versatility and relevance to the role.
✨ Prepare Questions
Interviews are a two-way street, so come armed with thoughtful questions about the team, the project, and the company culture. This shows your genuine interest and helps you assess if this is the right fit for you too. Ask about their approach to building the Claim Verification Service and how they envision collaboration within the team.