At a Glance
- Tasks: Transform raw data into insightful assets and build scalable models using Databricks and Azure.
- Company: Join a forward-thinking firm focused on data innovation in London.
- Benefits: Competitive salary, generous benefits package, and opportunities for professional growth.
- Why this job: Make a real impact by driving smarter decisions and enabling AI across the organisation.
- Qualifications: Experience in data modelling and familiarity with SQL, Python, or PySpark.
- Other info: Dynamic team culture with a focus on continuous improvement and collaboration.
The predicted salary is between £36,000 and £60,000 per year.
Location: London - various locations
Salary: Competitive salary plus a generous benefits package
Application Deadline: Tuesday, November 25, 2025
Job Summary: We are looking for an Analytics Engineer to join our Data team and help build the firm’s modern data foundation. You’ll design governed, reusable models and scalable transformation layers using Databricks and Azure, turning raw data into trusted, insight‑ready assets. With a focus on data modelling, CI/CD, and interoperability, you’ll enable analytics, AI, and business intelligence across the organisation—driving smarter decisions, automation, and exceptional outcomes.
Key accountabilities:
- Data Modelling: Build and maintain clean, reusable, and scalable models that transform raw data into curated layers of logic, metrics, and dimensions. Define, document, and own business‑critical metrics to drive consistency and trust across reporting and AI/ML applications. Design data structures (e.g. star/snowflake schemas) in Databricks and Azure environments that optimise query performance and user accessibility. Partner with dashboard developers and analysts to shape models that align to visual and operational use cases.
- Pipeline Development & Deployment: Build and maintain efficient ELT pipelines using Databricks (SQL, Python, PySpark), ensuring they are monitored, observable, and recoverable. Implement CI/CD workflows for analytics assets using Azure DevOps or GitHub Actions, ensuring reliable, version‑controlled deployments. Set up robust data validation, alerting and testing practices to ensure high data quality and transparency. Collaborate with Data Engineers to ensure upstream data ingestion and structures meet transformation needs.
- Governance & Interoperability: Contribute to the firm’s data governance strategy through clear documentation, data contracts, lineage mapping, and metadata capture. Enable interoperability with internal systems (CRM, finance, digital platforms) and third‑party tools (GA4, ESPs, IMiX) through standardised, API‑ready data assets. Help define and maintain an internal data dictionary and analytics asset catalogue.
- Collaboration & Enablement: Act as a subject matter expert and partner to analysts, providing guidance on how to best use and extend curated models. Assist requirement‑gathering and technical discovery sessions with business stakeholders to inform solution design. Foster a culture of curiosity, continuous improvement, and modular design thinking within the wider Data & AI team.
- Innovation & Continuous Improvement: Explore opportunities to use AI‑assisted tools and code generation for improved development velocity and maintainability. Stay abreast of best practices in metadata‑driven design, open standards, and data model evolution. Help shape and refine our approach to analytics modularisation and downstream consumption by multiple tools and teams.
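The star/snowflake modelling described above can be sketched with a minimal example. In Databricks the curated layer would live in Delta tables queried with Spark SQL, but the shape of the model is the same; here SQLite stands in so the sketch is self-contained, and the table and column names (dim_client, fact_trades) are illustrative assumptions, not from the posting:

```python
import sqlite3

# In-memory stand-in for a curated layer: one fact table plus one
# dimension table, the classic star-schema shape.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_client (client_id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE fact_trades (
        trade_id  INTEGER PRIMARY KEY,
        client_id INTEGER REFERENCES dim_client(client_id),
        amount    REAL
    );
    INSERT INTO dim_client VALUES (1, 'London'), (2, 'Edinburgh');
    INSERT INTO fact_trades VALUES (10, 1, 500.0), (11, 1, 250.0), (12, 2, 100.0);
""")

# A reusable, documented metric defined once over the model:
# total trade amount per region.
rows = conn.execute("""
    SELECT d.region, SUM(f.amount) AS total_amount
    FROM fact_trades f
    JOIN dim_client d USING (client_id)
    GROUP BY d.region
    ORDER BY d.region
""").fetchall()
print(rows)  # [('Edinburgh', 100.0), ('London', 750.0)]
```

Defining the metric once against the fact/dimension join, rather than in each dashboard, is what gives the "consistency and trust across reporting" the role asks for.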
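The validation and testing practices mentioned under pipeline development can start as simple assertion-style checks run after each transformation. A minimal sketch in plain Python, where the check rules and field names are illustrative assumptions (a production pipeline would typically use a framework such as Delta Live Tables expectations or dbt tests instead):

```python
def validate_rows(rows, required_fields=("id", "amount")):
    """Basic data-quality checks: required fields present and non-null,
    no duplicate ids. Returns a list of human-readable failures."""
    failures = []
    seen_ids = set()
    for i, row in enumerate(rows):
        for field in required_fields:
            if field not in row or row[field] is None:
                failures.append(f"row {i}: missing or null '{field}'")
        row_id = row.get("id")
        if row_id in seen_ids:
            failures.append(f"row {i}: duplicate id {row_id}")
        seen_ids.add(row_id)
    return failures

good = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": 5.0}]
bad = [{"id": 1, "amount": None}, {"id": 1, "amount": 3.0}]
print(validate_rows(good))  # []
print(validate_rows(bad))   # a null 'amount' and a duplicate id
```

Returning failures as data, rather than raising immediately, makes it easy to wire the same checks into the alerting and observability the role calls for.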
Key Competencies:
- Getting Things Done: Delivers on agreed objectives promptly; prioritises workload; remains professional under pressure.
- Communication & Sharing Knowledge: Confident, clear and accurate with all communication; maintains accurate records and makes effective use of new technology.
- Customer Service: Maintains a positive attitude to find solutions in line with TCF (Treating Customers Fairly) principles; uses customer feedback to improve service.
- Effectiveness & Adaptability: Able to maintain a high volume of work, striving for continual improvements; understands individual contribution in relation to corporate objectives; presents a positive image and approach to change.
- Team Working: Shares knowledge, skills and experience with colleagues; understands team goals; is cooperative and supportive of others.
Employer: Killik & Co (Analytics Engineer, City of London)
Contact Detail:
Killik & Co Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Analytics Engineer role in the City of London
✨Tip Number 1
Network like a pro! Reach out to folks in the industry on LinkedIn or at local meetups. We all know that sometimes it’s not just what you know, but who you know that can land you that dream job.
✨Tip Number 2
Prepare for those interviews! Research the company and the role inside out. We want you to be able to showcase your skills in data modelling and pipeline development confidently. Practice common interview questions and have your own ready to go!
✨Tip Number 3
Show off your projects! If you've worked on any cool data models or pipelines, make sure to highlight them during your conversations. We love seeing real examples of your work and how you’ve tackled challenges in the past.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we’re always looking for passionate individuals to join our Data team, so don’t hold back!
We think you need these skills to ace the Analytics Engineer role in the City of London
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Analytics Engineer role. Highlight your experience with data modelling, CI/CD, and any tools like Databricks and Azure that you’ve used. We want to see how your skills align with what we’re looking for!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re passionate about data and how you can contribute to our team. Don’t forget to mention specific projects or experiences that relate to the job description.
Showcase Your Projects: If you’ve worked on relevant projects, make sure to include them in your application. Whether it’s building ELT pipelines or designing data structures, we love seeing real examples of your work and how you’ve tackled challenges.
Apply Through Our Website: We encourage you to apply through our website for a smoother process. It helps us keep track of applications and ensures you get all the updates directly from us. Plus, it shows you’re keen on joining StudySmarter!
How to prepare for a job interview at Killik & Co
✨Know Your Data Models
Make sure you brush up on data modelling concepts, especially star and snowflake schemas. Be ready to discuss how you've designed and maintained scalable models in the past, and how they can drive consistency in reporting.
✨Showcase Your Pipeline Skills
Prepare to talk about your experience with ELT pipelines, particularly using Databricks and Azure. Highlight any CI/CD workflows you've implemented and how you've ensured data quality through validation and testing practices.
✨Understand Governance and Interoperability
Familiarise yourself with data governance strategies and be prepared to discuss how you've contributed to documentation and metadata capture. Show that you understand the importance of interoperability with internal systems and third-party tools.
✨Emphasise Collaboration
Be ready to share examples of how you've worked with analysts and business stakeholders to gather requirements and inform solution design. Highlight your ability to foster a culture of curiosity and continuous improvement within a team.