At a Glance
- Tasks: Build and maintain scalable data pipelines and ensure data quality across platforms.
- Company: Join a dynamic team supporting the Royal Navy's Maritime Domain Awareness.
- Benefits: Competitive salary, hands-on experience, and opportunities for professional growth.
- Other info: Collaborative environment with a focus on innovation and problem-solving.
- Why this job: Make a real impact by delivering reliable data solutions in a vital sector.
- Qualifications: Experience with data pipelines, strong Python and SQL skills required.
The predicted salary is between £40,000 and £50,000 per year.
We are seeking a Data Services Engineer to join a Capability Engineering team delivering data platforms and pipelines that underpin operational and analytical services supporting Royal Navy Maritime Domain Awareness outputs. This is a hands‑on engineering role focused on building reliable, scalable and secure data solutions. You will work across the full data lifecycle, from ingestion through to delivery, ensuring data is accessible, trusted and usable.
What You Will Do
- Build and maintain scalable data pipelines (ETL/ELT) across multiple systems.
- Integrate data from diverse sources including databases, APIs and enterprise systems.
- Ensure data quality, integrity and availability across platforms.
- Monitor and optimise data pipelines for performance and reliability.
- Troubleshoot and resolve complex data and platform issues.
- Contribute to automation and CI/CD of data services.
- Collaborate with stakeholders to understand and deliver data requirements.
- Produce technical documentation, runbooks and operational artefacts.
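To give a concrete feel for the pipeline work described above, here is a minimal, illustrative ETL sketch in Python with embedded SQL. The schema, table names and quality rule are hypothetical examples, not the actual platform's design:

```python
import sqlite3

def extract():
    # Stand-in for pulling records from a source system (database, API, file).
    return [
        {"vessel_id": "V001", "speed_knots": 12.5},
        {"vessel_id": "V002", "speed_knots": None},  # missing value: should be rejected
        {"vessel_id": "V003", "speed_knots": 8.0},
    ]

def validate(rows):
    # Basic data-quality gate: drop records with missing mandatory fields.
    return [r for r in rows if r["vessel_id"] and r["speed_knots"] is not None]

def load(rows, conn):
    # Load validated records into a staging table.
    conn.execute("CREATE TABLE IF NOT EXISTS staging (vessel_id TEXT, speed_knots REAL)")
    conn.executemany("INSERT INTO staging VALUES (:vessel_id, :speed_knots)", rows)

conn = sqlite3.connect(":memory:")
clean = validate(extract())
load(clean, conn)
count = conn.execute("SELECT COUNT(*) FROM staging").fetchone()[0]
print(count)  # 2 of 3 records pass the quality check
```

Real pipelines would add monitoring, retries and orchestration, but the extract-validate-load shape is the same.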
What We Are Looking For
Essential
- Experience building and maintaining data pipelines.
- Strong Python and SQL skills.
- Experience working with structured data and databases.
- Understanding of data modelling and transformation techniques.
- Experience with Linux environments.
- Strong problem‑solving and analytical skills.
- Ability to work independently within defined standards.
Desirable
- Experience with orchestration tools or data platforms.
- Exposure to DevOps practices and automation.
- Experience in secure or regulated environments.
- Familiarity with service management tools (e.g. Jira, Confluence).
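As a taste of the SQL transformation and data-modelling skills listed above, the sketch below derives a per-vessel summary table from raw readings. The data and schema are purely illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE readings (vessel_id TEXT, speed_knots REAL);
    INSERT INTO readings VALUES
        ('V001', 10.0), ('V001', 14.0), ('V002', 8.0);
""")

# Transform: aggregate raw readings into a derived summary, one row per vessel.
rows = conn.execute("""
    SELECT vessel_id, AVG(speed_knots) AS avg_speed
    FROM readings
    GROUP BY vessel_id
    ORDER BY vessel_id
""").fetchall()

print(rows)  # [('V001', 12.0), ('V002', 8.0)]
```

Being able to explain transformations like this, and when to push them into SQL versus Python, is the kind of ground interviewers tend to probe.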
Data Services Engineer employer: Carbon 60
Contact Detail:
Carbon 60 Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Services Engineer role
✨Tip Number 1
Network like a pro! Reach out to people in the industry, attend meetups or webinars, and connect with potential colleagues on LinkedIn. You never know who might have the inside scoop on job openings.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your data pipelines and projects. This is your chance to demonstrate your Python and SQL prowess, so make it shine when you get the opportunity to chat with hiring managers.
✨Tip Number 3
Prepare for those interviews! Brush up on your problem-solving skills and be ready to tackle real-world scenarios. Practice explaining your thought process clearly, as communication is key in collaborative environments.
✨Tip Number 4
Don’t forget to apply through our website! We’re always on the lookout for talented individuals like you. Keep an eye on our job listings and make sure your application stands out by tailoring it to the role.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with data pipelines and showcases your Python and SQL skills. We want to see how your background aligns with the role, so don’t be shy about emphasising relevant projects!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re passionate about data engineering and how you can contribute to our team. Keep it concise but engaging – we love a good story!
Showcase Your Problem-Solving Skills: In your application, give examples of how you've tackled complex data issues in the past. We’re looking for those strong analytical skills, so don’t hold back on sharing your successes!
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it’s super easy!
How to prepare for a job interview at Carbon 60
✨Know Your Data Pipelines
Make sure you can talk confidently about your experience with building and maintaining data pipelines. Be ready to discuss specific projects where you've used ETL/ELT processes, and how you ensured data quality and integrity.
✨Show Off Your Python and SQL Skills
Brush up on your Python and SQL knowledge before the interview. Prepare to answer technical questions or even solve coding challenges that demonstrate your proficiency in these languages, as they are crucial for the role.
✨Understand the Full Data Lifecycle
Familiarise yourself with the entire data lifecycle from ingestion to delivery. Be prepared to explain how you’ve managed data at each stage and how you ensure it remains accessible and usable throughout.
✨Be Ready to Troubleshoot
Think of examples where you've successfully troubleshot complex data issues. Highlight your problem-solving skills and how you approach optimising data pipelines for performance and reliability.