At a Glance
- Tasks: Design and build scalable data pipelines using Databricks on Azure.
- Company: Capco, a global technology and management consultancy focused on financial services and energy.
- Benefits: Competitive salary, health insurance, flexible holidays, and continuous learning opportunities.
- Why this job: Join a team shaping the future of digital transformation in finance.
- Qualifications: Experience with Databricks, Python, and data pipeline development.
- Other info: Collaborative culture with a focus on innovation and personal growth.
The predicted salary is between £48,000 and £72,000 per year.
Location: London (Hybrid) | Practice Area: Technology & Engineering | Type: Permanent
Engineer advanced data solutions at scale in a world-class consulting environment.
The Role
As a Senior Azure Data Engineer (Databricks) at Capco, you will play a hands-on role in designing, building, and deploying scalable and secure data engineering pipelines using the Databricks platform on Azure. You’ll partner with clients to understand their data needs and develop innovative solutions that drive business transformation. You’ll also contribute to best practice implementation and continuous improvement within cross-functional engineering teams.
Responsibilities
- Design and develop robust pipelines using Delta Lake, Spark Structured Streaming, and Unity Catalog
- Build real-time event-driven solutions with tools such as Kafka and Azure Event Hubs
- Apply DevOps principles to develop CI/CD pipelines using Azure DevOps, Jenkins, or GitHub Actions
- Collaborate with clients and stakeholders to translate data needs into strategic technical solutions
- Champion clean code, data lifecycle optimisation, and software engineering best practices
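To make the streaming responsibilities above concrete, here is a purely illustrative, plain-Python sketch of the kind of event-time aggregation such pipelines perform. In a real Capco engagement this would be Spark Structured Streaming reading from Kafka or Azure Event Hubs and writing to Delta Lake; the event schema, account IDs, and function name below are invented for illustration.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Toy event stream: (timestamp, account_id, amount) tuples, as they might
# arrive from Kafka or Azure Event Hubs. All values here are illustrative.
events = [
    (datetime(2024, 1, 1, 9, 0, 5), "acc-1", 120.0),
    (datetime(2024, 1, 1, 9, 0, 40), "acc-2", 35.5),
    (datetime(2024, 1, 1, 9, 1, 10), "acc-1", 80.0),
    (datetime(2024, 1, 1, 9, 2, 15), "acc-2", 10.0),
]

def tumbling_window_totals(events, window=timedelta(minutes=1)):
    """Sum amounts per account per fixed (tumbling) event-time window."""
    totals = defaultdict(float)
    for ts, account, amount in events:
        # Floor each timestamp to the start of its window.
        window_start = datetime.min + ((ts - datetime.min) // window) * window
        totals[(window_start, account)] += amount
    return dict(totals)

totals = tumbling_window_totals(events)
for (window_start, account), amount in sorted(totals.items()):
    print(window_start.isoformat(), account, amount)
```

Spark's `groupBy(window(...))` does the same windowing declaratively and at scale, with watermarking to handle late-arriving events; this sketch only shows the underlying idea.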
What We’re Looking For
- Proven hands-on experience with the Databricks platform and workflow orchestration
- Strong skills in Python, PySpark, and SQL, with knowledge of distributed data systems
- Expertise in developing full lifecycle data pipelines across ingestion, transformation, and serving layers
- Experience with data lakehouse architecture, schema design, and GDPR-compliant solutions
- Working knowledge of DevOps tools and CI/CD processes
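As a rough idea of the CI/CD knowledge mentioned above, a minimal GitHub Actions workflow for a data-pipeline repository might look like the sketch below. This is a generic example, not Capco's actual setup; the workflow name, tool choices, and file paths are assumptions.

```yaml
# Hypothetical CI workflow for a data-pipeline repo (illustrative only).
name: data-pipeline-ci

on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Lint and unit-test pipeline code
        run: |
          ruff check .
          pytest tests/
```

Equivalent pipelines can be expressed in Azure DevOps YAML or a Jenkinsfile; the point is the same gate of linting and tests on every change.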
Bonus Points For
- Development experience in Scala or Java
- Familiarity with Cloudera, Hadoop, Hive, and the wider Spark ecosystem
- Understanding of data privacy regulations, including GDPR, and experience working with sensitive data
- Ability to learn and adopt new technologies quickly to meet business needs
- Collaborative mindset with a passion for innovation and continuous learning
Why Join Capco
- Deliver high-impact technology solutions for Tier 1 financial institutions
- Work in a collaborative, flat, and entrepreneurial consulting culture
- Access continuous learning, training, and industry certifications
- Be part of a team shaping the future of digital transformation across financial services and energy
We offer a competitive, people-first benefits package designed to support every aspect of your life:
- Core Benefits: Discretionary bonus, competitive pension, health insurance, life insurance and critical illness cover.
- Mental Health: Easy access to CareFirst, Unmind, Aviva consultations, and in-house first aiders.
- Family-Friendly: Maternity, adoption, shared parental leave, plus paid leave for sickness, pregnancy loss, fertility treatment, menopause, and bereavement.
- Family Care: 8 complimentary backup care sessions for emergency childcare or elder care.
- Holiday Flexibility: 5 weeks of annual leave with the option to buy or sell holiday days based on your needs.
- Continuous Learning: Your growth, your way, with a minimum of 40 hours of training annually; take your pick of workshops, certifications, and e-learning. A business coach is assigned from day one to give you one-on-one guidance and accelerate your development.
- Extra Perks: Gympass (Wellhub), travel insurance, Tastecard, season ticket loans, Cycle to Work, and dental insurance.
Inclusion at Capco
We’re committed to making our recruitment process accessible and straightforward for everyone. If you need any adjustments at any stage, just let us know – we’ll be happy to help. We value each person’s unique perspective and contribution. At Capco, we believe that being yourself is your greatest strength. Our #BeYourselfAtWork culture encourages individuality and collaboration – a mindset that shapes how we work with clients and each other every day.
Senior Azure Data Engineer (Databricks) employer: Capco
Contact Detail:
Capco Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Azure Data Engineer (Databricks) role
✨Tip Number 1
Network like a pro! Reach out to your connections in the industry, especially those who work at Capco or similar firms. A friendly chat can lead to insider info about job openings and even referrals.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects with Databricks, Delta Lake, and any real-time solutions you've built. This will give you an edge during interviews and demonstrate your hands-on experience.
✨Tip Number 3
Prepare for technical interviews by brushing up on your Python, PySpark, and SQL skills. Practice coding challenges and be ready to discuss your approach to building data pipelines and optimising data lifecycles.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets noticed. Plus, it shows you're genuinely interested in joining the team at Capco.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Senior Azure Data Engineer role. Highlight your experience with Databricks, Python, and data pipelines. We want to see how your skills match what we're looking for!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about data engineering and how you can contribute to our team. Be sure to mention any relevant projects or experiences that showcase your expertise.
Showcase Your Problem-Solving Skills: In your application, don’t just list your skills—show us how you've used them to solve real-world problems. We love candidates who can demonstrate their ability to innovate and drive business transformation through data solutions.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it’s super easy to do!
How to prepare for a job interview at Capco
✨Know Your Tech Inside Out
Make sure you’re well-versed in the Databricks platform and its components like Delta Lake and Spark Structured Streaming. Brush up on your Python, PySpark, and SQL skills, as these will be crucial during technical discussions.
✨Showcase Your Problem-Solving Skills
Prepare to discuss specific examples where you've designed and built data pipelines or event-driven solutions. Highlight how you’ve collaborated with clients to meet their data needs and any innovative solutions you’ve implemented.
✨Familiarise Yourself with DevOps Practices
Since the role involves applying DevOps principles, be ready to talk about your experience with CI/CD pipelines using tools like Azure DevOps or Jenkins. Share how you’ve integrated these practices into your previous projects.
✨Emphasise Continuous Learning
Capco values a growth mindset, so be prepared to discuss how you keep up with new technologies and best practices in data engineering. Mention any relevant training or certifications you’ve pursued to enhance your skills.