At a Glance
- Tasks: Design and build robust data platforms and pipelines using Azure and Databricks.
- Company: Join Jacobs, a global leader in solving critical problems for a better tomorrow.
- Benefits: Flexible working arrangements, well-being benefits, and opportunities for personal growth.
- Why this job: Make a real impact by transforming industries with cutting-edge AI solutions.
- Qualifications: Experience with SQL, Python, and cloud data platforms; passion for learning is key.
- Other info: Inclusive culture that values collaboration and supports mental health and belonging.
The predicted salary is between £36,000 and £60,000 per year.
At Jacobs, we are challenging today to reinvent tomorrow by solving the world’s most critical problems for thriving cities, resilient environments, mission-critical outcomes, operational advancement, scientific discovery and cutting-edge manufacturing.
Your impact: If you enjoy designing elegant data systems, shipping production-grade code, and seeing your work make a measurable difference, you will be at home here. Our team builds data platforms and AI solutions that power critical infrastructure, transform operations, and move entire industries.
In this role, you will work with a broad set of clients on high-scale problems, backed by a global organisation that is investing heavily in Azure, Databricks, and applied AI. You will primarily work on Azure and Databricks (Spark, Delta Lake, Unity Catalog), shipping modern ELT/ETL, streaming and batch data products, and ML/AI pipelines that operate at serious scale across water, transport, energy, and more.
Responsibilities:
- Design & build robust data platforms and pipelines on Azure and Databricks (batch + streaming) using Python/SQL, Spark, Delta Lake, and Data Lakehouse patterns.
- Develop AI-enabling foundations: feature stores, ML-ready datasets, and automated model-serving pathways (MLflow, model registries, CI/CD).
- Own quality & reliability: testing (dbx/pytest), observability (metrics, logging, lineage), and cost/performance optimisation.
- Harden for enterprise: security-by-design, access patterns with Unity Catalog, data governance, and reproducible environments.
- Automate the boring stuff: IaC (Terraform/Bicep), CI/CD (Azure DevOps/GitHub Actions), and templated project scaffolding.
- Partner with clients: translate business problems into technical plans, run workshops, and present trade-offs with clarity.
- Ship value continuously: iterate, review, and release frequently; measure outcomes, not just outputs.
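To give a flavour of the day-to-day, here is a minimal sketch of the kind of batch pipeline described above: read raw files, apply light cleaning, and write a Delta table. All names (storage path, columns, catalog/schema/table) are illustrative assumptions rather than details from this posting.

```python
# Minimal batch ELT sketch for Databricks: read raw JSON, clean it,
# and write an idempotent Delta table. Every name below (paths,
# columns, catalog/schema/table) is a hypothetical placeholder.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # supplied automatically in Databricks notebooks

raw = (
    spark.read.format("json")
    .load("abfss://landing@examplestorage.dfs.core.windows.net/sensor-readings/")  # hypothetical path
)

cleaned = (
    raw.dropDuplicates(["reading_id"])                       # basic de-duplication
    .withColumn("reading_ts", F.to_timestamp("reading_ts"))  # normalise event time
    .filter(F.col("value").isNotNull())                      # drop unusable rows
)

(
    cleaned.write.format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")
    .saveAsTable("main.sensors.readings_bronze")  # hypothetical Unity Catalog table
)
```

In practice, a job like this would typically be scheduled as a Databricks Workflows task, covered by pytest-style tests, and released through the CI/CD and IaC tooling mentioned above.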
Qualifications:
- Proficiency in SQL and Python for building reliable data pipelines.
- Hands-on with Spark (preferably Databricks) and modern data modelling (e.g., Kimball/Inmon/Data Vault, lakehouse).
- Experience running on a cloud data platform (ideally Azure).
- Sound software delivery practices: Git, CI/CD, testing, Agile ways of working.
- Streaming/event-driven designs (Event Hubs, Kafka, Structured Streaming) - a minimal sketch follows this list.
- MPP data warehouses (Synapse, Snowflake, Redshift) and NoSQL stores (Cosmos DB).
- ML enablement: feature engineering at scale, MLflow, basic model lifecycle know-how.
- Infrastructure-as-code (Terraform/Bicep) and platform hardening.
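To make the streaming expectation concrete, here is a minimal Structured Streaming sketch, assuming a Kafka-compatible endpoint (Azure Event Hubs exposes one). The broker address, topic, checkpoint path, and table name are hypothetical, and real Event Hubs access would also need SASL authentication options.

```python
# Minimal streaming ingest sketch with Spark Structured Streaming:
# consume events from a Kafka-compatible endpoint and append them to
# a Delta table with checkpointing. All names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "example-ns.servicebus.windows.net:9093")  # hypothetical broker
    .option("subscribe", "telemetry")     # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

parsed = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),  # downstream jobs would parse this payload
    F.col("timestamp").alias("event_ts"),
)

query = (
    parsed.writeStream.format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/tmp/checkpoints/telemetry")  # hypothetical checkpoint path
    .toTable("main.streams.telemetry_raw")  # hypothetical Unity Catalog table; starts the query
)
```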
Culture & inclusion: Joining Jacobs connects you locally and globally. Our values stand on a foundation of safety, integrity, inclusion and belonging. We put people at the heart of our business, and we truly believe that by supporting one another through our culture of caring, we all succeed.
We value positive mental health and a sense of belonging for all employees. With safety and flexibility always top of mind, we’ve gone beyond traditional ways of working so you have the support, means and space to maximise your potential.
We aim to embed inclusion and belonging in everything we do. We know that if we are inclusive, we’re more connected and creative. We are committed to building vibrant communities within Jacobs, including through our Jacobs Employee Networks, Communities of Practice and our Find Your Community initiatives.
As a Disability Confident employer, we will interview disabled candidates who best meet the criteria. We welcome applications from candidates who are seeking flexible working and from those who may not meet all the listed requirements for a role.
We value collaboration and believe that in-person interactions are crucial for both our culture and client delivery. We empower employees with our hybrid working policy, allowing them to split their work week between Jacobs offices/projects and remote locations.
Data Engineer - Databricks / AI | Employer: Jacobs
Contact Details:
Jacobs Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer - Databricks / AI role
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, attend meetups, and connect with potential colleagues on LinkedIn. You never know who might have the inside scoop on job openings or can put in a good word for you.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your data projects, especially those involving Azure and Databricks. This gives you a chance to demonstrate your expertise and makes you stand out when chatting with recruiters.
✨Tip Number 3
Prepare for interviews by brushing up on common data engineering questions and scenarios. Practice explaining your thought process and how you tackle problems, especially around building data pipelines and using ML tools.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen, and it shows you’re genuinely interested in joining the team at Jacobs and making a difference in the world.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV reflects the skills and experiences that align with the Data Engineer role. Highlight your experience with Azure, Databricks, and any relevant data engineering projects you've worked on.
Craft a Compelling Cover Letter: Use your cover letter to tell us why you're passionate about data engineering and how you can contribute to our mission. Be specific about your experience with Python, SQL, and any AI solutions you've developed.
Showcase Your Projects: If you've worked on any notable projects, especially those involving data pipelines or machine learning, make sure to mention them. We love seeing real-world applications of your skills!
Apply Through Our Website: We encourage you to apply directly through our website for the best chance of getting noticed. It’s the easiest way for us to keep track of your application and ensure it reaches the right people.
How to prepare for a job interview at Jacobs
✨Know Your Tech Stack
Make sure you’re well-versed in Azure, Databricks, and the specific tools mentioned in the job description. Brush up on your Python and SQL skills, especially with PySpark and Delta Lake. Being able to discuss your experience with these technologies will show that you're ready to hit the ground running.
✨Showcase Your Problem-Solving Skills
Prepare to discuss how you've tackled complex data challenges in the past. Think of specific examples where you designed data platforms or pipelines that made a measurable impact. This will demonstrate your ability to translate business problems into technical solutions, which is key for this role.
✨Emphasise Collaboration
Since Jacobs values teamwork, be ready to talk about your experiences working in collaborative environments. Share examples of how you’ve partnered with clients or colleagues to achieve common goals, and highlight any workshops or presentations you’ve led.
✨Ask Insightful Questions
Prepare thoughtful questions about the company culture, team dynamics, and ongoing projects. This not only shows your interest in the role but also helps you gauge if the company aligns with your values, especially regarding inclusion and belonging.