At a Glance
- Tasks: Design and optimise data pipelines using Databricks for scalable solutions.
- Company: Join a dynamic consulting firm focused on innovative data and AI solutions.
- Benefits: Enjoy competitive pay, performance incentives, and opportunities for professional growth.
- Why this job: Make a real impact in AI-driven consulting while collaborating with talented teams.
- Qualifications: 5+ years in data engineering, strong skills in Databricks, Python, and SQL required.
- Other info: Ideal for those passionate about data and eager to mentor junior engineers.
The predicted salary is between £43,200 and £72,000 per year.
We are an ambitious, newly established consulting firm focused on delivering cutting-edge solutions in data and AI. Our mission is to empower organisations to unlock the full potential of their data by leveraging platforms like Databricks alongside other emerging technologies.
As a Data Engineer (Databricks), you will be responsible for designing, implementing, and optimising large-scale data processing systems. You will work closely with clients, data scientists, and solution architects to ensure efficient data pipelines, reliable infrastructure, and scalable analytics capabilities. This role requires strong technical expertise, problem-solving skills, and the ability to work in a dynamic, client-facing environment.
Your Impact:
- Develop, implement, and optimise data pipelines and ETL processes on Databricks.
- Work closely with clients to understand business requirements and translate them into technical solutions.
- Design and implement scalable, high-performance data architectures.
- Ensure data integrity, quality, and security through robust engineering practices.
- Monitor, troubleshoot, and optimise data workflows for efficiency and cost-effectiveness.
- Collaborate with data scientists and analysts to facilitate machine learning and analytical solutions.
- Contribute to best practices, coding standards, and documentation to improve data engineering processes.
- Mentor junior engineers and support knowledge-sharing across teams.
Key Responsibilities:
- Design, build, and maintain scalable data pipelines using Databricks, Spark, and Delta Lake.
- Develop efficient ETL/ELT workflows to process large volumes of structured and unstructured data.
- Implement data governance, security, and compliance standards.
- Work with cloud platforms such as AWS, Azure, or GCP to manage data storage and processing.
- Collaborate with cross-functional teams to enhance data accessibility and usability.
- Optimise data warehouse and lakehouse architectures for performance and cost efficiency.
- Maintain and improve CI/CD processes for data pipeline deployment and monitoring.
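As an illustration of the extract-transform-load pattern the responsibilities above describe, here is a minimal sketch in plain Python. In a Databricks pipeline the same shape would use Spark DataFrames and Delta tables; the function names (`extract`, `transform`, `load`) and the sample records are illustrative only, not drawn from the posting.

```python
# Minimal ETL sketch: extract raw records, transform (validate and
# normalise), and load into a target store. Plain Python stand-ins are
# used so the example is self-contained and runnable.

def extract():
    # Stand-in for reading from a source system (files, a queue, an API).
    return [
        {"id": "1", "amount": "19.99", "country": "gb"},
        {"id": "2", "amount": "not-a-number", "country": "GB"},
        {"id": "3", "amount": "5.00", "country": "us"},
    ]

def transform(records):
    # Validate and normalise each record, dropping rows that fail checks.
    clean = []
    for row in records:
        try:
            amount = float(row["amount"])
        except ValueError:
            # Skipping (or quarantining) bad rows keeps the pipeline robust.
            continue
        clean.append({
            "id": int(row["id"]),
            "amount": amount,
            "country": row["country"].upper(),
        })
    return clean

def load(records, target):
    # Stand-in for writing to a warehouse table or Delta Lake path.
    target.extend(records)

if __name__ == "__main__":
    target = []
    load(transform(extract()), target)
    print(f"Loaded {len(target)} clean rows")
```

The same three stages map onto a Spark job: `extract` becomes a `spark.read`, `transform` a chain of DataFrame operations, and `load` a write to a Delta table.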
What We Are Looking For:
- 5+ years of experience in data engineering or related roles.
- Strong expertise in Databricks, Spark, Delta Lake, and cloud data platforms (AWS, Azure, or GCP).
- Proficiency in Python and SQL for data manipulation and transformation.
- Experience with ETL/ELT development and orchestration tools (e.g., Apache Airflow, dbt, Prefect).
- Knowledge of data modelling, data warehousing, and lakehouse architectures.
- Familiarity with DevOps practices, CI/CD pipelines, and infrastructure-as-code.
- Strong problem-solving skills and the ability to work in fast-paced environments.
- Excellent communication and stakeholder management skills.
Preferred Qualifications:
- Experience with machine learning data pipelines and MLOps practices.
- Knowledge of data streaming technologies such as Kafka or Kinesis.
- Familiarity with Terraform or similar infrastructure automation tools.
- Previous experience working in consulting or client-facing roles.
What We Offer:
- Competitive compensation, including performance-based incentives.
- Opportunities for professional growth and development in a fast-growing firm.
- A collaborative and supportive environment that values innovation, excellence, and client success.
If you’re passionate about data engineering and ready to make an impact in AI-driven consulting, we’d love to hear from you!
Data Engineer (Databricks) employer: Ethiq
Contact Details:
Ethiq Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer (Databricks) role
✨Tip Number 1
Familiarise yourself with Databricks and its ecosystem. Since this role heavily relies on Databricks, understanding its features, capabilities, and best practices will give you a significant edge during interviews.
✨Tip Number 2
Showcase your experience with cloud platforms like AWS, Azure, or GCP. Be prepared to discuss specific projects where you've implemented data solutions using these technologies, as this is crucial for the role.
✨Tip Number 3
Brush up on your Python and SQL skills. Since these are essential for data manipulation and transformation, being able to demonstrate your proficiency in these languages can set you apart from other candidates.
✨Tip Number 4
Prepare to discuss your problem-solving approach in dynamic environments. The role requires strong analytical skills, so having examples ready that illustrate how you've tackled complex data challenges will be beneficial.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights relevant experience in data engineering, particularly with Databricks, Spark, and cloud platforms. Use specific examples to demonstrate your skills in building scalable data pipelines and optimising ETL processes.
Craft a Compelling Cover Letter: Write a cover letter that showcases your passion for data engineering and your understanding of the role. Mention how your previous experiences align with the responsibilities outlined in the job description, especially your ability to work in client-facing environments.
Showcase Technical Skills: In your application, emphasise your technical expertise in Python, SQL, and any relevant tools like Apache Airflow or dbt. Providing examples of past projects where you implemented these skills can strengthen your application.
Highlight Problem-Solving Abilities: Demonstrate your problem-solving skills by including specific instances where you overcame challenges in data engineering. This could involve optimising data workflows or ensuring data integrity and security in previous roles.
How to prepare for a job interview at Ethiq
✨Showcase Your Technical Skills
Be prepared to discuss your experience with Databricks, Spark, and Delta Lake in detail. Bring examples of projects where you've designed and optimised data pipelines, and be ready to explain the challenges you faced and how you overcame them.
✨Understand the Business Context
Research the consulting firm and their clients' industries. Be ready to discuss how your technical solutions can address specific business needs and improve data accessibility and usability for their clients.
✨Demonstrate Problem-Solving Abilities
Expect to face scenario-based questions that test your problem-solving skills. Prepare to walk through your thought process on how you would tackle complex data challenges, ensuring you highlight your analytical approach and decision-making.
✨Emphasise Collaboration and Communication
Since this role involves working closely with clients and cross-functional teams, be sure to share examples of how you've successfully collaborated in the past. Highlight your communication skills and how you manage stakeholder expectations.