At a Glance
- Tasks: Design and deploy scalable data processing solutions using Python and PySpark.
- Company: Join a forward-thinking tech company focused on innovation and collaboration.
- Benefits: Competitive salary, flexible working options, and opportunities for professional growth.
- Why this job: Make an impact by building robust data solutions in a dynamic environment.
- Qualifications: Experience in Python, PySpark, and familiarity with Azure cloud services required.
- Other info: Exciting career development opportunities in a fast-paced, supportive team.
The predicted salary is between £36,000 and £60,000 per year.
We are looking for a Python Data Engineer with strong hands-on experience in Behave-based unit testing, PySpark development, Delta Lake optimisation, and Azure cloud services. This role focuses on designing and deploying scalable data processing solutions in a containerised environment, emphasising maintainable, configurable, and test-driven code delivery.
Key Responsibilities
- Develop and maintain data ingestion, transformation, and validation pipelines using Python and PySpark.
- Implement unit and behaviour-driven testing with Behave, ensuring robust mocking and patching of dependencies.
- Design and maintain Delta Lake tables for optimised query performance, ACID compliance, and incremental data loads.
- Build and manage containerised environments using Docker for consistent development, testing, and deployment.
- Develop configurable, parameter-driven codebases to support modular and reusable data solutions.
- Integrate Azure services, including:
  - Azure Functions for serverless transformation logic
  - Azure Key Vault for secure credential management
  - Azure Blob Storage for data lake operations
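The responsibilities above centre on configurable, parameter-driven pipelines. As a rough sketch of that pattern (all names and paths here are hypothetical illustrations, not taken from the posting), a config-driven validation step in plain Python might look like this, with the PySpark/Delta specifics abstracted away so the example stays self-contained:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PipelineConfig:
    """Parameters that would normally come from a config file or Key Vault."""
    source_path: str
    target_table: str
    required_fields: tuple

def validate_records(records, config):
    """Split records into valid/invalid based on required fields.

    In a real PySpark job this would be a DataFrame filter; plain
    dicts keep the sketch runnable without a Spark cluster.
    """
    valid, invalid = [], []
    for rec in records:
        if all(rec.get(f) is not None for f in config.required_fields):
            valid.append(rec)
        else:
            invalid.append(rec)
    return valid, invalid

# Hypothetical configuration and sample rows for illustration only.
config = PipelineConfig(
    source_path="abfss://raw@example.dfs.core.windows.net/events",
    target_table="events_delta",
    required_fields=("id", "timestamp"),
)
rows = [{"id": 1, "timestamp": "2024-01-01"}, {"id": 2, "timestamp": None}]
good, bad = validate_records(rows, config)
```

Driving behaviour from a frozen config object like this is what makes the same codebase reusable and testable across environments, which is the "modular and reusable data solutions" point above.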
What We’re Looking For
- Proven experience in Python, PySpark, and Delta Lake.
- SC Cleared (current UK Security Check clearance)
- Strong knowledge of Behave for test-driven development.
- Experience with Docker and containerised deployments.
- Familiarity with Azure cloud services and data engineering best practices.
- Ability to deliver scalable, maintainable, and testable solutions in a fast-paced environment.
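Since the role stresses Behave with "robust mocking and patching of dependencies", it is worth knowing the patching pattern used inside step implementations. A minimal, runnable sketch using `unittest.mock` (the `fetch_blob` and `load_config` functions are hypothetical stand-ins, not part of any real Azure SDK):

```python
from unittest.mock import patch

def fetch_blob(container: str, name: str) -> bytes:
    """Stand-in for a real Azure Blob Storage call (would hit the network)."""
    raise RuntimeError("no network access in tests")

def load_config(container: str) -> dict:
    """Code under test: reads a blob and summarises it."""
    raw = fetch_blob(container, "config.json")
    return {"raw_bytes": len(raw)}

# In a Behave step you patch the dependency by dotted path instead of
# calling Azure; the same pattern works inside a @given/@when function.
with patch(f"{__name__}.fetch_blob", return_value=b'{"env": "test"}') as mock_fetch:
    cfg = load_config("raw-data")
```

The key point interviewers tend to probe is patching where the name is *looked up*, not where it is defined, so the dotted path matters.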
If you are interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now. If this job isn't quite right for you, but you are looking for a new position, please contact us for a confidential discussion about your career.
Data Engineer (Python) in London
Employer: hays-gcj-v4-pd-online
Contact Detail:
hays-gcj-v4-pd-online Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer (Python) role in London
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, attend meetups, and connect with other data engineers. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those involving Python, PySpark, and Azure. This gives potential employers a taste of what you can do and sets you apart from the crowd.
✨Tip Number 3
Prepare for interviews by brushing up on your technical skills and understanding the latest trends in data engineering. Practice common interview questions and be ready to discuss your experience with Behave, Docker, and Delta Lake.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen. Plus, we love seeing candidates who take the initiative to reach out directly.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with Python, PySpark, and Delta Lake. We want to see how your skills match the role, so don’t be shy about showcasing relevant projects or achievements!
Showcase Your Testing Skills: Since Behave-based unit testing is key for us, include any examples of how you've implemented test-driven development in your previous roles. This will show us you know your stuff when it comes to robust code delivery.
Highlight Your Azure Experience: If you've worked with Azure services like Functions, Key Vault, or Blob Storage, make sure to mention it! We’re keen on seeing how you’ve integrated these into your data solutions.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it’s super easy!
How to prepare for a job interview at hays-gcj-v4-pd-online
✨Know Your Tech Stack
Make sure you brush up on your Python, PySpark, and Delta Lake knowledge. Be ready to discuss specific projects where you've used these technologies, and think about how you can relate your experience to the role's requirements.
✨Showcase Your Testing Skills
Since Behave-based unit testing is crucial for this position, prepare examples of how you've implemented test-driven development in your past work. Be ready to explain your approach to mocking and patching dependencies.
✨Containerisation is Key
Familiarise yourself with Docker and containerised environments. You might be asked about your experience in managing these setups, so have a couple of examples ready that highlight your ability to create consistent development and deployment processes.
✨Azure Knowledge is Essential
As Azure services are part of the job, make sure you understand how Azure Functions, Key Vault, and Blob Storage work. Prepare to discuss how you've integrated these services into your data solutions in the past.