At a Glance
- Tasks: Support and develop data pipelines using Azure Databricks, SQL, and Python.
- Company: Join Correla, a forward-thinking company transforming the energy market.
- Benefits: Enjoy uncapped annual leave, private healthcare, and a generous pension contribution.
- Other info: Diverse and inclusive workplace with excellent career growth opportunities.
- Why this job: Make a real impact on data-driven projects for a net-zero future.
- Qualifications: Experience with Azure Databricks, Python, and strong SQL skills required.
The predicted salary is £40,000 per year.
About Us
In March 2021, Correla was created as an independently owned business to bring in private investment to fuel innovation at the centre of the energy market and beyond. Correla is derived from correlation, because we're all about exploring and enhancing relationships between data, people, and processes. Our SaaS products and Managed Service solutions combine to power industry innovation, simplify an increasingly complex market, and deliver cost and operational efficiencies. Our goal is to support industry transformation, move to a net-zero future, and positively impact the end-consumer.
About the Role
- Support the development, maintenance, and monitoring of data pipelines across Azure Databricks using SQL and Python (PySpark)
- Identify and troubleshoot pipeline issues, including performance, failures, and data quality concerns
- Apply data quality checks, validation routines, and basic testing to ensure reliable data output
- Use data modelling principles (e.g. dimensional modelling) to support scalable reporting and analytics
- Collaborate with BI teams and stakeholders to deliver accurate data for Power BI reporting
- Contribute to onboarding new data sources and improving data pipeline architecture and stability
- Follow best practices in documentation, code development, version control, and data governance
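For a flavour of the data quality work described above, here is a minimal, framework-agnostic sketch in plain Python. The column names (`meter_id`, `reading_kwh`) and rules are hypothetical; in the role itself, checks like these would typically run inside a PySpark pipeline on Azure Databricks.

```python
def run_quality_checks(rows):
    """Return (row_index, issue) tuples for rows failing basic checks."""
    issues = []
    for i, row in enumerate(rows):
        # Completeness check: the key field must be present.
        if row.get("meter_id") is None:
            issues.append((i, "missing meter_id"))
        # Type and range checks on the measured value.
        if not isinstance(row.get("reading_kwh"), (int, float)):
            issues.append((i, "non-numeric reading_kwh"))
        elif row["reading_kwh"] < 0:
            issues.append((i, "negative reading_kwh"))
    return issues

sample = [
    {"meter_id": "M1", "reading_kwh": 12.5},
    {"meter_id": None, "reading_kwh": 3.0},
    {"meter_id": "M3", "reading_kwh": -1.0},
]
print(run_quality_checks(sample))
# [(1, 'missing meter_id'), (2, 'negative reading_kwh')]
```

In practice the same idea scales up as column-level expectations on a DataFrame rather than a row loop, but the principle (explicit, testable rules that flag bad records before delivery) is the same.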
About you
- Working knowledge of Azure Databricks and Python (with exposure to PySpark) for supporting data pipelines
- Strong SQL skills for querying, validating and troubleshooting data
- Understanding of data pipeline architecture (ingestion, transformation, delivery)
- Familiarity with data modelling concepts (e.g. Dimensional modelling, fact/dimension tables)
- Ability to apply data quality checks, validation and basic testing
- Structured approach to troubleshooting, with clear documentation and effective stakeholder collaboration
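The dimensional-modelling and SQL skills listed above can be illustrated with a toy star schema: one fact table joined to a dimension table. This sketch uses Python's built-in `sqlite3` purely so it runs anywhere; the table and column names are invented for the example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- One dimension table, one fact table (a minimal star schema).
    CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_usage (customer_key INTEGER, usage_kwh REAL);
    INSERT INTO dim_customer VALUES (1, 'Acme'), (2, 'Bolt');
    INSERT INTO fact_usage VALUES (1, 10.0), (1, 5.0), (2, 7.5);
""")

# Typical reporting query: aggregate the fact, label via the dimension.
rows = con.execute("""
    SELECT d.name, SUM(f.usage_kwh) AS total_kwh
    FROM fact_usage f
    JOIN dim_customer d ON d.customer_key = f.customer_key
    GROUP BY d.name
    ORDER BY d.name
""").fetchall()
print(rows)  # [('Acme', 15.0), ('Bolt', 7.5)]
```

The same fact/dimension shape is what makes Power BI reporting scalable: facts hold the measures, dimensions hold the descriptive attributes you slice by.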
What We Offer
- Choose the location for your working day
- Uncapped annual leave
- 6-12% Pension Contribution
- Private Healthcare
- 26 weeks' fully paid equal parental leave
- Wellbeing Services
- And more!
EEO Statement
At Correla, we are committed to working towards being a more diverse and inclusive workplace where our people can truly be themselves. We recognise the benefits of having talented people from a range of backgrounds and cultures who bring different perspectives, life experiences and diversity of thinking. Our aim is to attract and retain the very best diverse talent to help create an exciting, innovative, and successful business that enables us to deliver an exceptional experience for our customers. We would therefore like to encourage applications from people with varied skillsets and experience and from different backgrounds and sectors to help shape our future. Correla is an Equal Opportunities Employer. We believe in equality of opportunity regardless of race or racial group, ancestry, place of origin, ethnicity, sex, sexual orientation, gender identity, gender expression, gender reassignment, age, record of offences, marital/civil partnership status, family status, pregnancy, maternity and paternity, religion/belief or disability. We promise that your opportunity for employment with us depends solely on your qualifications and relevant experience.
Associate Data Engineer employer: Correla
Contact: Correla Recruiting Team
StudySmarter Expert Advice
We think this is how you could land the Associate Data Engineer role
✨ Network Like a Pro
Get out there and connect with people in the industry! Attend meetups, webinars, or even just chat with folks on LinkedIn. Building relationships can open doors that a CV just can't.
✨ Show Off Your Skills
Don't just talk about your experience; demonstrate it! Create a portfolio showcasing your projects, especially those involving Azure Databricks and Python. This will give potential employers a taste of what you can do.
✨ Ace the Interview
Prepare for common interview questions but also be ready to discuss specific scenarios where you've solved problems or improved processes. Use the STAR method (Situation, Task, Action, Result) to structure your answers.
✨ Apply Through Our Website
We encourage you to apply directly through our website. It shows you're genuinely interested in joining us at Correla and makes it easier for us to track your application!
Some tips for your application
Tailor Your CV: Make sure your CV is tailored to the Associate Data Engineer role. Highlight your experience with Azure Databricks, SQL, and Python, and don't forget to mention any relevant projects or achievements that showcase your skills.
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about data engineering and how your background aligns with our mission at Correla. Be genuine and let your personality come through!
Showcase Your Problem-Solving Skills: In your application, give examples of how you've identified and solved data pipeline issues in the past. We love candidates who can demonstrate a structured approach to troubleshooting and effective collaboration with stakeholders.
Apply Through Our Website: We encourage you to apply directly through our website. It's the best way for us to receive your application and ensures you're considered for the role. Plus, it shows you're keen on joining our team!
How to prepare for a job interview at Correla
✨ Know Your Tech
Make sure you brush up on your Azure Databricks and Python skills, especially PySpark. Be ready to discuss how you've used these technologies in past projects or coursework, as this will show your practical understanding of the tools they'll expect you to use.
✨ SQL Savvy
Since strong SQL skills are a must, practice writing queries that validate and troubleshoot data. You might be asked to solve a problem on the spot, so being comfortable with SQL syntax and functions will give you an edge.
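As practice for that kind of on-the-spot query, here is a small sketch (again using Python's built-in `sqlite3` so it runs anywhere; the `readings` table is hypothetical) that flags two classic data problems: NULL values and duplicate keys.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE readings (meter_id TEXT, read_date TEXT, kwh REAL);
    INSERT INTO readings VALUES
        ('M1', '2024-01-01', 10.0),
        ('M1', '2024-01-01', 10.0),   -- duplicate row
        ('M2', '2024-01-01', NULL);   -- missing value
""")

# Validation query 1: count missing measurements.
nulls = con.execute(
    "SELECT COUNT(*) FROM readings WHERE kwh IS NULL").fetchone()[0]

# Validation query 2: find keys that appear more than once.
dupes = con.execute("""
    SELECT meter_id, read_date, COUNT(*) AS n
    FROM readings
    GROUP BY meter_id, read_date
    HAVING COUNT(*) > 1
""").fetchall()

print(nulls, dupes)  # 1 [('M1', '2024-01-01', 2)]
```

Being able to write these two patterns (an `IS NULL` filter and a `GROUP BY ... HAVING COUNT(*) > 1`) from memory covers a surprising share of data-troubleshooting questions.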
✨ Data Pipeline Knowledge
Familiarise yourself with data pipeline architecture, including ingestion, transformation, and delivery. Be prepared to explain how you would approach troubleshooting issues and ensuring data quality, as this is crucial for the role.
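One way to rehearse explaining that architecture is to walk through a miniature pipeline. This sketch, with an invented in-memory CSV source, keeps the three stages named above as separate, testable functions:

```python
import csv
import io

# Hypothetical raw input; in a real pipeline this would be a landed file
# or an API extract rather than an inline string.
RAW = "meter_id,kwh\nM1,10.5\nM2,8.0\n"

def ingest(source: str):
    """Ingestion: parse raw CSV into a list of dicts."""
    return list(csv.DictReader(io.StringIO(source)))

def transform(rows):
    """Transformation: cast string fields to proper types."""
    return [{"meter_id": r["meter_id"], "kwh": float(r["kwh"])} for r in rows]

def deliver(rows):
    """Delivery: here just a total; in practice, a curated table or report."""
    return sum(r["kwh"] for r in rows)

print(deliver(transform(ingest(RAW))))  # 18.5
```

Keeping the stages separate is also the troubleshooting story interviewers want to hear: when a number looks wrong downstream, you can test each stage in isolation to find where the data went bad.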
✨ Collaborative Spirit
Highlight your experience working with BI teams and stakeholders. Share examples of how you've effectively communicated technical concepts to non-technical team members, as collaboration is key in this role.