At a Glance
- Tasks: Design and implement scalable data pipelines for a major offshore wind project.
- Company: Join a leading company in renewable energy and make a difference.
- Benefits: Competitive salary, flexible working options, and opportunities for professional growth.
- Other info: Collaborative environment with a focus on sustainability and cutting-edge technology.
- Why this job: Be at the forefront of renewable energy and contribute to innovative data solutions.
- Qualifications: Master’s degree in a relevant field and 4-7 years of data engineering experience.
The predicted salary is between £60,000 and £80,000 per year.
Data Engineer for a major offshore wind project in the United Kingdom.
Responsibilities:
- Design and implement scalable ingestion pipelines from multiple source systems including internal business data sources.
- Ensure reliable, automated, and monitored data flows into the Bronze layer of the Medallion architecture.
- Work within the client’s existing security framework to establish compliant connectivity to operational data sources.
- Build and maintain Silver and Gold layer transformations in Databricks using Python and SQL.
- Onboard datasets into Unity Catalog, ensuring proper governance, lineage, and discoverability.
- Support the ML/Data Scientist in preparing clean, structured datasets for anomaly detection and asset performance modelling.
- Contribute to technical documentation and ensure pipelines are maintainable and transferable.
- Stay current on Databricks and Azure platform developments relevant to the stack.
- Support the Digital & AI Strategy Manager in assessing feasibility of new data source integrations as the roadmap evolves.
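The Bronze/Silver/Gold flow in the responsibilities above can be sketched as follows. This is a minimal pure-Python illustration of the layering idea only — on Databricks these steps would be PySpark/SQL jobs writing Delta tables, and the field names (`turbine_id`, `power_kw`) are hypothetical.

```python
# Medallion pattern sketch: Bronze keeps raw payloads verbatim,
# Silver parses/types and drops malformed rows, Gold aggregates
# into a business-ready view. Field names are illustrative only.
from datetime import datetime, timezone

def to_bronze(raw_records):
    """Bronze: store raw payloads unchanged, stamped with ingestion time."""
    ts = datetime.now(timezone.utc).isoformat()
    return [{"raw": r, "ingested_at": ts} for r in raw_records]

def to_silver(bronze_records):
    """Silver: parse and type fields, skipping rows that fail validation."""
    silver = []
    for rec in bronze_records:
        raw = rec["raw"]
        try:
            silver.append({
                "turbine_id": str(raw["turbine_id"]),
                "power_kw": float(raw["power_kw"]),
            })
        except (KeyError, TypeError, ValueError):
            continue  # in production, quarantine rather than silently drop
    return silver

def to_gold(silver_records):
    """Gold: aggregate to mean power per turbine for downstream consumers."""
    by_turbine = {}
    for rec in silver_records:
        by_turbine.setdefault(rec["turbine_id"], []).append(rec["power_kw"])
    return {tid: sum(vals) / len(vals) for tid, vals in by_turbine.items()}

raw = [
    {"turbine_id": "T01", "power_kw": "3200"},
    {"turbine_id": "T01", "power_kw": "2800"},
    {"turbine_id": "T02", "power_kw": None},  # malformed, dropped at Silver
]
gold = to_gold(to_silver(to_bronze(raw)))
print(gold)  # {'T01': 3000.0}
```

The key design point the layering enforces: Bronze is append-only and reprocessable, so Silver and Gold can be rebuilt from it whenever cleaning or aggregation logic changes.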
Experience:
- Master’s degree in Computer Science, Data Engineering, Software Engineering, or a related technical field.
- Professional certifications in Azure and/or Databricks preferred.
- Training or background in energy systems, renewable energy, offshore wind or BESS technologies is a strong plus.
- 4-7 years of hands‑on data engineering experience in a cloud environment.
- Demonstrated experience delivering production pipelines on Databricks and Azure (ADLS Gen2, ADF or equivalent).
- Proven ability to implement Medallion architecture or equivalent layered data modelling patterns.
- Experience with REST API ingestion and integration of business systems (ERP, finance tools).
- Experience in a contractor or project‑based delivery model preferred.
- Exposure to OT/SCADA environments or energy sector data.
- Exposure to MLOps workflows or collaboration with data science teams.
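The REST API ingestion experience mentioned above typically means handling cursor-paginated endpoints. A minimal sketch of that control flow follows — the page shape (`items`/`next` keys) is an assumption, not any specific vendor's API, and the stub fetcher stands in for a real authenticated HTTP client.

```python
# Hedged sketch of cursor-paginated REST ingestion. The response shape
# {"items": [...], "next": cursor_or_None} is a hypothetical convention.

def ingest_paginated(fetch_page, max_pages=100):
    """Pull all records from a cursor-paginated source.

    `fetch_page(cursor)` returns a dict with "items" (list of records)
    and "next" (the next cursor, or None when exhausted).
    """
    records, cursor = [], None
    for _ in range(max_pages):  # hard cap guards against runaway loops
        page = fetch_page(cursor)
        records.extend(page["items"])
        cursor = page.get("next")
        if cursor is None:
            break
    return records

# In production, fetch_page would wrap an HTTP client with auth and
# retries; here a stub lets the pagination logic run offline.
pages = {
    None: {"items": [1, 2], "next": "p2"},
    "p2": {"items": [3], "next": None},
}
print(ingest_paginated(lambda c: pages[c]))  # [1, 2, 3]
```

Keeping the fetch callable separate from the pagination loop also makes the loop unit-testable without network access, which matters for the maintainable, transferable pipelines the role calls for.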
Data Engineer employer: Taylor Hopkinson | Powered by Brunel
Contact Detail:
Taylor Hopkinson | Powered by Brunel Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role
✨Network Like a Pro
Get out there and connect with folks in the industry! Attend meetups, webinars, or even local events related to data engineering. The more people you know, the better your chances of landing that dream job.
✨Show Off Your Skills
Don’t just talk about your experience; showcase it! Create a portfolio of projects you've worked on, especially those involving Databricks and Azure. This will give potential employers a clear view of what you can bring to the table.
✨Ace the Interview
Prepare for technical interviews by brushing up on your Python and SQL skills. Be ready to discuss your past projects and how you’ve implemented scalable data pipelines. Confidence is key, so practice makes perfect!
✨Apply Through Our Website
We’ve got some fantastic opportunities waiting for you! Make sure to apply through our website to get the best chance at landing a role that fits your skills and aspirations. Don’t miss out!
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Data Engineer role. Highlight your experience with data pipelines, Databricks, and Azure. We want to see how your skills match what we're looking for!
Showcase Relevant Experience: When writing your application, focus on your hands-on experience in cloud environments and any projects related to offshore wind or renewable energy. This will help us see your fit for the role.
Be Clear and Concise: Keep your application clear and to the point. Use bullet points where possible to make it easy for us to read through your qualifications and experiences quickly.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role without any hiccups!
How to prepare for a job interview at Taylor Hopkinson | Powered by Brunel
✨Know Your Tech Stack
Make sure you’re well-versed in the technologies mentioned in the job description, especially Databricks and Azure. Brush up on your Python and SQL skills, as you'll likely be asked to demonstrate your knowledge of these during the interview.
✨Understand the Medallion Architecture
Familiarise yourself with the Medallion architecture and how it applies to data ingestion and transformation. Be prepared to discuss how you've implemented similar architectures in past projects, as this will show your practical experience.
✨Showcase Your Problem-Solving Skills
Prepare examples of how you've tackled challenges in data engineering, particularly in cloud environments. Think about specific instances where you had to ensure data reliability or compliance with security frameworks, as these are key aspects of the role.
✨Ask Insightful Questions
Come prepared with questions that show your interest in the company’s projects and future developments. Inquire about their current data sources, the challenges they face, or how they envision the evolution of their Digital & AI strategy. This demonstrates your enthusiasm and forward-thinking mindset.