Pinewood.AI is looking for a skilled and experienced Data Engineer to help shape the future of data solutions in the automotive technology space.
In this role, you'll be instrumental in developing scalable, modern data infrastructure that supports our global Automotive Intelligence Platform: the system that powers thousands of dealerships worldwide.
You'll take ownership of the full data lifecycle, from extracting and transforming data to optimising performance and developing secure, scalable storage solutions.
If you're passionate about building clean, robust cloud-based pipelines, working with large and complex datasets, and applying the latest technologies (including AI features), this is the role for you.
Key Responsibilities:
Build and maintain a unified data platform that ingests and processes global data from across our Automotive Intelligence Platform.
Develop scalable and reusable data solutions with a strong emphasis on componentisation.
Optimise the performance and reliability of data pipelines, ensuring fast access to large datasets.
Collaborate with the data visualisation team to align back-end processing with front-end reporting.
Design and implement secure, flexible data access models for internal and external users.
Use bespoke pipelines and Azure Data Factory to incorporate third-party external data sources.
Establish unit and integration testing practices and support CI/CD processes for data pipelines.
Identify and resolve bottlenecks or performance issues across the data stack.
Investigate and address platform support tickets related to data.
Enable multi-language capabilities within the platform's data presentation layer.
Explore and integrate AI capabilities to boost data productivity and accuracy.
Skills and Experience:
Strong understanding of data engineering concepts, including:
Lakehouse architecture and Delta Lake
Data warehousing
ETL/ELT pipelines
Change Data Capture (CDC) and change tracking
Stream processing
Database design
Machine Learning and AI integration
Hands-on experience with:
Azure Databricks
Python / PySpark
Microsoft SQL Server
Azure Blob Storage
Parquet file formats
Azure Data Factory
Proven experience building secure, scalable, and high-performing data pipelines.
Ability to solve complex technical problems and work collaboratively across teams.
Excellent communication and documentation skills.
Self-motivated with a proactive approach to continuous improvement.
Desirable Experience:
Experience in the retail sector, especially automotive retail.
Background in delivering large-scale, enterprise-grade data solutions.
Familiarity with Agile methodologies and working in cross-functional teams.
Benefits:
Competitive salary based on experience
Bonus scheme
Share scheme
Hybrid working
25 days holiday plus all UK bank holidays
Contact Details:
Internetwork Expert Recruiting Team