At a Glance
- Tasks: Design and build data pipelines for AI chat assistants and custom models.
- Company: Join ABS Consulting's innovative AI Practice Team.
- Benefits: Competitive salary, flexible work options, and opportunities for professional growth.
- Other info: Collaborative environment with a focus on cutting-edge technology and career advancement.
- Why this job: Transform messy data into impactful AI solutions and drive real business results.
- Qualifications: Bachelor's degree in a related field and 6+ years of data engineering experience.
The predicted salary is between £60,000 and £80,000 per year.
ABS is seeking an exceptional Data Engineer to join us full-time on our Artificial Intelligence (AI) Practice Team. In this role, you will design and operate the data foundations that power AI chat assistants, custom AI models, and AI-driven process optimization for ABS Consulting clients. You will build robust pipelines that integrate structured and unstructured data, standardize and tag enterprise content, and enable scalable, low-latency retrieval for AI workloads. Working closely with AI engineers, consultants, and domain experts, you will turn messy real-world data into production-grade data assets that deliver measurable business impact.
What You Will Do
- Design, build, and maintain scalable ETL/ELT pipelines to ingest, clean, and transform structured and unstructured data for AI assistants and custom models.
- Integrate diverse knowledge repositories (documents, policies, procedures, standards, databases) into centralized data platforms that support retrieval-augmented generation (RAG) and search.
- Implement data standardization, normalization, and tagging pipelines to align content with enterprise taxonomies and ontologies.
- Collaborate with AI/ML engineers to productionize model‑ready datasets, feature stores, and embeddings for prediction, classification, and optimization use cases.
- Optimize data workflows for reliability, cost, and performance across batch and streaming workloads, including monitoring, alerting, and capacity planning.
- Establish and enforce data quality, lineage, and governance practices to ensure trustworthy inputs to AI systems and process‑automation solutions.
- Automate and templatize common data engineering patterns to accelerate delivery across multiple client engagements and industry domains.
- Partner with consultants and business stakeholders to translate process optimization and analytics requirements into robust, maintainable data solutions.
What You Will Need
Education and Experience
- Bachelor’s degree in Computer Science, Data Engineering, Information Systems, or a closely related technical field; Master’s degree preferred.
- 6+ years of professional data engineering experience designing, building, and operating production data solutions.
- Demonstrated experience working in data‑intensive environments (e.g., analytics platforms, AI/ML workloads, large‑scale content repositories, or enterprise data platforms).
- Hands‑on experience delivering solutions on at least one major cloud provider (AWS, Azure, or Google Cloud), including managed data and analytics services.
Knowledge, Skills, and Abilities
- Strong command of SQL and at least one programming language commonly used in data engineering (Python preferred) for building production‑grade data pipelines.
- Hands‑on experience with modern data processing frameworks and platforms (e.g., Spark, Databricks, Snowflake, BigQuery, Synapse, or similar).
- Proficiency with ETL/ELT orchestration tools and workflows (e.g., Airflow, dbt, Azure Data Factory, AWS Glue, or equivalent).
- Experience designing and operating data lakes/lakehouses and integrating multiple data sources (relational, NoSQL, files, APIs) into cohesive data models.
- Deep experience working with unstructured and semi‑structured data (documents, PDFs, JSON, logs), including content extraction, normalization, and metadata/tagging.
- Familiarity with AI/ML data patterns, including feature engineering, embeddings, vector databases, and retrieval‑augmented generation (RAG) pipelines.
- Strong understanding of data modeling, data quality, data governance, and lineage practices for regulated or compliance‑sensitive environments.
- Proficiency with cloud‑native data services (e.g., S3/ADLS/GCS, managed warehouses, streaming services like Kafka/Kinesis/Event Hubs).
- Solid grounding in software engineering best practices (version control, CI/CD, testing, code review) as applied to data engineering.
- Must hold valid right-to-work status in the UK.
Reporting Relationships
This role reports to a project manager and does not initially include direct reports.
Locations
Data Engineer - AI Practice Team in Warrington, Cheshire. Employer: American Bureau of Shipping
Contact Details:
American Bureau of Shipping Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer - AI Practice Team role in Warrington, Cheshire
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, attend meetups, and connect with people on LinkedIn. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your data engineering projects, especially those involving AI and cloud platforms. This gives potential employers a taste of what you can do and sets you apart from the crowd.
✨Tip Number 3
Prepare for interviews by brushing up on your technical skills and understanding the latest trends in data engineering and AI. Practice common interview questions and be ready to discuss your past projects in detail.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who are proactive about their job search!
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Data Engineer role. Highlight your experience with ETL/ELT pipelines and any cloud platforms you've worked with. We want to see how your skills align with what we do at ABS!
Showcase Your Projects: Include specific projects where you've designed or operated data solutions. If you've worked on AI-related projects, definitely mention those! We love seeing real-world applications of your skills.
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Explain why you're excited about the role and how you can contribute to our AI Practice Team. Let us know what makes you a great fit for ABS.
Apply Through Our Website: Don't forget to apply through our website! It’s the best way for us to receive your application and ensures you’re considered for the role. We can’t wait to hear from you!
How to prepare for a job interview at American Bureau of Shipping
✨Know Your Data Engineering Tools
Make sure you brush up on your knowledge of SQL, Python, and any data processing frameworks like Spark or Databricks. Be ready to discuss how you've used these tools in past projects, especially in building ETL/ELT pipelines.
✨Showcase Your Problem-Solving Skills
Prepare to share specific examples of how you've tackled messy data challenges. Highlight your experience with unstructured data and how you've turned it into production-grade assets that deliver real business impact.
✨Understand AI Workflows
Familiarise yourself with AI/ML data patterns, especially around feature engineering and retrieval-augmented generation (RAG). Being able to discuss how you’ve collaborated with AI engineers to productionise datasets will set you apart.
✨Ask Insightful Questions
Prepare thoughtful questions about the company's data governance practices and how they ensure data quality. This shows your interest in their processes and your commitment to maintaining high standards in data engineering.