At a Glance
- Tasks: Design and build scalable data pipelines for AI applications and manage cloud-native solutions.
- Company: Starcom, a global leader in communications planning and media with a supportive culture.
- Benefits: Flexible working, reflection days, family-friendly policies, and great local discounts.
- Other info: Join a recognised best workplace with excellent career growth opportunities.
- Why this job: Shape the future of AI while working with cutting-edge technologies and a dynamic team.
- Qualifications: 3+ years in data engineering, expertise in cloud platforms, and strong programming skills.
The predicted salary is between £48,000 and £84,000 per year.
Overview
This role presents an opportunity to engage deeply with MLOps, vector databases, and Retrieval-Augmented Generation (RAG) pipelines – skills that are in incredibly high demand. If you are passionate about shaping the future of AI and thrive on complex, high-impact challenges, we encourage you to apply.
Responsibilities
- Design and Build Scalable Data Pipelines: Architect, implement, and optimise robust, high-performance real-time and batch ETL pipelines to ingest, process, and transform massive datasets for LLMs and foundational AI models.
- Cloud-Native Innovation: Leverage your deep expertise across AWS, Azure, and/or GCP to build cloud-native data solutions, ensuring efficiency, scalability, and cost-effectiveness.
- Power Generative AI: Develop and manage specialised data flows for generative AI applications, including integrating with vector databases and constructing sophisticated RAG pipelines.
- Champion Data Governance & Ethical AI: Implement best practices for data quality, lineage, privacy, and security, ensuring our AI systems are developed and used responsibly and ethically.
- Tooling the Future: Get hands-on with cutting-edge technologies like Hugging Face, PyTorch, TensorFlow, Apache Spark, Apache Airflow, and other modern data and ML frameworks.
- Collaborate and Lead: Partner closely with ML Engineers, Data Scientists, and Researchers to understand their data needs, provide technical leadership, and translate complex requirements into actionable data strategies.
- Optimise and Operate: Monitor, troubleshoot, and continuously optimise data pipelines and infrastructure for peak performance and reliability in production environments.
What You’ll Bring
We are seeking a seasoned professional who is excited by the unique challenges of AI data.
Qualifications
What are we looking for?
Must-Have Skills
- Extensive Data Engineering Experience: 3+ years designing, building, and maintaining large-scale data pipelines and data warehousing solutions.
- Cloud Platform Mastery: Expert-level proficiency with at least one major cloud provider (GCP preferred, AWS, or Azure), including their data, compute, and storage services.
- Programming Prowess: Strong programming skills in Python and SQL.
- Big Data Ecosystem Expertise: Hands-on experience with Apache Spark, Kafka, and data orchestration tools such as Apache Airflow or Prefect.
- ML Data Acumen: Solid understanding of data requirements for machine learning models, including feature engineering, data validation, and dataset versioning.
- Vector Database Experience: Practical experience with vector databases (e.g., Pinecone, Milvus, Chroma) for embedding storage and retrieval.
- Generative AI Familiarity: Understanding of data paradigms for LLMs, RAG architectures, and how data pipelines support fine-tuning or pre-training.
- MLOps Principles: Familiarity with MLOps best practices for deploying and managing ML models in production.
- Data Governance & Ethics: Experience implementing data governance frameworks, ensuring data quality, privacy, and compliance, with awareness of ethical AI considerations.
Bonus Points If You Have
- Direct experience with the Hugging Face ecosystem, PyTorch, or TensorFlow for data preparation in an ML context.
- Experience with real-time data streaming architectures.
- Familiarity with containerisation (Docker, Kubernetes).
- A Master’s or Ph.D. in Computer Science, Data Engineering, or a related quantitative field.
Additional Information
Starcom has fantastic benefits on offer to all of our employees.
In addition to the classics (Pension, Life Assurance, Private Medical and Income Protection Plans), we also offer:
- WORK YOUR WORLD: The opportunity to work anywhere in the world where there is a Publicis office, for up to 6 weeks a year.
- REFLECTION DAYS: Two additional days of paid leave to step away from your usual day-to-day work and create time to focus on your well-being and self-care.
- HELP@HAND BENEFITS: A 24/7 helpline to support you on a personal and professional level, with access to remote GPs, mental health support and CBT, plus well-being content and lifestyle coaching.
- FAMILY FRIENDLY POLICIES: We provide 26 weeks of full pay for the following family milestones: Maternity, Adoption, Surrogacy and Shared Parental Leave.
- FLEXIBLE WORKING, BANK HOLIDAY SWAP & BIRTHDAY DAY OFF: You are entitled to an additional day off for your birthday, from your first day of employment.
- GREAT LOCAL DISCOUNTS: This includes membership discounts with Soho Friends, and with local restaurants and retailers in Westfield White City and Television Centre.
Full details of our benefits will be shared when you join us.
Publicis Groupe operates a hybrid working pattern, with full-time employees being office-based three days during the working week.
We are supportive of all candidates and are committed to providing a fair assessment process. If you have any circumstances (such as neurodiversity, physical or mental impairments or a medical condition) that may affect your assessment, please inform your Talent Acquisition Partner. We will discuss possible adjustments to ensure fairness. Rest assured, disclosing this information will not impact your treatment in our process.
Please make sure you check out the Publicis Career Page, which showcases our Inclusive Benefits and our EAGs (Employee Action Groups).
Senior Data Engineer employer: Starcom
Contact Detail: Starcom Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Data Engineer role
✨Tip Number 1
Network like a pro! Reach out to people in the industry, attend meetups, and connect with potential colleagues on LinkedIn. You never know who might have the inside scoop on job openings or can put in a good word for you.
✨Tip Number 2
Show off your skills! Create a portfolio or GitHub repository showcasing your projects, especially those related to data engineering and AI. This gives you a chance to demonstrate your expertise beyond just a CV.
✨Tip Number 3
Prepare for interviews by practising common questions and scenarios specific to data engineering. Think about how you can relate your past experiences to the role at Starcom, especially around MLOps and cloud technologies.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you’re genuinely interested in joining our awesome team at Starcom.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Senior Data Engineer role. Highlight your experience with data pipelines, cloud platforms, and any relevant technologies like Apache Spark or TensorFlow. We want to see how your skills align with what we're looking for!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about AI and data engineering. Share specific examples of your past work that demonstrate your expertise and how you can contribute to our team at Starcom.
Showcase Your Projects: If you've worked on any cool projects related to MLOps or generative AI, make sure to mention them! We love seeing real-world applications of your skills, so include links or descriptions of your work that highlight your capabilities.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way to ensure your application gets into the right hands. Plus, you'll find all the details about the role and our amazing company culture there!
How to prepare for a job interview at Starcom
✨Know Your Data Engineering Stuff
Make sure you brush up on your data engineering skills, especially around building and maintaining large-scale data pipelines. Be ready to discuss your experience with cloud platforms like GCP, AWS, or Azure, and how you've used them in past projects.
✨Show Off Your Programming Skills
Prepare to demonstrate your programming prowess in Python and SQL. You might be asked to solve a coding problem or explain how you've used these languages in your previous roles, so have some examples ready!
✨Understand the AI Landscape
Since this role involves MLOps and generative AI, make sure you’re familiar with the latest trends and technologies in AI. Be prepared to discuss your experience with tools like Hugging Face, PyTorch, or TensorFlow, and how they relate to data pipelines.
✨Emphasise Collaboration and Leadership
This position requires working closely with ML Engineers and Data Scientists, so highlight your teamwork and leadership skills. Share examples of how you've collaborated on projects and led initiatives that improved data strategies or processes.