At a Glance
- Tasks: Design and build data pipelines for advanced AI agents in customer experience.
- Company: Join Zendesk, a leader in customer service technology with a focus on innovation.
- Benefits: Enjoy a hybrid work model, competitive salary, and opportunities for professional growth.
- Why this job: Shape the future of AI-driven insights and make a real impact on user experiences.
- Qualifications: 3+ years in data engineering, proficiency in SQL and Python, and strong collaboration skills.
- Other info: Dynamic team environment with a commitment to diversity and inclusion.
The predicted salary is between £30,000 and £50,000 per year.
Job Description
At Zendesk, we are on a mission to build the most advanced AI agents in CX. The Insights team, part of AI Agents Advanced, focuses on one of the product’s most critical pillars: delivering intelligent analytics, contextual insights, and decision-support tools that empower users to take meaningful action.
As a Data Engineer on this team, you’ll be central to designing and building the data pipelines, services, and infrastructure that power our product’s AI-driven insights. You’ll work at the intersection of product engineering, analytics, and AI — helping to create robust, reliable, and scalable data systems that support real-time and historical insights for our users.
You will collaborate closely with data scientists, analysts, backend and frontend engineers, and product managers to design data models, define integration patterns, and optimize data workflows. This role is ideal for someone who loves working with structured and unstructured data, thrives on solving complex data challenges, and wants to build the foundation for intelligent, customer-facing features.
Your work will directly shape how our customers access, interpret, and benefit from data-rich AI agents — enabling them to act with confidence and clarity.
What You Bring to the Role
- 3+ years of experience designing and implementing data pipelines and systems in a production environment.
- Proficiency with SQL, dbt, and at least one general-purpose programming language such as Python.
- Experience with batch and stream processing frameworks (e.g., Apache Flink, Apache Spark, Apache Beam, or equivalent).
- Experience with orchestration tools (e.g., Apache Airflow).
- Familiarity with event-driven data architectures and messaging systems like Pub/Sub, Kafka, or similar.
- Strong understanding of data modeling and database design, both relational and NoSQL.
- Experience building and maintaining ETL/ELT workflows that are scalable, testable, and observable.
- A product mindset — you care about the quality, usability, and impact of the data you work with.
- Strong communication and collaboration skills — you enjoy solving problems with others and proactively share your expertise.
- Curiosity, humility, and a drive for continuous learning — you seek feedback and growth, and help others do the same.
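To illustrate what "scalable, testable, and observable" ETL/ELT code can mean in practice, here is a minimal sketch in Python. All names (`TicketEvent`, `transform`, the field names) are hypothetical examples, not taken from Zendesk's codebase: the point is that a pure, side-effect-free transform step can be unit-tested independently of any orchestrator or warehouse.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass(frozen=True)
class TicketEvent:
    """A typed record produced by the transform step (hypothetical schema)."""
    ticket_id: str
    status: str
    occurred_at: datetime


def transform(raw_rows: list[dict]) -> list[TicketEvent]:
    """Parse raw event dicts into typed records, dropping malformed rows.

    Keeping the transform pure (no I/O) makes this pipeline step easy to
    unit-test and safe to re-run idempotently from an orchestrator such
    as Airflow.
    """
    events = []
    for row in raw_rows:
        try:
            events.append(TicketEvent(
                ticket_id=str(row["ticket_id"]),
                status=row["status"].lower(),
                occurred_at=datetime.fromisoformat(row["occurred_at"]),
            ))
        except (KeyError, ValueError, AttributeError):
            # An observable pipeline would also count and log dropped rows.
            continue
    return events
```

A candidate discussing this in an interview might note the design choice: separating extraction and loading (I/O) from transformation (pure logic) is what makes each stage independently testable.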
A Big Plus If You
- Have experience working with cloud-based data platforms (GCP or AWS preferred).
- Are familiar with Looker or other analytics/BI tools.
- Have worked with feature stores or supported ML workflows with production-ready data pipelines.
- Understand CI/CD best practices and infrastructure-as-code tools like Terraform.
- Are comfortable navigating large-scale distributed systems and production debugging.
How We Measure Success in This Role
- You deliver clean, scalable, and reliable data solutions that product and AI teams can build on.
- You write well-tested, well-documented code and continuously improve the performance and reliability of our data systems.
- You actively participate in architecture discussions and help define data standards, schemas, and contracts.
- You collaborate closely across disciplines and contribute meaningfully to planning, reviews, and team goals.
- You grow over time — increasing your technical scope, deepening your understanding of the product, and supporting others through knowledge sharing.
Our Tech Stack
- Languages: Python, TypeScript, SQL
- Data Engineering: dbt, Airflow, BigQuery, Kafka, Pub/Sub, Astronomer
- Storage: BigQuery, MongoDB, Snowflake
- Infrastructure: GCP, AWS, Kubernetes, Terraform, ArgoCD
- Observability: Sentry, Datadog
- BI: Looker
Interview Process
We aim to make our hiring process clear and transparent:
- Intro chat with Talent Partner – 30 minutes
- Interview with Hiring Manager – 45 minutes
- Take-home assignment
- Technical interview (task follow-up & role-related) with two engineers – 60 minutes
- Bar raiser interview with Hiring Manager and Senior Leadership – 45 minutes
Hybrid: In this role, our hybrid experience is designed at the team level to give you a rich onsite experience packed with connection, collaboration, learning, and celebration – while also giving you flexibility to work remotely for part of the week. This role requires attendance at our local office for part of the week; the specific in-office schedule is determined by the hiring manager.
The intelligent heart of customer experience
Zendesk software was built to bring a sense of calm to the chaotic world of customer service. Today we power billions of conversations with brands you know and love.
Zendesk believes in offering our people a fulfilling and inclusive experience. Our hybrid way of working enables us to purposefully come together in person, at one of our many Zendesk offices around the world, to connect, collaborate and learn, whilst also giving our people the flexibility to work remotely for part of the week.
As part of our commitment to fairness and transparency, we inform all applicants that artificial intelligence (AI) or automated decision systems may be used to screen or evaluate applications for this position, in accordance with Company guidelines and applicable law.
Zendesk is an equal opportunity employer, and we’re proud of our ongoing efforts to foster global diversity, equity, & inclusion in the workplace. Individuals seeking employment and employees at Zendesk are considered without regard to race, color, religion, national origin, age, sex, gender, gender identity, gender expression, sexual orientation, marital status, medical condition, ancestry, disability, military or veteran status, or any other characteristic protected by applicable law. We are an AA/EEO/Veterans/Disabled employer. If you are based in the United States and would like more information about your EEO rights under the law, please click here.
Zendesk endeavors to make reasonable accommodations for applicants with disabilities and disabled veterans pursuant to applicable federal and state law. If you are an individual with a disability and require a reasonable accommodation to submit this application, complete any pre-employment testing, or otherwise participate in the employee selection process, please send an e-mail to peopleandplaces@zendesk.com with your specific accommodation request.
Employer: Zendesk
Contact: Zendesk Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land Data Engineer II - AI Agents
✨Tip Number 1
Network like a pro! Reach out to folks in your industry on LinkedIn or at local meetups. A friendly chat can lead to opportunities that aren’t even advertised yet.
✨Tip Number 2
Prepare for those interviews! Research the company and practice common questions. We want you to feel confident and ready to showcase your skills, especially in data engineering.
✨Tip Number 3
Show off your projects! Whether it’s on GitHub or a personal website, having a portfolio of your work can really set you apart. It’s a great way to demonstrate your experience with data pipelines and analytics.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who are proactive about their job search.
Some tips for your application 🫡
Tailor Your Application: Make sure to customise your CV and cover letter to highlight your experience with data pipelines, SQL, and any relevant programming languages. We want to see how your skills align with the role of Data Engineer II!
Showcase Your Projects: Include examples of past projects where you've designed and implemented data systems. We love seeing real-world applications of your skills, especially if they relate to AI or analytics!
Be Clear and Concise: When writing your application, keep it straightforward. Use clear language to describe your experience and achievements. We appreciate a well-structured application that gets straight to the point!
Apply Through Our Website: Don’t forget to submit your application through our website! It’s the best way for us to receive your details and ensure you’re considered for the role. We can’t wait to hear from you!
How to prepare for a job interview at Zendesk
✨Know Your Tech Stack
Familiarise yourself with the technologies mentioned in the job description, like Python, SQL, and Apache Airflow. Be ready to discuss your experience with these tools and how you've used them to build data pipelines or solve complex data challenges.
✨Showcase Your Collaboration Skills
Since this role involves working closely with data scientists, analysts, and engineers, prepare examples of how you've successfully collaborated in the past. Highlight your communication skills and how you’ve contributed to team goals.
✨Prepare for Technical Questions
Expect technical questions related to data modelling, ETL/ELT workflows, and cloud platforms. Brush up on your knowledge of event-driven architectures and be ready to explain your thought process when tackling data-related problems.
✨Demonstrate a Product Mindset
Be prepared to discuss how you ensure the quality and usability of the data you work with. Share examples of how your work has had a direct impact on user experience or product features, showing that you care about the end result.