At a Glance
- Tasks: Build data pipelines and integrate AI for personalised language learning.
- Company: Join Babbel, a leader in innovative language education technology.
- Benefits: Enjoy 30 vacation days, flexible hours, and remote work options.
- Why this job: Be part of a dynamic team revolutionising learner experiences with cutting-edge tech.
- Qualifications: Expertise in Python and experience with data platforms like Databricks required.
- Other info: We celebrate diversity and encourage applications from all backgrounds.
The predicted salary is between £36,000 and £60,000 per year.
We are looking for a Data Engineer (full-time) to join our Dynamic Content team in Berlin.
About the team
The Dynamic Content Team at Babbel is on a mission to revolutionize the learner experience through world-class engineering, seamless content delivery, and cutting-edge AI. We believe in experimentation and continuous improvement, empowering our users with intelligent, personalized, and engaging language learning journeys.
Role responsibilities
- Engineer robust data pipelines using Python, Databricks, and Airflow to support dynamic content delivery and personalization at scale.
- Integrate and operationalize AI capabilities (with OpenAI and other LLMs), enabling smarter content recommendations and automation.
- Design and optimize data models for content management, learner analytics, and AI experimentation.
- Ensure data quality, reliability, and security across all stages of the data lifecycle.
- Automate platform operations and deployment workflows to enhance efficiency and repeatability.
- Collaborate with content creators, product managers, and data scientists to deliver impactful learner experiences.
- Champion best practices in data engineering, code quality, and documentation.
- Participate in knowledge sharing, learning sessions, and team rituals to foster team growth.
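To give a flavour of the pipeline work described above: data-quality responsibilities like these often begin as simple validation steps. Below is a minimal, hypothetical sketch in plain Python; the field names are illustrative only, not Babbel's actual schema.

```python
from typing import Iterable

# Hypothetical required fields for a learner-event record (illustrative only)
REQUIRED_FIELDS = {"user_id", "lesson_id", "completed_at"}

def split_valid_records(records: Iterable[dict]) -> tuple[list[dict], list[dict]]:
    """Separate records containing all required, non-empty fields from those that don't."""
    valid, invalid = [], []
    for record in records:
        if all(record.get(field) not in (None, "") for field in REQUIRED_FIELDS):
            valid.append(record)
        else:
            invalid.append(record)
    return valid, invalid
```

In a production pipeline, a check like this would typically run as an orchestrated task (e.g. in Airflow) before data is loaded downstream, with the invalid records routed to a quarantine table for review.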
Qualifications
- Expertise in Python for building robust, maintainable data solutions.
- Hands-on experience with Data Lake and Lakehouse platforms (preferably Databricks), including Apache Spark and open table formats like Delta Lake or Iceberg.
- Strong proficiency with Airflow for orchestrating complex data workflows.
- Experience deploying and operationalizing AI/ML models in production, including work with OpenAI APIs or similar LLMs.
- Solid understanding of cloud data architectures, especially on AWS.
- High standards in code, data quality, and engineering practices.
- Excellent communication skills in English, both written and spoken.
Nice to Have
- Experience with content management systems or personalization engines.
- Data engineering experience in a product or content-driven environment.
- Familiarity with dbt, Terraform, and CI/CD pipelines.
- A passion for language learning or educational technology.
- Experience working in fast-paced, cross-functional teams.
Perks at Babbel
- 30 vacation days, a 3-month sabbatical, and family/life counseling.
- Flexible working hours and remote options, including Jobbatical (up to 3 months inside the EU) or working from our fully equipped office.
- Internal learning opportunities and a yearly development budget.
- Free access to Babbel for language learning.
- Mobility benefits and a discounted Urban Sports Club membership.
- Participation in employee communities and social events.
Diversity and Inclusion
We welcome applications from everyone, especially those underrepresented in tech. We value skills, qualifications, and alignment with our values. Please include your pronouns, and let us know about any disabilities or needs so we can assist you during the application process.
How to Apply
Interested? Please submit your resume and cover letter through our application portal.
Address: Liftoff 23-35 2nd Floor, Great Titchfield St, London,…
Data Engineer (all genders) employer: TechBrains
Contact Detail:
TechBrains Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer (all genders) role
✨Tip Number 1
Familiarise yourself with the specific technologies mentioned in the job description, such as Python, Databricks, and Airflow. Having hands-on experience or projects that showcase your skills in these areas will give you a significant edge during interviews.
✨Tip Number 2
Engage with the data engineering community online. Join forums, attend webinars, or participate in discussions related to AI and data pipelines. This not only helps you stay updated but also allows you to network with professionals who might provide insights or referrals.
✨Tip Number 3
Prepare to discuss your previous experiences with data quality and security. Be ready to share specific examples of how you've ensured data integrity in past projects, as this is a key responsibility for the role.
✨Tip Number 4
Show your passion for language learning and educational technology. Research Babbel's mission and values, and be prepared to articulate how your interests align with their goals during the interview process.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your expertise in Python, data engineering, and any relevant experience with Databricks and Airflow. Use keywords from the job description to demonstrate your fit for the role.
Craft a Compelling Cover Letter: In your cover letter, express your passion for language learning and educational technology. Mention specific projects or experiences that showcase your ability to engineer robust data pipelines and work with AI capabilities.
Showcase Relevant Skills: Clearly outline your hands-on experience with cloud data architectures, particularly AWS, and your proficiency in orchestrating complex data workflows using Airflow. This will help you stand out as a strong candidate.
Highlight Collaboration Experience: Since the role involves collaboration with content creators and product managers, include examples of how you've successfully worked in cross-functional teams. This demonstrates your ability to contribute to impactful learner experiences.
How to prepare for a job interview at TechBrains
✨Showcase Your Python Skills
As a Data Engineer, your expertise in Python is crucial. Be prepared to discuss specific projects where you've built robust data solutions using Python. Highlight any challenges you faced and how you overcame them.
✨Familiarise Yourself with Databricks and Airflow
Since the role involves working with Databricks and Airflow, make sure you understand their functionalities. Be ready to explain how you've used these tools in past projects, particularly in orchestrating complex data workflows.
✨Discuss AI/ML Experience
The position requires operationalising AI capabilities. Prepare to talk about your experience with AI/ML models, especially if you've worked with OpenAI APIs or similar technologies. Share examples of how you've integrated these into data pipelines.
✨Emphasise Collaboration Skills
Collaboration is key in this role. Be ready to discuss how you've worked with cross-functional teams, such as content creators and product managers, to deliver impactful projects. Highlight your communication skills and any successful team experiences.