At a Glance
- Tasks: Build scalable data pipelines for cutting-edge genAI products and collaborate with cross-functional teams.
- Company: Join a dynamic tech company focused on innovation and inclusivity.
- Benefits: Enjoy healthcare perks, life insurance, and 25 days holiday plus more.
- Why this job: Make an impact in the exciting world of data engineering and AI.
- Qualifications: Mid-level data engineering experience with strong Python skills and real-time data expertise.
- Other info: Diverse and inclusive workplace with opportunities for mentorship and growth.
The predicted salary is between £30,000 and £50,000 per year.
We are looking for a skilled mid-level Data Engineer with a passion for building reliable and scalable data pipelines to power cutting-edge genAI products. The ideal person will have strong commercial experience in real-time data engineering and cloud technologies, and be able to apply this expertise to business problems to generate value. We currently work in an AWS, Snowflake, dbt, Looker, Python, Kinesis and Airflow stack and are building out our real-time data streaming capabilities using Kafka. You should be comfortable with these or comparable technologies.
As an individual contributor, you will take ownership of well-defined projects, collaborate with senior colleagues on architectural decisions, and contribute to improving data engineering standards, documentation, and team practices. The successful candidate will join our cross-functional development teams and actively participate in our agile delivery process. Our dynamic Data & AI team will also support you, and you will benefit from talking data with our other data engineers, data scientists, and ML and analytics engineers.
Responsibilities
- Contribute to our data engineering roadmap.
- Collaborate with senior data engineers on data architecture plans.
- Manage Kafka in production.
- Collaborate with cross-functional teams to develop and implement robust, scalable solutions.
- Support the elicitation and development of technical requirements.
- Build, maintain and improve data pipelines and self-service tooling to provide clean, efficient results.
- Develop automated tests and monitoring to ensure data quality and data pipeline reliability.
- Implement best practices in data governance through documentation, observability and controls.
- Use version control and contribute to code reviews.
- Support the adoption of tools and best practices across the team.
- Mentor junior colleagues where appropriate.
Essential
- Solid commercial experience in a mid-level data engineering role.
- Excellent production-grade Python skills.
- Previous experience with real-time data streaming platforms such as Kafka/Confluent/Google Cloud Pub/Sub.
- Experience handling and validating real-time data.
- Experience with stream processing frameworks such as Faust/Flink/Kafka Streams, or similar.
- Comfortable with database technologies such as Snowflake/PostgreSQL and NoSQL technologies such as Elasticsearch/MongoDB/Redis or similar.
- Proficient with ELT pipelines and the full data lifecycle, including managing data pipelines over time.
- Good communication skills and the ability to collaborate effectively with engineers, product managers and other internal stakeholders.
Desirable
- An understanding of JavaScript/TypeScript.
- An understanding of Docker.
- Experience with Terraform.
- Experience with EKS/Kubernetes.
- Experience developing APIs.
Studies have shown that women and people who are disabled, LGBTQ+, neurodiverse or from ethnic minority backgrounds are less likely to apply for jobs unless they meet every single qualification and criterion. We are committed to building a diverse, inclusive, and authentic workplace where everyone can be their best, so if you are excited about this role but your past experience doesn't align perfectly with every requirement in the job description, please apply anyway - you may just be the right candidate for this or other roles in our wider team.
Benefits
- Medicash healthcare scheme (reclaim costs for dental, physiotherapy, osteopathy and optical care).
- Life Insurance scheme.
- 25 days holiday, plus more.
Employer: Humara
Contact Details: Humara Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role in Brighton
✨Tip Number 1
Network like a pro! Reach out to your connections in the data engineering field, attend meetups, and engage with online communities. You never know who might have a lead on that perfect job or can give you insider info about a company.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those involving AWS, Kafka, or Python. This gives potential employers a tangible look at what you can do and sets you apart from the crowd.
✨Tip Number 3
Prepare for interviews by brushing up on your technical knowledge and soft skills. Practice common data engineering questions and be ready to discuss how you've tackled real-time data challenges in the past. Confidence is key!
✨Tip Number 4
Don't forget to apply through our website! We love seeing applications directly from candidates who are excited about joining our team. Plus, it shows you're genuinely interested in being part of our dynamic Data & AI crew.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV reflects the skills and experiences that match the Data Engineer role. Highlight your experience with AWS, Kafka, and Python, as these are key to what we're looking for!
Craft a Compelling Cover Letter: Use your cover letter to tell us why you're passionate about data engineering and how your background aligns with our tech stack. This is your chance to show off your personality and enthusiasm!
Showcase Your Projects: If you've worked on relevant projects, whether in a professional or personal capacity, make sure to mention them. We love seeing real-world applications of your skills, especially with data pipelines and streaming technologies.
Apply Through Our Website: We encourage you to apply directly through our website. It's the best way to ensure your application gets into the right hands and shows us you're serious about joining our team!
How to prepare for a job interview at Humara
✨Know Your Tech Stack
Familiarise yourself with the technologies mentioned in the job description, especially AWS, Snowflake, Kafka, and Python. Be ready to discuss your experience with these tools and how you've used them to solve real-world problems.
✨Showcase Your Problem-Solving Skills
Prepare examples of how you've tackled data engineering challenges in the past. Think about specific projects where you built or improved data pipelines, and be ready to explain your thought process and the impact of your work.
✨Collaboration is Key
Since this role involves working with cross-functional teams, be prepared to discuss how you've collaborated with others in previous roles. Highlight any experiences where you worked closely with data scientists or product managers to deliver successful projects.
✨Ask Insightful Questions
At the end of the interview, have a few thoughtful questions ready. Inquire about the team's current projects, their approach to data governance, or how they measure the success of their data pipelines. This shows your genuine interest in the role and the company.