At a Glance
- Tasks: Design and implement data pipelines, ensuring quality and integrity while solving integration challenges.
- Company: Join a company revolutionising insurance for small and medium-sized businesses with advanced data intelligence.
- Benefits: Enjoy remote work flexibility and the chance to work with cutting-edge technology.
- Why this job: Be part of a team that values collaboration and innovation in a rapidly evolving industry.
- Qualifications: Bachelor’s or Master’s in Computer Science; 5+ years in data engineering with strong SQL and Python skills.
- Other info: Opportunity to learn and grow in a supportive environment with minimal supervision.
The predicted salary is between £43,200 and £72,000 per year.
Location: Remote (United Kingdom)
About The Company: We have partnered with a company that empowers underwriters to serve their insureds more effectively. Using advanced data intelligence tools, they are rebuilding the way underwriters share and exchange risk. Their current focus is on the small and medium-sized businesses that power our global economy and their niche insurance needs: by leveraging granular information on each policy, they deliver unprecedented insight into insurance pools, and their speciality portfolio is fully diversified, with very low catastrophe, aggregation, or systemic risk.
The Role:
- Designing and implementing data pipelines and models, ensuring data quality and integrity.
- Solving challenging data integration problems, applying optimal patterns, frameworks, and query techniques to source from vast and varied data sources.
- Building, maintaining, and optimising the Data Warehouse to support reporting and analytics needs.
- Collaborating with product managers, business stakeholders, and engineers to understand data needs and represent key data insights in a meaningful way.
- Staying up to date with industry trends and best practices in data modelling, database development, and analytics.
- Optimising pipelines, frameworks, and systems to make it easier to develop data artifacts.
You will be successful if you have:
- A Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
- 5+ years of experience building data pipelines and models and maintaining Data Warehouses for reporting and analytics.
- Strong skills in SQL, Python, problem-solving and data analysis.
- Deep experience with Snowflake, AWS, and dbt.
- Excellent communication and collaboration skills.
- An eagerness to learn and collaborate with others, the ability to pick things up quickly, and the capacity to work with little supervision.
If you would like to have a chat about this exciting opportunity, apply below or reach out directly.
Senior Data Engineer employer: Annapurna
Contact Details:
Annapurna Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Data Engineer role
✨Tip Number 1
Familiarise yourself with the specific technologies mentioned in the job description, such as Snowflake and AWS. Consider building a small project or contributing to an open-source project that uses these tools to demonstrate your hands-on experience.
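If you want a concrete starting point for such a project, the sketch below shows the core shape of a pipeline step: read raw records, apply a data-quality gate, and load only the clean subset into a warehouse table. This is purely illustrative — it uses Python's built-in sqlite3 as a local stand-in for Snowflake, and all table and column names are invented for the example.

```python
import sqlite3

def run_pipeline(conn):
    """Copy validated rows from raw_policies into clean_policies."""
    cur = conn.cursor()
    rows = cur.execute("SELECT policy_id, premium FROM raw_policies").fetchall()
    # Data-quality gate: drop rows with a missing id or a non-positive premium.
    clean = [(pid, prem) for pid, prem in rows
             if pid and prem is not None and prem > 0]
    cur.executemany("INSERT INTO clean_policies VALUES (?, ?)", clean)
    conn.commit()
    return len(rows), len(clean)

# Set up an in-memory database with some deliberately messy raw data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_policies (policy_id TEXT, premium REAL)")
conn.execute("CREATE TABLE clean_policies (policy_id TEXT, premium REAL)")
conn.executemany(
    "INSERT INTO raw_policies VALUES (?, ?)",
    [("P-001", 1200.0), ("P-002", -50.0), (None, 300.0), ("P-004", 980.5)],
)

total, kept = run_pipeline(conn)
print(total, kept)  # prints: 4 2
```

In a portfolio version, you could swap sqlite3 for the Snowflake connector and express the same validation as dbt tests — the point is demonstrating that you think about quality gates, not the specific tooling.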
✨Tip Number 2
Network with professionals in the data engineering field, especially those who work with analytics and data pipelines. Join relevant online communities or attend meetups to gain insights and potentially get referrals for the position.
✨Tip Number 3
Stay updated on industry trends and best practices in data modelling and analytics. Follow thought leaders on platforms like LinkedIn or Twitter, and engage with their content to show your enthusiasm and knowledge in the field.
✨Tip Number 4
Prepare to discuss your previous experiences in building data pipelines and maintaining Data Warehouses during interviews. Be ready to share specific examples of challenges you faced and how you solved them, showcasing your problem-solving skills.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights relevant experience in building data pipelines and maintaining Data Warehouses. Emphasise your skills in SQL, Python, and any experience with Snowflake and AWS.
Craft a Compelling Cover Letter: Write a cover letter that showcases your problem-solving abilities and your eagerness to learn. Mention specific projects or experiences that demonstrate your expertise in data engineering and analytics.
Showcase Collaboration Skills: In your application, provide examples of how you've successfully collaborated with product managers and stakeholders in the past. Highlight your communication skills and ability to represent data insights effectively.
Stay Current with Industry Trends: Mention any recent trends or best practices in data modelling and analytics that you are familiar with. This shows your commitment to staying updated and your passion for the field.
How to prepare for a job interview at Annapurna
✨Showcase Your Technical Skills
Be prepared to discuss your experience with SQL, Python, Snowflake, and AWS in detail. Bring examples of past projects where you built data pipelines or maintained Data Warehouses, as this will demonstrate your hands-on expertise.
✨Understand the Company’s Focus
Research the company’s mission to empower underwriters and their approach to data intelligence. Being able to articulate how your skills can contribute to their goals will show that you are genuinely interested in the role.
✨Prepare for Problem-Solving Questions
Expect to face questions that assess your problem-solving abilities, especially related to data integration challenges. Practice explaining your thought process clearly and logically, as this is crucial for a Senior Data Engineer.
✨Emphasise Collaboration Skills
Since the role involves working closely with product managers and business stakeholders, highlight your communication and collaboration experiences. Share specific examples of how you’ve successfully worked in teams to achieve common goals.