Data Engineer

Full-Time | £36,000 - £60,000 / year (est.) | No home office possible

At a Glance

  • Tasks: Join our team to develop and maintain data pipelines using Azure services and Spark.
  • Company: Be part of a cutting-edge media company focused on innovative data solutions.
  • Benefits: Enjoy a full-time role with opportunities for growth and learning in a dynamic environment.
  • Why this job: Work with passionate professionals and contribute to impactful data projects in the media industry.
  • Qualifications: Proficiency in Spark, Scala, and Azure; experience with ETL processes and SQL required.
  • Other info: Ideal for tech-savvy individuals eager to learn and grow in a collaborative setting.

The predicted salary is between £36,000 and £60,000 per year.

We are looking for a skilled Data Engineer to join a dynamic team dedicated to building and maintaining a cutting-edge media data lake on Microsoft Azure. This role focuses on developing and supporting data pipelines within a medallion architecture, using Spark and Scala to process and transform large volumes of media data. The successful candidate will be passionate about data, eager to learn, and keen to contribute to a high-performing engineering team.

WHAT YOU WILL DO

  • Develop, test, and deploy data ingestion, transformation, and processing pipelines using Azure services (Azure Data Factory, Azure Data Lake Storage).
  • Write efficient and maintainable code in Spark and Scala for data manipulation and analysis (an illustrative sketch follows this list).
  • Contribute to the implementation and maintenance of the medallion architecture (Bronze, Silver, Gold layers).
  • Collaborate with senior engineers, architects, and analysts to understand data requirements and implement solutions.
  • Monitor data pipelines, troubleshoot issues, and implement optimizations for performance and reliability.
  • Implement data quality checks and ensure data integrity across the data ecosystem.
  • Participate in code reviews and contribute to team best practices.
  • Document data pipelines, processes, and technical specifications.
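
To give a flavour of the Spark and Scala work described above, here is a minimal, purely illustrative sketch of a Bronze-to-Silver step in a medallion-style pipeline, including a simple data quality rule. It is not taken from the employer's codebase; the storage account, container, table, and column names are hypothetical.

// Illustrative Bronze -> Silver job for a medallion-style media data lake.
// Assumptions: data lands in ADLS Gen2 as Parquet; the account, container,
// and column names below are hypothetical.
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object ImpressionsBronzeToSilver {

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("impressions-bronze-to-silver")
      .getOrCreate()

    // Read raw records exactly as ingested into the Bronze layer.
    val bronze = spark.read.parquet(
      "abfss://bronze@examplelake.dfs.core.windows.net/impressions/")

    val silver = cleanImpressions(bronze)

    // Write the conformed Silver table, partitioned by event date.
    silver.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("abfss://silver@examplelake.dfs.core.windows.net/impressions/")

    spark.stop()
  }

  // Kept as a pure DataFrame-to-DataFrame function so it can be unit tested
  // without any storage access.
  def cleanImpressions(raw: DataFrame): DataFrame =
    raw
      .filter(col("campaign_id").isNotNull)               // basic data quality rule
      .withColumn("event_ts", to_timestamp(col("event_ts")))
      .withColumn("event_date", to_date(col("event_ts")))
      .dropDuplicates("impression_id")
}

Splitting the transformation out as a pure function mirrors the testing and code-review practices mentioned in the list above.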

WHAT YOU WILL NEED

  • Proficiency in Apache Spark and Scala programming.
  • Experience with cloud platforms, preferably Microsoft Azure (Azure Data Factory, ADLS Gen2, Azure Synapse Analytics).
  • Understanding of ETL/ELT processes and data warehousing concepts.
  • Familiarity with medallion data lake architecture principles.
  • Experience with SQL and database technologies.
  • Experience with version control systems like Git.
  • Familiarity with media or advertising data is a plus.
  • Good understanding of CI/CD practices.

Seniority level

  • Mid-Senior level

Employment type

  • Full-time

Job function

  • Information Technology

Industries

  • IT System Data Services
  • Technology, Information and Media

This job posting is currently active.


Data Engineer employer: YunoJuno

Join a forward-thinking company that values innovation and collaboration, where, as a Data Engineer, you will have the opportunity to work with cutting-edge technologies in a vibrant team environment. Our commitment to employee growth is reflected in a continuous learning culture, with ample opportunities for professional development and career advancement. Located in a thriving tech hub, we provide a dynamic workplace that fosters creativity and encourages a healthy work-life balance, making us an excellent employer for those seeking meaningful and rewarding work.

Contact Detail:

YunoJuno Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the Data Engineer role

✨Tip Number 1

Familiarise yourself with the specific tools and technologies mentioned in the job description, such as Azure Data Factory and Spark. Having hands-on experience or projects showcasing your skills with these tools can set you apart during the interview process.

✨Tip Number 2

Engage with the data engineering community online. Join forums, attend webinars, or participate in discussions related to Azure and data pipelines. This not only helps you learn but also shows your passion for the field when you discuss your insights during interviews.

✨Tip Number 3

Prepare to discuss real-world scenarios where you've implemented data quality checks or optimised data pipelines. Being able to share specific examples will demonstrate your practical knowledge and problem-solving abilities to the interviewers.

✨Tip Number 4

Network with current employees at YunoJuno or similar companies. Reach out on LinkedIn to learn more about their experiences and gather insights that could help you tailor your approach during the application and interview stages.

We think you need these skills to ace the Data Engineer role

Proficiency in Apache Spark
Strong programming skills in Scala
Experience with Microsoft Azure services
Knowledge of Azure Data Factory
Familiarity with Azure Data Lake Storage (ADLS Gen2)
Understanding of ETL/ELT processes
Data warehousing concepts
Experience with SQL and database technologies
Version control using Git
Understanding of medallion architecture principles
CI/CD practices
Problem-solving skills
Ability to monitor and troubleshoot data pipelines
Documentation skills for technical specifications
Collaboration and communication skills

Some tips for your application 🫡

Tailor Your CV: Make sure your CV highlights your experience with Apache Spark, Scala, and Microsoft Azure. Include specific projects where you've developed data pipelines or worked with medallion architecture to demonstrate your relevant skills.

Craft a Compelling Cover Letter: In your cover letter, express your passion for data engineering and your eagerness to contribute to a high-performing team. Mention any relevant experience with ETL processes and how you can add value to the company's data ecosystem.

Showcase Your Technical Skills: When detailing your technical skills, be specific about your proficiency in Azure services, SQL, and version control systems like Git. Providing examples of how you've used these technologies in past roles will strengthen your application.

Highlight Collaboration Experience: Since the role involves collaboration with senior engineers and analysts, include examples of past teamwork experiences. Emphasise your ability to communicate effectively and work towards common goals in a technical environment.

How to prepare for a job interview at YunoJuno

✨Showcase Your Technical Skills

Be prepared to discuss your proficiency in Apache Spark and Scala. Bring examples of past projects where you've developed data pipelines or worked with Azure services, as this will demonstrate your hands-on experience.

✨Understand the Medallion Architecture

Familiarise yourself with the medallion architecture principles, including the Bronze, Silver, and Gold layers. Be ready to explain how you would implement and maintain these layers in a data lake environment.

✨Prepare for Problem-Solving Questions

Expect questions that assess your troubleshooting skills. Think of scenarios where you've monitored data pipelines and optimised performance, and be ready to discuss the steps you took to resolve issues.

✨Emphasise Collaboration and Communication

Highlight your ability to work with senior engineers and analysts. Share examples of how you've collaborated on projects, understood data requirements, and contributed to team best practices, as this role involves significant teamwork.
