Scala/Spark Data Engineers

Northampton · Full-Time · £36,000 - £60,000 / year (est.) · No home office possible

At a Glance

  • Tasks: Join our team to develop and enhance an AWS-based capital reporting platform using Scala and Spark.
  • Company: Be part of a dynamic company focused on innovative financial solutions and compliance with Basel III standards.
  • Benefits: Enjoy flexible work arrangements, competitive pay, and opportunities for professional growth.
  • Why this job: This role offers hands-on experience with cutting-edge technology in a collaborative environment that values your input.
  • Qualifications: Strong knowledge of Scala and Spark is essential; experience with Hadoop is a plus.
  • Other info: Ideal for tech-savvy individuals eager to make an impact in the finance sector.

The predicted salary is between £36,000 and £60,000 per year.

Context

The client has Basel III capital reporting deadlines and requires assistance to support the development of its AWS-based capital and RWA reporting platform, which processes data from upstream systems, executes QA models, and performs aggregation and reporting. Outputs are produced from its Hadoop ecosystem, and engineers are required with an excellent understanding of Scala and Spark where…
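
For illustration only, here is a minimal sketch of the kind of Spark job such a pipeline might involve: ingest an upstream feed, apply a simple QA filter, aggregate risk-weighted assets, and write the result for downstream reporting. The dataset paths, column names, and logic below are hypothetical and are not taken from the client's platform.

    import org.apache.spark.sql.{SparkSession, functions => F}

    object RwaAggregationSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("rwa-aggregation-sketch")
          .getOrCreate()

        // Hypothetical upstream exposures feed; path and columns are illustrative only.
        val exposures = spark.read.parquet("s3://example-bucket/upstream/exposures/")

        // Simple QA step: drop records with missing or negative exposure amounts.
        val validated = exposures
          .filter(F.col("exposure_amount").isNotNull && F.col("exposure_amount") >= 0)

        // Aggregate risk-weighted assets per business line and reporting date.
        val rwaByBusinessLine = validated
          .withColumn("rwa", F.col("exposure_amount") * F.col("risk_weight"))
          .groupBy("reporting_date", "business_line")
          .agg(F.sum("rwa").alias("total_rwa"))

        // Write the aggregated output for downstream capital reporting.
        rwaByBusinessLine.write.mode("overwrite")
          .parquet("s3://example-bucket/reports/rwa_by_business_line/")

        spark.stop()
      }
    }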


Scala/Spark Data Engineers employer: FBI &TMT

Join a forward-thinking company that values innovation and collaboration, where Scala/Spark Data Engineers play a crucial role in shaping our AWS-based capital reporting platform. With a strong commitment to employee growth, we offer extensive training opportunities and a supportive work culture that encourages creativity and teamwork. Located in a vibrant area, our team enjoys a dynamic environment that fosters both professional development and personal well-being.

Contact Detail:

FBI &TMT Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land Scala/Spark Data Engineers

✨Tip Number 1

Make sure to brush up on your Scala and Spark skills. Since the role requires an excellent understanding of these technologies, consider working on personal projects or contributing to open-source projects that utilize them.

✨Tip Number 2

Familiarize yourself with AWS services, especially those related to data processing and storage. Understanding how to leverage AWS for capital and RWA reporting will give you a significant edge in the interview.

✨Tip Number 3

Gain experience with Hadoop ecosystems if you haven't already. Knowing how to work within this environment will be crucial for processing data and executing QA models effectively.

✨Tip Number 4

Network with professionals in the field. Attend meetups or webinars focused on Scala, Spark, and data engineering to connect with others and learn about industry trends that could be beneficial during your application process.

We think you need these skills to ace Scala/Spark Data Engineers

Scala Programming
Apache Spark
AWS Services
Hadoop Ecosystem
Data Processing
Quality Assurance Models
Data Aggregation
Reporting Skills
ETL Processes
Performance Tuning
Big Data Technologies
Problem-Solving Skills
Collaboration and Teamwork
Attention to Detail

Some tips for your application 🫡

Understand the Job Requirements: Make sure to thoroughly read the job description for the Scala/Spark Data Engineer position. Pay attention to the specific skills required, such as expertise in Scala and Spark, as well as experience with AWS and Hadoop ecosystems.

Highlight Relevant Experience: In your CV and cover letter, emphasize your previous experience with Scala, Spark, and any relevant projects involving AWS and data processing. Use specific examples to demonstrate your skills and how they relate to the job.

Tailor Your Application: Customize your application materials to reflect the language and requirements mentioned in the job description. This shows that you have a clear understanding of what the company is looking for and how you can contribute.

Proofread Your Documents: Before submitting your application, carefully proofread your CV and cover letter for any grammatical or typographical errors. A polished application reflects your attention to detail and professionalism.

How to prepare for a job interview at FBI &TMT

✨Showcase Your Scala and Spark Expertise

Be prepared to discuss your experience with Scala and Spark in detail. Highlight specific projects where you've utilized these technologies, focusing on how you solved problems and optimized performance.

✨Understand the Hadoop Ecosystem

Familiarize yourself with the components of the Hadoop ecosystem, as this role involves working with it. Be ready to explain how you've interacted with Hadoop in past projects and how it integrates with data processing.

✨Discuss Capital Reporting Experience

If you have experience with capital and RWA reporting, make sure to bring it up. Discuss any relevant projects or challenges you've faced, and how you ensured compliance with reporting deadlines.

✨Prepare for Technical Questions

Expect technical questions that test your knowledge of data processing, QA models, and AWS services. Brush up on key concepts and be ready to solve problems on the spot, demonstrating your analytical skills.

Scala/Spark Data Engineers
FBI &TMT

Northampton
Full-Time
£36,000 - £60,000 / year (est.)

Application deadline: 2027-02-26
