At a Glance
- Tasks: Build and optimise data pipelines in a cloud-native environment.
- Company: Join a market-leading financial services client in Edinburgh.
- Benefits: Competitive salary, 5% bonus, and hybrid working model.
- Other info: 12-month contract with potential for extension and excellent career growth.
- Why this job: Make an impact in AI and data engineering while growing your skills.
- Qualifications: Strong programming skills in Java, Python, or Scala required.
The predicted salary is between £33,000 and £39,000 per year.
Location: Edinburgh
Hybrid role: working 2 days a week in the office
Salary: £33,000 to £39,000 pa plus 5% bonus
Contract: 12-month fixed term contract
Our market-leading financial services client is seeking a motivated, detail-focused Junior Data Engineer to join the Business Transaction Banking division. This role involves building, optimising, and scaling data pipelines and large-scale data processing workloads in a cloud-native environment. You will work with modern distributed data systems, contribute to data platform modernisation, and support large-scale ingestion, transformation, and analytics workloads.
The role offers the opportunity to influence platform-wide data patterns, contribute to cloud modernisation and be part of a multidisciplinary innovation lab focused on AI, data engineering and next-gen platform engineering.
Key Responsibilities
- Design and deliver end-to-end data pipelines on cloud platforms (Google Cloud Platform preferred).
- Build scalable data workflows using distributed technologies such as Spark, Flink, Storm, or similar.
- Develop robust processing and migration pipelines, including support for legacy DataStage decommissioning and modernisation.
- Work with a variety of database technologies including relational, NoSQL, MPP and columnar stores (BigQuery, Redshift, Azure SQLDW, HBase, MongoDB).
- Build optimised, scalable data models.
- Apply performance tuning and optimisation across storage.
- Ensure secure handling of data including authentication, authorisation, encryption.
- Implement monitoring and alerting for large-scale distributed data workloads.
- Use orchestration tools such as Cloud Composer, Airflow or equivalent to operationalise pipelines.
- Collaborate with engineers, architects and SMEs to deliver stable, high-quality data products.
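To give a flavour of the pipeline work described above, here is a minimal, library-free sketch of an extract-transform-load flow in Python. All function names and sample data are illustrative assumptions, not details from the role; a real pipeline in this environment would use a distributed engine such as Spark and an orchestrator such as Cloud Composer or Airflow.

```python
# Minimal ETL sketch: extract raw records, transform them, load into a store.
# Everything here is a stand-in for real source systems and warehouse tables.

def extract():
    # Stand-in for reading from a source system (file, queue, or API).
    return [
        {"account": "A1", "amount": "120.50", "currency": "GBP"},
        {"account": "A2", "amount": "75.00", "currency": "GBP"},
    ]

def transform(records):
    # Normalise the amount to a float and add a derived integer-pence field.
    return [
        {**r,
         "amount": float(r["amount"]),
         "amount_pence": int(float(r["amount"]) * 100)}
        for r in records
    ]

def load(records, store):
    # Stand-in for writing to a warehouse table, keyed by account.
    for r in records:
        store[r["account"]] = r
    return store

store = load(transform(extract()), {})
print(store["A1"]["amount_pence"])  # 12050
```

In practice each stage would be a separate task in an orchestration tool, so failures can be retried per stage rather than rerunning the whole pipeline.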
Skills and Experience
- Strong programming skills in Java, Python or Scala.
- Experience with cloud data services (GCP preferred; Azure/AWS acceptable).
- Experience with distributed data processing frameworks such as Spark (Core/SQL/Streaming), Flink, or Storm.
- Understanding of designing scalable data models for varied access patterns.
- Experience working on large-scale big data solutions, with an understanding of DevOps for data systems.
- Analytical mindset with a methodical approach to completing tasks.
- Resilient, confident, professional, and able to work effectively with multiple teams.
You will be a valued member of our Adecco Emerging Talent function, working onsite with a market-leading organisation. The assignment is initially 12 months with scope for extension, so you need to be someone with a permanent mindset!
If you have the experience and desire to work for a well-respected organisation offering personal and professional support, growth and development, then you could be a perfect fit for the team, and we want to hear from you. APPLY NOW.
Please be advised that if you haven't heard from us within 48 hours, your application has unfortunately not been successful on this occasion. We may, however, keep your details on file for any suitable future vacancies and contact you accordingly.
Adecco Emerging Talent is an employment consultancy and operates as an equal opportunities employer.
Junior Data Engineer in Edinburgh employer: Pontoon
Contact Detail:
Pontoon Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Junior Data Engineer role in Edinburgh
✨Network Like a Pro
Get out there and connect with people in the industry! Attend meetups, webinars, or even just grab a coffee with someone who works in data engineering. Building relationships can open doors that applications alone can't.
✨Show Off Your Skills
Don’t just talk about your experience; demonstrate it! Create a portfolio showcasing your projects, especially those involving cloud platforms or distributed data systems. This will give you an edge and show potential employers what you can really do.
✨Ace the Interview
Prepare for technical interviews by brushing up on your programming skills in Java, Python, or Scala. Practice common data engineering problems and be ready to discuss your thought process. Confidence is key, so believe in your abilities!
✨Apply Through Our Website
We want to hear from you! Make sure to apply through our website for the best chance of landing that Junior Data Engineer role. It’s the quickest way for us to see your application and get you in the door.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Junior Data Engineer role. Highlight relevant skills like programming in Java, Python, or Scala, and any experience with cloud platforms like GCP. We want to see how your background fits with what we're looking for!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about data engineering and how you can contribute to our team. Be sure to mention your experience with distributed data processing frameworks like Spark or Flink.
Showcase Your Projects: If you've worked on any projects related to data pipelines or cloud services, make sure to include them! We love seeing practical examples of your work, especially if they demonstrate your analytical mindset and problem-solving skills.
Apply Through Our Website: We encourage you to apply through our website for the best chance of success. It helps us keep track of applications and ensures you’re considered for the role. Plus, we’re excited to hear from you directly!
How to prepare for a job interview at Pontoon
✨Know Your Tech Stack
Make sure you brush up on your programming skills in Java, Python, or Scala. Familiarise yourself with cloud data services, especially Google Cloud Platform, as well as distributed data processing frameworks like Spark and Flink. Being able to discuss these technologies confidently will show that you're ready for the role.
✨Understand Data Pipelines
Get a solid grasp of what end-to-end data pipelines look like, especially in a cloud-native environment. Be prepared to talk about how you would design and deliver scalable data workflows, and think about examples from your past experiences that demonstrate your ability to optimise and build robust processing pipelines.
✨Show Your Analytical Mindset
During the interview, highlight your analytical mindset and methodological approach to problem-solving. Prepare to discuss specific challenges you've faced in previous projects and how you tackled them. This will help the interviewers see your resilience and ability to work effectively under pressure.
✨Collaborate and Communicate
Since this role involves working with multiple teams, be ready to showcase your collaboration skills. Think of examples where you've successfully worked with engineers, architects, or subject matter experts. Good communication is key, so practice articulating your thoughts clearly and professionally.