At a Glance
- Tasks: Design and build scalable data pipelines for real-time and batch processing.
- Company: Join a leading global technology consultancy known for learning and career growth.
- Benefits: Hybrid working, access to enterprise-scale projects, and a strong benefits package.
- Other info: Opportunities for clear progression in a structured environment.
- Why this job: Work on cutting-edge data solutions that drive analytics and AI decision-making.
- Qualifications: Experience in data engineering with strong programming skills in Java or Python.
The predicted salary is between £50,000 and £70,000 per year.
Excellent opportunity to join a leading global technology consultancy with a reputation for learning and career progression. My client is hiring multiple Data Engineers across mid to senior levels, offering the chance to work on large-scale, enterprise data platforms and deliver solutions for both public and private sector clients.
The role focuses on designing and building scalable data pipelines, supporting real-time and batch data processing, and enabling analytics and AI-driven decision-making. These roles are available across different experience levels, from engineers with a few years of experience to more senior specialists contributing to complex distributed systems.
What’s on Offer
- Hybrid working (3 days per week onsite in Newcastle)
- Access to enterprise-scale projects and modern cloud technologies
- Clear progression pathways within a large, structured environment
- Opportunity to work on real-time streaming and large-scale data systems
- Strong learning and development environment
- Strong benefits package
What You Need
- Experience in data engineering or large-scale data solutions (minimum 3 years)
- Strong programming skills in Java (preferred) or Python
- Experience with data processing tools such as Kafka, Spark, or Flink
- Experience building ETL/ELT or streaming data pipelines
- Understanding of distributed systems and cloud platforms (AWS, Azure or GCP)
- Familiarity with CI/CD, Infrastructure as Code (e.g. Terraform), and containerisation (Docker/Kubernetes)
- Knowledge of data modelling, governance, and data quality principles
- Ability to work in agile teams and collaborate with technical and non-technical stakeholders
Additional Requirement
Must have 5+ years' continuous UK address history due to security clearance requirements.
Data Engineer - Multiple roles - Newcastle upon Tyne | Employer: ANSON MCCADE
Contact Detail:
ANSON MCCADE Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land one of these Data Engineer roles in Newcastle upon Tyne
✨Tip Number 1
Network like a pro! Reach out to your connections in the tech industry, especially those who work in data engineering. A friendly chat can lead to insider info about job openings or even referrals that could give you an edge.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those involving data pipelines and cloud technologies. This gives potential employers a tangible look at what you can do, making you stand out from the crowd.
✨Tip Number 3
Prepare for interviews by brushing up on your technical knowledge and soft skills. Practice common data engineering questions and be ready to discuss your experience with tools like Kafka and Spark. Confidence is key!
✨Tip Number 4
Don’t forget to apply through our website! We’ve got multiple roles available, and applying directly can sometimes speed up the process. Plus, it shows you’re genuinely interested in joining our team!
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Data Engineer role. Highlight your experience with data engineering, programming skills in Java or Python, and any relevant tools like Kafka or Spark. We want to see how your background fits with what we’re looking for!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re passionate about data engineering and how your skills align with our needs. Don’t forget to mention your experience with cloud platforms and distributed systems – we love that stuff!
Showcase Your Projects: If you’ve worked on any cool projects, make sure to mention them! Whether it’s building ETL pipelines or working with real-time data systems, we want to know what you’ve done. This helps us see your practical experience and problem-solving skills.
Apply Through Our Website: We encourage you to apply through our website for the best chance of getting noticed. It’s super easy, and you’ll be able to keep track of your application status. Plus, we love seeing applications come directly from our site!
How to prepare for a job interview at ANSON MCCADE
✨Know Your Tech Stack
Make sure you’re well-versed in the technologies mentioned in the job description, especially Java or Python, and tools like Kafka, Spark, or Flink. Brush up on your knowledge of cloud platforms like AWS, Azure, or GCP, as these will likely come up during technical discussions.
✨Showcase Your Projects
Prepare to discuss specific projects where you've designed and built data pipelines or worked with large-scale data solutions. Highlight your role, the challenges you faced, and how you overcame them. This will demonstrate your hands-on experience and problem-solving skills.
✨Understand the Company Culture
Research the consultancy’s values and work culture. Be ready to explain how you align with their focus on learning and career progression. Showing that you’re a good fit for their environment can set you apart from other candidates.
✨Ask Insightful Questions
Prepare thoughtful questions about the team dynamics, project methodologies, and opportunities for professional development. This not only shows your interest in the role but also helps you gauge if the company is the right fit for you.