At a Glance
- Tasks: Develop and optimise big data workflows using Java and modern technologies.
- Company: Global Relay, a leader in enterprise information archiving with a focus on innovation.
- Benefits: Competitive salary, mentoring, diverse culture, and opportunities for career growth.
- Why this job: Make a real impact in a dynamic environment while working with cutting-edge big data tech.
- Qualifications: 5+ years in Java development and experience with ETL/ELT processes.
- Other info: Join a collaborative team that values diversity and encourages personal growth.
The predicted salary is between £43,200 and £72,000 per year.
For over 25 years, Global Relay has set the standard in enterprise information archiving with industry-leading cloud archiving, surveillance, eDiscovery, and analytics solutions. We securely capture and preserve the communications data of the world's most highly regulated firms, giving them greater visibility and control over their information and ensuring compliance with stringent regulations.
Though we offer competitive compensation, benefits, and all the other perks one would expect from an established company, we are not your typical technology company. Global Relay is a career-building company. A place for big ideas. New challenges. Groundbreaking innovation. It’s a place where you can genuinely make an impact – and be recognized for it.
We believe great businesses thrive on diversity, inclusion, and the contributions of all employees. To that end, we recruit candidates from different backgrounds and foster a work environment that encourages employees to collaborate and learn from each other, completely free of barriers.
Your Role: Joining the Reporting product line, you will work as a member of a highly focused team. The team specialises in Java-based data engineering, designing and delivering large-scale ELT/ETL workflows on a data lakehouse platform. You will work with modern big data technologies to move, transform, and optimise data for high-performance analytics and regulatory reporting. The environment encourages autonomy, problem-solving, and system-level thinking. If you’re passionate about clean, well-tested, performant code and enjoy working on complex data pipelines at scale, you’ll thrive here.
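For context only, a first ETL step on such a platform might look like the minimal Apache Spark sketch below. This is an illustrative sketch, not Global Relay's code: the paths, table names, and column names are all hypothetical.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;

// Minimal batch ETL sketch: read raw events, filter and project,
// then write to a curated reporting table. All names are hypothetical.
public final class MessageEventEtl {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("message-event-etl")
                .getOrCreate();

        // Extract: load raw, semi-structured events from the data lake
        Dataset<Row> raw = spark.read().parquet("s3a://datalake/raw/message_events/");

        // Transform: keep valid rows and project the reporting columns
        Dataset<Row> curated = raw
                .filter(col("event_type").isNotNull())
                .select(col("event_id"), col("event_type"), col("captured_at"));

        // Load: append into the curated reporting table
        curated.write().mode("append").saveAsTable("reporting.message_events");

        spark.stop();
    }
}
```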
Tech Stack:
- Microservices container platforms (Kubernetes, CRC, Docker)
- Big data technologies (Apache Spark, Flink, Hadoop, Airflow, Trino, Iceberg)
- Dependency injection frameworks (Spring)
- Observability (Loki, Grafana)
- Large-scale data processing (Kafka)
- CI/CD and build tools (Maven, Git, Jenkins, Ansible)
- NoSQL databases and coordination services (Cassandra, HBase, ZooKeeper)
Your Responsibilities:
- Develop ETL, ELT, and streaming processes using big data frameworks, primarily in Java
- Design, implement, and provide architectural guidance in deploying microservices as part of an agile development team
- Write unit and integration tests for your Java code (a minimal example follows this list)
- Collaborate with testers in the development of functional test cases
- Develop deployment systems for Java-based services
- Collaborate with product owners on user story generation and refinement
- Monitor and support the operation of production systems
- Participate in knowledge-sharing activities with colleagues
- Take part in pair programming and peer reviews
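As a rough illustration of the testing expectation above, here is a plain JUnit 5 sketch for a small, pure transformation helper. Both the helper and its behaviour are invented for illustration and are not taken from the posting.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical helper under test: normalises a raw event-type string.
final class EventTypeNormaliser {
    static String normalise(String rawType) {
        return rawType == null ? "unknown" : rawType.trim().toLowerCase();
    }
}

class EventTypeNormaliserTest {
    @Test
    void trimsAndLowercasesRawValues() {
        assertEquals("email", EventTypeNormaliser.normalise("  EMAIL "));
    }

    @Test
    void mapsNullToUnknown() {
        assertEquals("unknown", EventTypeNormaliser.normalise(null));
    }
}
```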
About you:
Required Experience:
- Minimum 5 years of Java development experience in an Agile environment, building scalable applications and services with a focus on big data solutions and analytics
- 3+ years' experience developing ETL/ELT processes using relevant technologies and tools
- Experience working with data lake and data warehouse platforms for both batch and streaming data sources
- Experience with ANSI SQL or other SQL dialects (see the sketch after this list)
- Experience processing unstructured, semi-structured, and structured data
- A good understanding of ETL/ELT principles, best practices, and patterns
- Experience with big data technologies such as Hadoop, Spark, and Flink
- Experience with web services technologies
- Experience in Test-Driven Development
- Experience with CI/CD
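To make the ANSI SQL point concrete, the hedged sketch below runs a standard SQL aggregation through Spark's Java API. The reporting.message_events table and its columns are hypothetical, carried over from the earlier sketch.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

// Sketch: an ANSI-style daily aggregation executed via Spark SQL.
// The reporting.message_events table is hypothetical.
public final class DailyVolumeReport {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("daily-volume-report")
                .getOrCreate();

        Dataset<Row> daily = spark.sql(
                "SELECT CAST(captured_at AS DATE) AS event_date, "
                + "COUNT(*) AS event_count "
                + "FROM reporting.message_events "
                + "GROUP BY CAST(captured_at AS DATE)");

        daily.show();
        spark.stop();
    }
}
```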
Attributes:
- Good communication skills
- Problem-solving skills
- Self-starter
- Team player
Global Relay is unable to offer visa sponsorship for this position. Candidates must have the right to work in the UK at the time of application.
What you can expect:
At Global Relay, there’s no ceiling to what you can achieve. It’s the land of opportunity for the energetic, the intelligent, the driven. You’ll receive the mentoring, coaching, and support you need to reach your career goals. You’ll be part of a culture that breeds creativity and rewards perseverance and hard work. And you’ll be working alongside smart, talented individuals from diverse backgrounds, with complementary knowledge and skills.
Global Relay is an equal-opportunity employer committed to diversity, equity, and inclusion. We seek to ensure reasonable adjustments, accommodations, and personal time are tailored to meet the unique needs of every individual.
Employer: Global Relay (Senior Big Data Engineer, London)
Contact Details:
Global Relay Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Big Data Engineer role in London
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, attend meetups, and connect with current employees at Global Relay. A personal introduction can make all the difference when it comes to landing that interview.
✨Tip Number 2
Show off your skills! Create a portfolio or GitHub repository showcasing your big data projects and Java expertise. This gives you a chance to demonstrate your capabilities beyond just a CV.
✨Tip Number 3
Prepare for the technical interview by brushing up on your ETL/ELT processes and big data technologies. Practice coding challenges and be ready to discuss your problem-solving approach during the interview.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you’re genuinely interested in joining the Global Relay team.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV reflects the skills and experiences that align with the Senior Big Data Engineer role. Highlight your Java development experience and any big data technologies you've worked with, as this will catch our eye!
Craft a Compelling Cover Letter: Use your cover letter to tell us why you're passionate about big data and how you can contribute to our team. Share specific examples of your past projects or challenges you've overcome in the field.
Showcase Your Problem-Solving Skills: In your application, don’t just list your technical skills; demonstrate your problem-solving abilities. We love candidates who can think critically and tackle complex data challenges head-on.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it shows you’re keen on joining our team!
How to prepare for a job interview at Global Relay
✨Know Your Tech Stack
Familiarise yourself with the specific technologies mentioned in the job description, like Apache Spark, Flink, and Kubernetes. Be ready to discuss your experience with these tools and how you've used them in past projects.
✨Showcase Your Problem-Solving Skills
Prepare examples of complex data challenges you've faced and how you approached solving them. Global Relay values autonomy and problem-solving, so demonstrating your thought process will impress the interviewers.
✨Emphasise Collaboration
Since the role involves working closely with product owners and testers, be prepared to talk about your experience in team settings. Share instances where you collaborated effectively to achieve a common goal, highlighting your communication skills.
✨Prepare for Technical Questions
Expect technical questions related to ETL/ELT processes and Java development. Brush up on your knowledge of best practices and patterns in big data solutions, and be ready to write some code or solve problems on the spot.