At a Glance
- Tasks: Design and build high-performance data processing solutions for voice compliance.
- Company: Join a leading bank with a focus on innovative technology.
- Benefits: Competitive pay, hybrid work model, and opportunities for professional growth.
- Other info: Dynamic team environment with a commitment to diversity and inclusion.
- Why this job: Make a real impact in a mission-critical role using cutting-edge tech.
- Qualifications: Experience with Apache Spark, Kafka, and distributed systems required.
The predicted salary is between €60,000 and €80,000 per year.
We are seeking a software engineer specialising in distributed data systems to design and build low-latency, high-volume data processing solutions that underpin regulatory voice compliance assurance across the Bank's strategic data platforms. This is a hands-on software development role, focused on event-driven architectures, streaming pipelines, and scalable data matching engines rather than traditional reporting or BI. You will engineer resilient, production-grade systems capable of processing high-frequency voice metadata and transactional records at scale, ensuring accuracy, determinism and auditability in compliance controls. You will join a team responsible for delivering mission-critical compliance technology across enterprise voice platforms, operating in a highly regulated environment where correctness, performance and reliability are non-negotiable.
What You'll Be Building
- Distributed streaming and batch data processing systems for voice compliance assurance
- Low-latency record matching and reconciliation engines handling billions of events
- Scalable data services operating across Spark, Kafka, Hive and Hadoop
- Production-grade pipelines supporting regulatory evidence, audit and controls
- Foundations for near-real-time compliance signal generation across global voice platforms
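To give a flavour of the "record matching and reconciliation engine" mentioned above, here is a minimal, hypothetical sketch in plain Python. The record shapes, field names (`trader`, `call_id`, `trade_id`) and five-minute window are invented for illustration only; a production engine would run this logic over Spark at scale.

```python
from datetime import datetime, timedelta

# Hypothetical records: voice-call metadata and trade events keyed by trader ID.
calls = [
    {"trader": "T1", "start": datetime(2024, 5, 1, 9, 0), "call_id": "c1"},
    {"trader": "T2", "start": datetime(2024, 5, 1, 9, 30), "call_id": "c2"},
]
trades = [
    {"trader": "T1", "ts": datetime(2024, 5, 1, 9, 2), "trade_id": "x9"},
    {"trader": "T3", "ts": datetime(2024, 5, 1, 10, 0), "trade_id": "x10"},
]

def reconcile(calls, trades, window=timedelta(minutes=5)):
    """Match each trade to a call by trader within a time window;
    unmatched trades surface as reconciliation breaks for audit."""
    by_trader = {}
    for c in calls:
        by_trader.setdefault(c["trader"], []).append(c)
    matched, breaks = [], []
    for t in trades:
        hit = next(
            (c for c in by_trader.get(t["trader"], [])
             if abs(t["ts"] - c["start"]) <= window),
            None,
        )
        record = {"trade_id": t["trade_id"],
                  "call_id": hit["call_id"] if hit else None}
        (matched if hit else breaks).append(record)
    return matched, breaks

matched, breaks = reconcile(calls, trades)
print(matched)  # trade x9 matches call c1
print(breaks)   # trade x10 has no matching call: a reconciliation break
```

The key property for regulatory assurance is that the output is deterministic and every unmatched record is surfaced rather than silently dropped.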
Key Responsibilities
- Design and develop high-performance distributed systems for large-scale voice data processing
- Build and optimise Spark-based processing jobs for high-volume and high-frequency workloads
- Engineer Kafka-based streaming pipelines with strong delivery guarantees and low end-to-end latency
- Develop robust data matching and reconciliation logic across heterogeneous voice data sources
- Define and implement scalable data models using Hive and Hadoop ecosystems
- Apply software engineering best practices: version control, code reviews, testing, CI/CD and documentation
- Reverse-engineer and modernise legacy batch or reporting-oriented implementations
- Implement data quality, integrity, lineage and auditability controls required for regulatory assurance
- Partner with platform, vendor and voice engineering teams to align data semantics and system behaviour
- Support synthetic data generation and large-scale performance testing
- Deliver changes through controlled environments in line with enterprise change and release processes
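One pattern behind the "strong delivery guarantees" responsibility above is at-least-once delivery combined with idempotent processing: redelivered events are detected by ID and skipped. The sketch below is a plain-Python illustration with invented event shapes, not a real Kafka client.

```python
def process_stream(events, seen=None):
    """Consume an at-least-once stream: duplicate deliveries are
    detected by event ID and skipped, making processing idempotent."""
    seen = set() if seen is None else seen
    results = []
    for event in events:
        if event["id"] in seen:
            continue  # redelivered event: already applied, skip it
        seen.add(event["id"])
        results.append(event["payload"])
    return results

# Event "e1" is redelivered, e.g. after a simulated consumer restart.
stream = [
    {"id": "e1", "payload": "call-start"},
    {"id": "e2", "payload": "call-end"},
    {"id": "e1", "payload": "call-start"},  # duplicate delivery
]
out = process_stream(stream)
print(out)  # each event applied exactly once
```

In a real pipeline the `seen` set would live in a durable store keyed per partition, but the correctness argument is the same.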
Required Technical Experience
- Core Technologies
  - Strong hands-on development experience with Apache Spark
  - Proven experience building Kafka-based event streaming systems
  - Deep familiarity with Hadoop ecosystems, including Hive
  - Advanced SQL for complex, large-scale datasets
- Engineering Capabilities
  - Experience designing low-latency, high-throughput data pipelines
  - Strong understanding of distributed systems, data partitioning, fault tolerance and scalability
  - Ability to distinguish and design appropriately for high-frequency transactional vs high-volume batch workloads
  - Experience building ETL / ELT systems as production software, not ad-hoc scripts
  - Comfortable working close to infrastructure and platform constraints
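To illustrate the data-partitioning point in the capabilities list above: both Kafka and Spark route records by a stable hash of the key, so all events for one key land on one partition and per-key ordering is preserved. A toy sketch with invented keys:

```python
import hashlib

def partition_for(key, num_partitions):
    """Stable hash partitioning: the same key always maps to the
    same partition, so per-key event order can be preserved."""
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

keys = ["trader-1", "trader-2", "trader-1"]
parts = [partition_for(k, 8) for k in keys]
print(parts)  # both "trader-1" events land on the same partition
```

(Real clients use their own hash functions, e.g. murmur2 in Kafka's default partitioner; md5 here is just a stand-in for any stable hash.)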
Nice to Have
- Exposure to voice or communications platforms (e.g. Cisco, NICE, IPC, Microsoft)
- Experience working in regulated or compliance-driven environments
- Familiarity with Agile delivery models and iterative software development
To apply, please submit an up-to-date CV. Candidates should ideally show evidence of the above in their CV in order to be considered.
Software engineer in London (employer: Pontoon)
As a Senior Software Engineer at our London-based firm, you will thrive in a dynamic hybrid work environment that champions innovation and collaboration. We offer competitive benefits, a culture of continuous learning, and opportunities for professional growth, all while working on mission-critical compliance technology that makes a real impact in the financial sector. Join us to be part of a diverse team dedicated to excellence in a highly regulated landscape, where your contributions will be valued and recognised.
StudySmarter Expert Advice🤫
We think this is how you could land the software engineer role in London
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, attend meetups, and connect with people on LinkedIn. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those involving distributed systems, Spark, and Kafka. This gives potential employers a taste of what you can do beyond just a CV.
✨Tip Number 3
Prepare for technical interviews by brushing up on your coding skills and understanding system design principles. Practice common interview questions related to data processing and distributed systems to boost your confidence.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen. Plus, we love seeing candidates who take that extra step to engage with us directly.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to highlight your experience with distributed data systems, event-driven architectures, and any relevant technologies like Spark and Kafka. We want to see how your skills align with the role!
Showcase Your Projects: Include specific projects where you've designed or built low-latency data processing solutions. We love seeing real-world examples of your work that demonstrate your hands-on experience.
Be Clear and Concise: Keep your application clear and to the point. Use bullet points for your achievements and responsibilities to make it easy for us to read through your experience quickly.
Apply Through Our Website: Don’t forget to apply through our website! It’s the best way for us to receive your application and ensures you’re considered for this exciting opportunity.
How to prepare for a job interview at Pontoon
✨Know Your Tech Stack
Make sure you’re well-versed in Apache Spark, Kafka, and Hadoop ecosystems. Brush up on your advanced SQL skills too! Be ready to discuss how you've used these technologies in past projects, especially in building low-latency, high-throughput data pipelines.
✨Showcase Your Problem-Solving Skills
Prepare to talk about specific challenges you've faced in distributed systems and how you overcame them. Think of examples where you designed resilient systems or optimised existing ones. This will demonstrate your hands-on experience and ability to think critically under pressure.
✨Understand Compliance Requirements
Since this role is heavily focused on regulatory compliance, make sure you understand the key principles of compliance assurance. Be ready to discuss how you’ve implemented data quality, integrity, and auditability controls in your previous roles.
✨Ask Insightful Questions
Prepare thoughtful questions about the team’s current projects, challenges they face, and their approach to Agile methodologies. This shows your genuine interest in the role and helps you gauge if the company culture aligns with your values.