We're Synechron, a global consultancy laser-focused on accelerating digital initiatives in financial services. With over 14,500 members of staff across 17 countries and a unique mix of end-to-end digital, business and technology services, we help clients solve complex challenges with modern and innovative solutions. We're big enough to be taken seriously, yet small enough to operate with an agile, open, relationship-driven approach. Our clients come to us with problems that need genuine thought, intelligence and knowledge; we're not just putting bodies on seats.

Role Overview:
Develop and maintain core platform services using Java, interacting heavily with Databricks APIs to manage workspaces, clusters, and job execution.
Design, build, and optimise data ingestion pipelines primarily using Scala and Spark on Databricks.

Experience needed:
Advanced proficiency in Scala with Spark: Hands-on experience writing robust and performant Scala applications on Spark, with a strong understanding of distributed computing and data processing concepts.
Advanced knowledge of Databricks: Deep understanding of Databricks workspaces, clusters (including different types and configurations), workflows, and job monitoring. Experience with Databricks administration and interacting with the Databricks APIs is essential.
Intermediate proficiency in Java: Ability to develop, test, and maintain Java applications. Familiarity with microservice architecture and RESTful API design/consumption.
Intermediate experience with REST APIs: Proven experience interacting with and consuming RESTful APIs, specifically including extensive work with Databricks APIs.

Further Details:
London – Hybrid (three days in the office)
Rate – 700 – 800 (DOE)
Long-term engagement
Contact Detail:
Synechron Recruiting Team