At a Glance
- Tasks: Develop and deliver advanced tech products focused on data and analytics in a dynamic environment.
- Company: Join JPMorgan Chase's innovative Chief Data & Analytics Office.
- Benefits: Competitive salary, health benefits, and opportunities for professional growth.
- Other info: Collaborative culture that values diversity, opportunity, and respect.
- Why this job: Make a real impact by tackling complex cloud data challenges with cutting-edge technology.
- Qualifications: 10+ years of experience in software engineering and AWS Databricks platform administration.
The predicted salary is between £80,000 and £100,000 per year.
We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible. The Chief Data & Analytics Office (CDAO) at JPMorgan Chase is responsible for accelerating the firm’s data and analytics journey. This includes ensuring the quality, integrity, and security of the company's data, as well as leveraging this data to generate insights and drive decision-making. The CDAO is also responsible for developing and implementing solutions that support the firm’s commercial goals by harnessing artificial intelligence and machine learning technologies to develop new products, improve productivity, and enhance risk management effectively and responsibly.
As a Site Reliability Engineer at JPMorgan Chase within the AIML Data Platforms and Chief Data and Analytics Team, you will develop and deliver advanced technology products focused on data and analytics, tackling complex cloud data platform challenges, especially around DataLake tools. In this role you will work in an agile environment, collaborating with cross-functional teams.
Job responsibilities:
- Maintains a managed AWS Databricks platform and provides engineering and operational support for the platform to application teams.
- Performs platform design, setup and configuration, workspace administration, and resource monitoring, providing engineering support to data engineering, Data Science/ML, and application/integration teams.
- Leads evaluation sessions with external vendors, startups, and internal teams to drive outcomes-oriented probing of architectural designs, technical credentials, and applicability for use within existing systems and information architecture.
- Drives continuous improvement in system observability, alerting, and capacity planning.
- Collaborates with engineering and data teams to optimize infrastructure and deployment processes, focusing on automation and operational excellence.
- Executes creative software solutions, design, development, and technical troubleshooting, with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems.
- Develops secure high-quality production code, and reviews and debugs code written by others.
- Identifies opportunities to eliminate or automate remediation of recurring issues to improve overall operational stability of software applications and systems.
- Adds to team culture of diversity, opportunity, and respect.
- Implements Site Reliability Engineering (SRE) best practices to ensure reliability, scalability, and performance of data platforms.
- Develops and maintains incident response procedures, including root cause analysis and postmortem documentation.
Required qualifications, capabilities, and skills:
- Formal training or certification on software engineering concepts and 10+ years applied experience.
- Extensive experience with AWS Databricks platform administration and engineering support is a must.
- Strong understanding of SRE principles, including SLIs, SLOs, error budgets, and incident management.
- Experience with monitoring tools, automation frameworks, and CI/CD pipelines.
- Proficiency in Python application development, including automated unit testing.
- Experience with Terraform development and an understanding of Terraform Enterprise.
- Experience in delivering system design, application development, testing, and operational stability.
- Knowledge of Big Data distributed compute frameworks such as Spark, Glue, and MapReduce.
- Excellent troubleshooting, analytical, and communication skills.
Preferred qualifications, capabilities, and skills:
- Experience in Data pipelines using Spark.
- Exposure to AWS & Databricks Platform administration.
- Knowledge of containerization (Docker, Kubernetes) and orchestration.
- Familiarity with distributed systems and large-scale data processing.
SRE III - Data & AWS in London | Employer: JPMorganChase
Contact Detail:
JPMorganChase Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land SRE III - Data & AWS in London
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, especially those already working at JPMorgan Chase. A friendly chat can open doors and give you insider info on what they're really looking for.
✨Tip Number 2
Show off your skills in action! If you’ve got a portfolio or GitHub with projects related to AWS, Databricks, or SRE principles, make sure to highlight that. It’s a great way to demonstrate your expertise beyond just words.
✨Tip Number 3
Prepare for the technical interview by brushing up on your problem-solving skills. Expect to tackle real-world scenarios related to data platforms and cloud challenges. Practice makes perfect, so get into some mock interviews!
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen. Plus, we love seeing candidates who take that extra step to engage with us directly.
We think you need these skills to ace SRE III - Data & AWS in London
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the SRE III role. Highlight your experience with AWS Databricks and any relevant SRE principles. We want to see how your skills align with what we're looking for!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about data and analytics, and how you can contribute to the team at JPMorgan Chase. Keep it engaging and personal!
Showcase Your Projects: If you've worked on any cool projects related to cloud data platforms or automation, make sure to mention them. We love seeing real-world applications of your skills, so don’t hold back!
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it’s super easy!
How to prepare for a job interview at JPMorganChase
✨Know Your AWS Databricks Inside Out
Make sure you brush up on your knowledge of the AWS Databricks platform. Be ready to discuss your experience with platform administration and engineering support, as this is a must-have for the role. Prepare specific examples of how you've tackled challenges in this area.
✨Master SRE Principles
Familiarise yourself with Site Reliability Engineering principles like SLIs, SLOs, and incident management. Be prepared to explain how you've applied these concepts in your previous roles, and think of scenarios where you improved system reliability or performance.
✨Show Off Your Coding Skills
Since proficiency in Python and Terraform is crucial, practice coding problems and be ready to demonstrate your skills. You might be asked to write code during the interview, so ensure you can showcase your ability to develop high-quality production code and automate testing.
✨Communicate Clearly and Collaboratively
As this role involves working with cross-functional teams, strong communication skills are key. Prepare to discuss how you've collaborated with engineering and data teams in the past, and be ready to share examples of how you’ve contributed to a positive team culture.