At a Glance
- Tasks: Develop and deliver advanced tech products focused on data and analytics in a dynamic environment.
- Company: Join JPMorgan Chase's Chief Data & Analytics Office, a leader in data innovation.
- Benefits: Competitive salary, diverse culture, and opportunities for professional growth.
- Other info: Embrace a culture of diversity, opportunity, and respect while driving continuous improvement.
- Why this job: Make a real impact by tackling complex cloud data challenges with cutting-edge technology.
- Qualifications: 10+ years of experience in software engineering and AWS Databricks platform administration.
The predicted salary is between £70,000 and £90,000 per year.
We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible. The Chief Data & Analytics Office (CDAO) at JPMorgan Chase is responsible for accelerating the firm's data and analytics journey. This includes ensuring the quality, integrity, and security of the company's data, as well as leveraging this data to generate insights and drive decision-making. The CDAO is also responsible for developing and implementing solutions that support the firm's commercial goals by harnessing artificial intelligence and machine learning technologies to develop new products, improve productivity, and enhance risk management effectively and responsibly.
As a Site Reliability Engineer at JPMorgan Chase within the AIML Data Platforms and Chief Data and Analytics Team, you will develop and deliver advanced technology products focused on data and analytics. You will tackle complex cloud data platform challenges, especially around DataLake Tools, working in an agile environment and collaborating with cross‑functional teams.
Job Responsibilities
- Maintains a managed AWS Databricks platform and provides engineering and operational support for the platform to application teams.
- Performs platform design, set‑up and configuration, workspace administration, and resource monitoring, and provides engineering support to data engineering, data science/ML, and application/integration teams.
- Leads evaluation sessions with external vendors, startups, and internal teams to assess architectural designs, technical credentials, and applicability within existing systems and information architecture.
- Drives continuous improvement in system observability, alerting, and capacity planning.
- Collaborates with engineering and data teams to optimize infrastructure and deployment processes, focusing on automation and operational excellence.
- Executes creative software solutions, design, development, and technical troubleshooting, with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems.
- Develops secure high‑quality production code, and reviews and debugs code written by others.
- Identifies opportunities to eliminate or automate remediation of recurring issues to improve overall operational stability of software applications and systems.
- Adds to team culture of diversity, opportunity, and respect.
- Implements Site Reliability Engineering (SRE) best practices to ensure reliability, scalability, and performance of data platforms.
- Develops and maintains incident response procedures, including root cause analysis and postmortem documentation.
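The SRE practices above can be made concrete with a small example. As a minimal, illustrative sketch in Python (the 99.9% SLO target and the request counts are assumptions for illustration, not figures from this role), an error budget is simply the failure fraction an SLO permits, and reliability work tracks how much of that budget has been spent:

```python
# Minimal error-budget sketch. All numbers are illustrative assumptions.
# SLI  = observed availability (1 - failure ratio)
# budget = failure fraction the SLO allows (e.g. 0.1% for a 99.9% SLO)

def error_budget_remaining(total_requests: int, failed_requests: int,
                           slo_target: float = 0.999) -> float:
    """Return the fraction of the error budget still unspent (may go negative)."""
    if total_requests == 0:
        return 1.0  # no traffic, no budget spent
    sli = 1 - failed_requests / total_requests   # availability SLI
    budget = 1 - slo_target                      # allowed failure fraction
    spent = (1 - sli) / budget                   # fraction of budget consumed
    return 1 - spent

# Example: 1,000,000 requests with 300 failures against a 99.9% SLO
# consumes 30% of the budget, so roughly 0.7 of the budget remains.
print(error_budget_remaining(1_000_000, 300))
```

A team might gate risky changes (for example, platform upgrades) on how much budget remains, which is one common way SRE practice ties reliability data to release decisions.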
Required Qualifications, Capabilities, and Skills
- Formal training or certification on software engineering concepts and 10+ years applied experience.
- Extensive experience with AWS Databricks platform administration and engineering support is a must.
- Strong understanding of SRE principles, including SLIs, SLOs, error budgets, and incident management.
- Experience with monitoring tools, automation frameworks, and CI/CD pipelines.
- Proficiency in Python application development, including automated unit testing.
- Experience with Terraform development and an understanding of Terraform Enterprise.
- Experience in delivering system design, application development, testing, and operational stability.
- Knowledge of big data distributed compute frameworks such as Spark, Glue, and MapReduce.
- Excellent troubleshooting, analytical, and communication skills.
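To illustrate the SLO and alerting concepts named in these qualifications, here is a minimal multi-window burn-rate check in Python. The 99.9% target, the window pairing, and the 14.4x threshold are illustrative assumptions loosely following common SRE guidance, not requirements of this role:

```python
# Minimal burn-rate alerting sketch (illustrative thresholds).
# Burn rate = observed error ratio / error ratio the SLO allows.
# A sustained burn rate above 1 exhausts the error budget before
# the SLO window ends.

SLO_TARGET = 0.999                      # assumed 99.9% availability SLO
ALLOWED_ERROR_RATIO = 1 - SLO_TARGET

def burn_rate(errors: int, total: int) -> float:
    if total == 0:
        return 0.0
    return (errors / total) / ALLOWED_ERROR_RATIO

def should_page(fast_window, slow_window, threshold: float = 14.4) -> bool:
    """Page only if BOTH windows exceed the threshold: the fast window
    filters out stale incidents, the slow window filters out brief spikes."""
    return (burn_rate(*fast_window) >= threshold and
            burn_rate(*slow_window) >= threshold)

# 2% errors over both a short and a long window against a 99.9% SLO
# is roughly a 20x burn rate, well above the paging threshold.
print(should_page((20, 1000), (1200, 60000)))  # prints: True
```

Each window is a hypothetical `(errors, total_requests)` pair from a monitoring system; in practice these counts would come from metrics queries rather than literals.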
Preferred Qualifications, Capabilities, and Skills
- Experience building data pipelines using Spark.
- Exposure to AWS & Databricks Platform administration.
- Knowledge of containerization (Docker, Kubernetes) and orchestration.
- Familiarity with distributed systems and large‑scale data processing.
Equal Employment Opportunity
We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs.
SRE III - Data & AWS in London employer: J.P. Morgan
Contact Detail:
J.P. Morgan Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land SRE III - Data & AWS in London
✨Tip Number 1
Network like a pro! Reach out to folks in your industry on LinkedIn or at meetups. A friendly chat can lead to opportunities that aren’t even advertised yet.
✨Tip Number 2
Show off your skills! Create a portfolio or GitHub repository showcasing your projects, especially those related to AWS and data platforms. This gives potential employers a taste of what you can do.
✨Tip Number 3
Prepare for interviews by practising common SRE scenarios and technical questions. Mock interviews with friends or using online platforms can help you feel more confident when the real deal comes along.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who are proactive!
We think you need these skills to ace SRE III - Data & AWS in London
Some tips for your application 🫡
Tailor Your Application: Customise your CV and cover letter for the SRE III role. Highlight your experience with AWS Databricks and SRE principles, as these are key requirements for the role.
Showcase Your Skills: Don’t just list your skills; demonstrate them! Use specific examples from your past work that show how you've tackled complex cloud data platform challenges or improved operational stability.
Be Clear and Concise: Keep your application clear and to the point. We appreciate straightforward communication, so avoid jargon unless it’s relevant to the role. Make it easy for us to see why you’re a great fit!
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you don’t miss out on any important updates during the process.
How to prepare for a job interview at J.P. Morgan
✨Know Your AWS Databricks Inside Out
Make sure you brush up on your knowledge of the AWS Databricks platform. Be prepared to discuss your experience with platform administration and engineering support, as this is a must-have for the role. Highlight specific projects where you've tackled challenges using Databricks.
✨Master SRE Principles
Familiarise yourself with Site Reliability Engineering principles like SLIs, SLOs, and error budgets. Be ready to share examples of how you've implemented these concepts in past roles, especially in relation to incident management and operational stability.
✨Showcase Your Problem-Solving Skills
Prepare to discuss creative software solutions you've developed or technical problems you've solved. Use specific examples that demonstrate your ability to think outside the box, particularly in cloud data platforms and automation processes.
✨Communicate Effectively
Strong communication skills are key in this role, especially when collaborating with cross-functional teams. Practice articulating your thoughts clearly and concisely, and be ready to explain complex technical concepts in a way that's easy to understand for non-technical stakeholders.