At a Glance
- Tasks: Develop and deliver advanced tech products focused on data and analytics in a dynamic environment.
- Company: Join JPMorgan Chase's Chief Data & Analytics Office, a leader in data innovation.
- Benefits: Competitive salary, career growth, and opportunities to work with cutting-edge technologies.
- Other info: Collaborative culture that values diversity, opportunity, and respect.
- Why this job: Make a real impact by tackling complex cloud data challenges and driving innovation.
- Qualifications: 10+ years of experience in software engineering and AWS Databricks platform administration.
The predicted salary is between £70,000 and £90,000 per year.
We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible. The Chief Data & Analytics Office (CDAO) at JPMorgan Chase is responsible for accelerating the firm’s data and analytics journey. This includes ensuring the quality, integrity, and security of the company's data, as well as leveraging this data to generate insights and drive decision-making. The CDAO is also responsible for developing and implementing solutions that support the firm’s commercial goals by harnessing artificial intelligence and machine learning technologies to develop new products, improve productivity, and enhance risk management effectively and responsibly.
As a Site Reliability Engineer at JPMorgan Chase within the AIML Data Platforms and Chief Data and Analytics Team, you will develop and deliver advanced technology products focused on data and analytics, tackling complex cloud data platform challenges, especially around Data Lake tools. In this role you will work in an agile environment, collaborating with cross-functional teams.
Job responsibilities:
- Maintains a managed AWS Databricks platform, and provides engineering and operational support for the platform to application teams.
- Performs platform design, set-up and configuration, workspace administration, and resource monitoring, and provides engineering support to data engineering, data science/ML, and application/integration teams.
- Leads evaluation sessions with external vendors, startups, and internal teams to drive outcome-focused assessment of architectural designs, technical credentials, and applicability for use within existing systems and information architecture.
- Drives continuous improvement in system observability, alerting, and capacity planning.
- Collaborates with engineering and data teams to optimize infrastructure and deployment processes, focusing on automation and operational excellence.
- Executes creative software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems.
- Develops secure high-quality production code, and reviews and debugs code written by others.
- Identifies opportunities to eliminate or automate remediation of recurring issues to improve overall operational stability of software applications and systems.
- Adds to team culture of diversity, opportunity, and respect.
- Implements Site Reliability Engineering (SRE) best practices to ensure reliability, scalability, and performance of data platforms.
- Develops and maintains incident response procedures, including root cause analysis and postmortem documentation.
Required qualifications, capabilities, and skills:
- Formal training or certification on software engineering concepts and 10+ years applied experience.
- Extensive experience with AWS Databricks platform administration and engineering support is a MUST.
- Strong understanding of SRE principles, including SLIs, SLOs, error budgets, and incident management.
- Experience with monitoring tools, automation frameworks, and CI/CD pipelines.
- Proficient in Python application development, including the use of automated unit testing.
- Experience with Terraform development and an understanding of Terraform Enterprise.
- Experience in delivering system design, application development, testing, and operational stability.
- Knowledge of Big Data distributed compute frameworks such as Spark, Glue, and MapReduce.
- Excellent troubleshooting, analytical, and communication skills.
Preferred qualifications, capabilities, and skills:
- Experience in Data pipelines using Spark.
- Exposure to AWS & Databricks Platform administration.
- Knowledge of containerization (Docker, Kubernetes) and orchestration.
- Familiarity with distributed systems and large-scale data processing.
SRE III - Data & AWS in London employer: JPMorgan Chase & Co.
Contact Detail:
JPMorgan Chase & Co. Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land SRE III - Data & AWS in London
✨Tip Number 1
Network like a pro! Reach out to folks in your industry on LinkedIn or at meetups. A friendly chat can lead to opportunities that aren’t even advertised yet.
✨Tip Number 2
Show off your skills! Create a portfolio or GitHub repo showcasing your projects, especially those related to AWS and data platforms. This gives potential employers a taste of what you can do.
✨Tip Number 3
Prepare for interviews by practising common SRE scenarios and technical questions. We recommend doing mock interviews with friends or using online platforms to get comfortable.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who are proactive!
We think you need these skills to ace SRE III - Data & AWS in London
Some tips for your application 🫡
Tailor Your CV: Make sure your CV reflects the skills and experiences that match the job description. Highlight your AWS Databricks experience and any SRE principles you've applied in past roles. We want to see how you can bring value to our team!
Craft a Compelling Cover Letter: Your cover letter is your chance to tell us why you're the perfect fit for this role. Share specific examples of your work with data platforms and how you've tackled complex challenges. Let your personality shine through!
Showcase Your Technical Skills: Since this role involves a lot of technical expertise, make sure to list relevant tools and technologies you’ve worked with, like Python, Terraform, and monitoring tools. We love seeing candidates who are hands-on and ready to dive into the tech!
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it’s super easy – just follow the prompts and submit your materials!
How to prepare for a job interview at JPMorgan Chase & Co.
✨Know Your AWS Databricks Inside Out
Make sure you brush up on your knowledge of the AWS Databricks platform. Be prepared to discuss your experience with platform administration and engineering support, as this is a must-have for the role. Think about specific challenges you've faced and how you overcame them.
✨Master SRE Principles
Familiarise yourself with Site Reliability Engineering principles like SLIs, SLOs, and error budgets. Be ready to explain how you've applied these concepts in previous roles, especially in relation to incident management and operational stability.
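If you want to be ready to talk numbers, the error-budget arithmetic behind SLOs is simple to sketch. Here is a minimal illustration (the figures are our own examples, not from the job description):

```python
# Minimal sketch of error-budget arithmetic for an availability SLO.
# All numbers are illustrative.

def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime (in minutes) over the window for a given SLO."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo)

def budget_remaining(slo: float, downtime_minutes: float,
                     window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative if blown)."""
    budget = error_budget_minutes(slo, window_days)
    return 1.0 - downtime_minutes / budget

# A 99.9% availability SLO over 30 days allows about 43.2 minutes
# of downtime; 10 minutes of incidents leaves roughly 77% of the budget.
print(round(error_budget_minutes(0.999), 1))      # 43.2
print(round(budget_remaining(0.999, 10.0), 2))    # 0.77
```

Being able to walk through this kind of calculation, and to explain what your team did when the remaining budget approached zero, tends to land well in SRE interviews.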
✨Showcase Your Problem-Solving Skills
Prepare to discuss examples where you've tackled complex technical problems or implemented creative software solutions. Highlight your ability to think outside the box and how you've contributed to continuous improvement in system observability and capacity planning.
✨Communicate Effectively
Strong communication skills are key in this role, especially when collaborating with cross-functional teams. Practice articulating your thoughts clearly and concisely, and be ready to discuss how you've fostered a culture of diversity and respect in your previous workplaces.