At a Glance
- Tasks: Develop and deliver advanced tech products focused on data and analytics in a dynamic environment.
- Company: Join JPMorgan Chase's Chief Data & Analytics Office, a leader in data innovation.
- Benefits: Competitive salary, career growth, and opportunities to work with cutting-edge technologies.
- Other info: Collaborative culture that values diversity, opportunity, and respect.
- Why this job: Make a real impact by tackling complex cloud data challenges and driving innovation.
- Qualifications: 10+ years of experience in software engineering and AWS Databricks platform administration.
The predicted salary is between £70,000 and £90,000 per year.
We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible. The Chief Data & Analytics Office (CDAO) at JPMorgan Chase is responsible for accelerating the firm’s data and analytics journey. This includes ensuring the quality, integrity, and security of the company's data, as well as leveraging this data to generate insights and drive decision-making. The CDAO is also responsible for developing and implementing solutions that support the firm’s commercial goals by harnessing artificial intelligence and machine learning technologies to develop new products, improve productivity, and enhance risk management effectively and responsibly.
As a Site Reliability Engineer at JPMorgan Chase within the AIML Data Platforms and Chief Data and Analytics Team, you will develop and deliver advanced technology products focused on data and analytics, tackling complex cloud data platform challenges, especially around data-lake tooling. In this role you will work in an agile environment, collaborating with cross-functional teams.
Job responsibilities:
- Maintains a managed AWS Databricks platform and provides engineering and operational support for it to application teams.
- Performs platform design, set-up and configuration, workspace administration, and resource monitoring, and provides engineering support to data engineering, data science/ML, and application/integration teams.
- Leads evaluation sessions with external vendors, startups, and internal teams to drive outcome-oriented probing of architectural designs, technical credentials, and applicability for use within existing systems and information architecture.
- Drives continuous improvement in system observability, alerting, and capacity planning.
- Collaborates with engineering and data teams to optimize infrastructure and deployment processes, focusing on automation and operational excellence.
- Executes creative software solutions, design, development, and technical troubleshooting, thinking beyond routine or conventional approaches to build solutions and break down technical problems.
- Develops secure high-quality production code, and reviews and debugs code written by others.
- Identifies opportunities to eliminate or automate remediation of recurring issues to improve overall operational stability of software applications and systems.
- Adds to team culture of diversity, opportunity, and respect.
- Implements Site Reliability Engineering (SRE) best practices to ensure reliability, scalability, and performance of data platforms.
- Develops and maintains incident response procedures, including root cause analysis and postmortem documentation.
Required qualifications, capabilities, and skills:
- Formal training or certification on software engineering concepts and 10+ years applied experience.
- Extensive experience with AWS Databricks platform administration and engineering support is essential.
- Strong understanding of SRE principles, including SLIs, SLOs, error budgets, and incident management.
- Experience with monitoring tools, automation frameworks, and CI/CD pipelines.
- Proficient in Python application development, including automated unit testing.
- Experience with Terraform development and an understanding of Terraform Enterprise.
- Experience in delivering system design, application development, testing, and operational stability.
- Knowledge of Big Data distributed compute frameworks such as Spark, Glue, and MapReduce.
- Excellent troubleshooting, analytical, and communication skills.
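To make the SRE concepts named above concrete, here is a minimal illustrative sketch of how an error budget and burn rate are typically computed from an availability SLO. The numbers and function names are hypothetical, not part of the role description.

```python
# Illustrative sketch of SRE error-budget arithmetic (hypothetical values).

def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Total allowed downtime in minutes for an availability SLO over a window."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

def burn_rate(observed_error_ratio: float, slo: float) -> float:
    """How fast the budget is being consumed relative to the allowed rate.

    A burn rate of 1.0 exhausts the budget exactly at the end of the window;
    2.0 exhausts it in half the window.
    """
    allowed_error_ratio = 1 - slo
    return observed_error_ratio / allowed_error_ratio

budget = error_budget_minutes(0.999)  # ~43.2 minutes of downtime per 30 days
rate = burn_rate(observed_error_ratio=0.002, slo=0.999)  # ~2.0
```

Burn-rate thresholds like these commonly drive the alerting and incident-management work the responsibilities describe.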
Preferred qualifications, capabilities, and skills:
- Experience in Data pipelines using Spark.
- Exposure to AWS & Databricks Platform administration.
- Knowledge of containerization (Docker, Kubernetes) and orchestration.
- Familiarity with distributed systems and large-scale data processing.
SRE III- Data & AWS employer: JPMorgan Chase & Co.
Contact Detail:
JPMorgan Chase & Co. Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land SRE III- Data & AWS
✨Tip Number 1
Network like a pro! Reach out to folks in your industry on LinkedIn or at meetups. A friendly chat can open doors that a CV just can't.
✨Tip Number 2
Show off your skills! Create a portfolio or GitHub repo showcasing your projects, especially those related to AWS and data platforms. It’s a great way to demonstrate what you can do.
✨Tip Number 3
Prepare for interviews by practising common SRE scenarios and technical questions. Mock interviews with friends or using online platforms can help you nail it!
✨Tip Number 4
Don’t forget to apply through our website! We’re always looking for talented individuals like you, and applying directly can give you an edge.
We think you need these skills to ace SRE III- Data & AWS
Some tips for your application 🫡
Tailor Your CV: Make sure your CV reflects the skills and experiences that match the job description. Highlight your AWS Databricks experience and any SRE principles you've applied in past roles. We want to see how you can bring value to our team!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to tell us why you're passionate about data and analytics, and how your background aligns with the mission of the CDAO at JPMorgan Chase. Keep it engaging and personal – we love a good story!
Showcase Your Projects: If you've worked on relevant projects, don't hold back! Include links or descriptions of your work with data pipelines, automation frameworks, or any creative software solutions you've developed. This gives us a glimpse into your problem-solving skills.
Apply Through Our Website: We encourage you to apply directly through our website for a smoother application process. It helps us keep track of your application and ensures you don’t miss out on any updates. Plus, it’s super easy!
How to prepare for a job interview at JPMorgan Chase & Co.
✨Know Your AWS Databricks Inside Out
Make sure you brush up on your knowledge of the AWS Databricks platform. Be prepared to discuss your experience with platform administration and engineering support, as this is a must-have for the role. Think about specific challenges you've faced and how you overcame them.
✨Master SRE Principles
Familiarise yourself with Site Reliability Engineering principles like SLIs, SLOs, and error budgets. Be ready to share examples of how you've implemented these concepts in past roles, especially in relation to incident management and operational stability.
✨Show Off Your Coding Skills
Since proficiency in Python and Terraform is crucial, be prepared to discuss your coding experience. Bring examples of your work, particularly any automated unit testing you've done. If possible, practice explaining your code and thought process clearly.
✨Emphasise Collaboration and Communication
This role involves working with cross-functional teams, so highlight your teamwork skills. Prepare to discuss how you've collaborated with data engineering, data science, and application teams in the past, and how you’ve contributed to a positive team culture.