At a Glance
- Tasks: Design and build data pipelines that drive business insights and AI solutions.
- Company: Join JPMorgan Chase's innovative Security Services team.
- Benefits: Competitive salary, career growth, and a collaborative work environment.
- Why this job: Make a real impact by enabling data-driven decisions in a leading financial institution.
- Qualifications: Experience with Databricks, Python, and data pipeline development required.
- Other info: Dynamic role with opportunities to work on cutting-edge AI projects.
The predicted salary is between £36,000 and £60,000 per year.
Be part of a team that creates the strategic data assets driving business insight, operational excellence, and the next generation of AI solutions. Your work will directly enable the business to answer key questions, track progress on objectives, and unlock new opportunities through data.
As a Data Engineer in the Security Services Data Modelling and Engineering team, within AI Transformation, you will play a pivotal role in building the data foundation that powers business insights, OKR tracking, and AI enablement across JPMorgan Chase's Security Services businesses. You will design and develop scalable data pipelines and reusable datasets on Databricks, collaborating with Data Architects and Business Analysts to deliver high-quality, compliant, and business-driven solutions.
Job responsibilities:
- Design, build, and optimize data pipelines and transformation workflows on Databricks, leveraging Python and Spark.
- Collaborate with Data Architects and Business Analysts to develop robust data models and clearly document data flows and ETL logic.
- Implement and execute data quality checks and validation modules using Python.
- Maintain transparency and accountability by tracking work and progress in Jira.
- Ensure datasets and pipelines are accurately registered in relevant catalogues and consoles, meeting governance and privacy standards.
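To give a concrete sense of the kind of data quality and validation work described above, here is a minimal illustrative sketch in Python using pandas. All names (`check_no_nulls`, `check_unique_key`, the `trades` frame) are hypothetical examples, not part of JPMorgan Chase's actual codebase:

```python
import pandas as pd

def check_no_nulls(df: pd.DataFrame, columns: list[str]) -> list[tuple[str, int]]:
    """Return (column, null_count) pairs for any listed column containing nulls."""
    failures = []
    for col in columns:
        null_count = int(df[col].isna().sum())
        if null_count:
            failures.append((col, null_count))
    return failures

def check_unique_key(df: pd.DataFrame, key: str) -> bool:
    """True if the key column uniquely identifies each row."""
    return not df[key].duplicated().any()

# Hypothetical sample dataset for demonstration only.
trades = pd.DataFrame({
    "trade_id": [1, 2, 3],
    "notional": [100.0, None, 250.0],
})

print(check_no_nulls(trades, ["trade_id", "notional"]))  # [('notional', 1)]
print(check_unique_key(trades, "trade_id"))              # True
```

In practice, checks like these would typically run inside a Databricks pipeline (e.g. on Spark DataFrames) and feed into monitoring or alerting, but the underlying idea of small, reusable validation functions is the same.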
Required qualifications, capabilities, and skills:
- Proven experience developing data pipelines and solutions on Databricks.
- Strong proficiency in Python, including libraries for data transformation (e.g., pandas).
- Solid understanding of ETL concepts, data modelling, and pipeline design.
- Experience with Spark and cloud data platforms.
- Ability to document data flows and transformation logic to a high standard.
- Familiarity with project management tools such as Jira.
- Collaborative mindset and strong communication skills.
Preferred qualifications, capabilities, and skills:
- Experience in financial services or large enterprise data environments.
- Knowledge of data governance, privacy, and compliance requirements.
- Exposure to business analysis and requirements gathering.
Role: Security Services Data Modelling and Engineering Senior Associate, London
Employer: J.P. Morgan
Contact Detail:
J.P. Morgan Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Security Services Data Modelling and Engineering Senior Associate role in London
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, especially those already working at JPMorgan Chase. A friendly chat can open doors and give you insider info on what they're really looking for.
✨Tip Number 2
Show off your skills! If you've got a portfolio or GitHub with projects showcasing your data pipelines and Python prowess, make sure to share it. It’s a great way to demonstrate your capabilities beyond just words.
✨Tip Number 3
Prepare for the interview by brushing up on your ETL concepts and data modelling knowledge. Be ready to discuss how you've tackled challenges in past projects, especially using Databricks and Spark.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you're serious about joining the team!
Some tips for your application 🫡
Show Off Your Skills: Make sure to highlight your experience with data pipelines and Databricks in your application. We want to see how you've used Python and Spark to create scalable solutions, so don’t hold back!
Be Clear and Concise: When documenting your past projects or experiences, keep it straightforward. We appreciate clarity, especially when it comes to data flows and ETL logic. Use bullet points if it helps!
Tailor Your Application: Take a moment to customise your application for this role. Mention specific skills from the job description, like your familiarity with Jira or your understanding of data governance. It shows us you’re genuinely interested!
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way to ensure your application gets into the right hands and makes a great first impression!
How to prepare for a job interview at J.P. Morgan
✨Know Your Data Tools
Make sure you brush up on your knowledge of Databricks, Python, and Spark. Be ready to discuss how you've used these tools in past projects, as well as any challenges you faced and how you overcame them.
✨Showcase Your Collaboration Skills
Since this role involves working closely with Data Architects and Business Analysts, be prepared to share examples of successful teamwork. Highlight how you’ve communicated complex data concepts to non-technical stakeholders.
✨Understand ETL and Data Governance
Familiarise yourself with ETL processes and data governance standards. Be ready to explain how you ensure data quality and compliance in your work, as this will be crucial for the role.
✨Prepare Questions About the Role
Think of insightful questions to ask during the interview. This shows your genuine interest in the position and helps you understand how you can contribute to the team’s goals and objectives.