At a Glance
- Tasks: Join us as a Data Engineer to enhance our Data Lakehouse platform using cutting-edge tech.
- Company: Be part of RBC, a leading bank committed to innovation and community impact.
- Benefits: Enjoy a dynamic work environment, coaching support, and opportunities to make a real difference.
- Why this job: Work with top professionals in a collaborative team focused on progressive thinking and growth.
- Qualifications: Two years of data management experience and strong SQL skills are essential.
- Other info: This role is based in London/Newcastle with flexible working options.
The predicted salary is between £36,000 and £60,000 per year.
Job Description
What is the opportunity?
We have an exciting opportunity for a Data Engineer to join the team in our London/Newcastle offices.
The successful candidate will work closely with business and technology teams across Wealth Management Europe (WME) to support the ongoing maintenance and evolution of the Data Lakehouse platform, focusing on the ingestion and modelling of new data and on the evolution of the platform itself, utilising new technologies to improve the performance and accuracy of the data.
What will you do?
- Develop and maintain the Data Lakehouse platform infrastructure using the Microsoft Azure technology stack, including Databricks and Data Factory.
- Manage data pipelines consisting of a series of stages through which data flows (for example, from data sources or endpoints of acquisition, to integration, to consumption for specific use cases). Create, maintain, and optimise these pipelines as workloads move from development to production (see the sketch after this list).
- Apply a good understanding of SQL and PySpark to create new, and modify existing, Notebooks, Functions and Workflows that support efficient reporting and analytics for the business.
- Create, maintain, and develop the Dev, UAT and Production environments, ensuring consistency between them.
- Use innovative and modern tools, techniques and architectures to partially or completely automate the most common, repeatable and tedious data preparation and integration tasks, minimising manual, error-prone processes and improving productivity.
- Work confidently with GitHub (or other version control tooling) and with data and schema comparisons via Visual Studio.
- Champion the DevOps process: ensure the latest techniques are used, that new or changed source code and data structures follow the agreed development and release processes, and that all productionised code is adequately documented and reviewed.
- Identify, source, stage, and model internal process improvements to automate manual processes and optimise data delivery for greater scalability, as part of the end-to-end data lifecycle.
- Actively engage with the team and wider business areas to foster relationships and develop thought leadership.
- Follow the established Agile working methodology and collaborate effectively in sprints, meetings, and stand-ups.
- Be curious and knowledgeable about new data initiatives, applying your data and/or domain understanding to new data requirements, and propose appropriate (and innovative) data ingestion, preparation, integration and operationalisation techniques to address them optimally.
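As a hedged illustration of the pipeline and notebook work described above, here is a minimal PySpark sketch of a single Databricks notebook stage: it ingests raw files into a bronze Delta table and models a cleaned silver table for reporting. The paths, table names and columns are hypothetical, invented for this example rather than taken from the role.

```python
# Hedged sketch only: one bronze -> silver stage of a lakehouse pipeline.
# All paths, table names and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # supplied automatically on Databricks

# Ingest: append newly landed CSV files from the raw zone to a bronze Delta table.
raw = (
    spark.read.option("header", "true")
    .csv("/mnt/raw/transactions/")  # assumed landing path
    .withColumn("ingested_at", F.current_timestamp())
)
raw.write.format("delta").mode("append").saveAsTable("bronze.transactions")

# Model: de-duplicate and type the data for the silver table that reporting
# notebooks and workflows query.
silver = (
    spark.table("bronze.transactions")
    .dropDuplicates(["transaction_id"])
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("amount").isNotNull())
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.transactions")
```

In practice, a stage like this would typically be scheduled by Azure Data Factory or a Databricks Workflow as the workload moves from Dev through UAT to Production.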
What do you need to succeed?
Must-have
- At least two years of work experience in data management disciplines, including data integration, modelling, optimisation and data quality, and/or other areas directly relevant to data engineering responsibilities and tasks.
- At least two years of experience working in cross-functional teams and collaborating with business stakeholders in support of departmental and/or multi-departmental data management and analytics initiatives.
- Strong experience with Data Management architectures such as Data Warehouse, Data Lake and Data Hub, and with the supporting processes such as Data Integration, Governance and Metadata Management.
- Strong experience working with large, heterogeneous datasets to build and optimise data pipelines, pipeline architectures and integrated datasets using traditional data integration technologies.
- Strong experience with popular database programming languages for relational databases (SQL, T-SQL).
- Knowledge of SQL-on-Hadoop tools and technologies, including Hive from an open-source perspective, and Azure Synapse Analytics (SQL Data Warehouse), Azure Data Factory (ADF), Databricks and others from a commercial vendor perspective (a short illustration follows this list).
- Adept in agile methodologies and capable of applying DevOps, and increasingly DataOps, principles to data pipelines to improve the communication, integration, reuse and automation of data flows between data managers and consumers across an organisation.
- Basic experience working with data governance/data quality and data security teams, and specifically with information stewards and privacy and security officers, to move data pipelines into production with appropriate data quality, governance and security standards and certification. Ability to build quick prototypes and to translate them into data products and services in a diverse ecosystem.
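For context on the SQL and SQL-on-Spark requirements above, here is a minimal sketch of relational-style querying over a lakehouse table from a PySpark session; the table and columns are again hypothetical.

```python
# Hedged sketch only: SQL-on-Spark querying of a lakehouse table.
# The table and columns are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Aggregate a (hypothetical) silver table with ordinary SQL, executed by Spark.
monthly = spark.sql("""
    SELECT date_trunc('month', transaction_date) AS month,
           SUM(amount)                           AS total_amount
    FROM silver.transactions
    GROUP BY date_trunc('month', transaction_date)
    ORDER BY month
""")
monthly.show()
```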
Nice-to-have
- Knowledge of Terraform.
- Experience with advanced analytics tools for object-oriented/object function scripting, using languages such as Python, Java, C++, Scala, R, and others.
What is in it for you?
We thrive on the challenge to be our best – progressive thinking to keep growing and working together to deliver trusted advice to help our clients thrive and communities prosper. We care about each other, reaching our potential, making a difference to our communities, and achieving success that is mutual.
- Leaders who support your development through coaching and managing opportunities.
- Opportunities to work with the best in the field.
- Ability to make a difference and lasting impact.
- Work in a dynamic, collaborative, progressive, and high-performing team.
Agency Notice
RBC Group does not accept agency resumés. Please do not forward resumés to our employees or to any other company location. RBC Group only pays fees to agencies with which it has entered into a prior agreement to do so, and in any event does not pay fees related to unsolicited resumés. Please contact the Recruitment function for additional details.
#LI-SS2
Job Skills
Big Data Management, Cloud Computing, Database Development, Data Mining, Data Warehousing (DW), ETL Processing, Group Problem Solving, Quality Management, Requirements Analysis
Additional Job Details
Address: 12 Smithfield Street, London
City: London
Country: United Kingdom
Work hours/week: 35
Employment Type: Full time
Platform: Wealth Management
Job Type: Regular
Pay Type: Salaried
Posted Date: 2025-07-01
Application Deadline: 2025-07-16
Note: Applications will be accepted until 11:59 PM on the day prior to the application deadline date above
Inclusion and Equal Opportunity Employment
At RBC, we believe an inclusive workplace that has diverse perspectives is core to our continued growth as one of the largest and most successful banks in the world. Maintaining a workplace where our employees feel supported to perform at their best, effectively collaborate, drive innovation, and grow professionally helps to bring our Purpose to life and create value for our clients and communities. RBC strives to deliver this through policies and programs intended to foster a workplace based on respect, belonging and opportunity for all.
Join our Talent Community
Stay in the know about great career opportunities at RBC. Sign up and get customized info on our latest jobs, career tips and recruitment events that matter to you.
Expand your limits and create a new future together at RBC. Find out how we use our passion and drive to enhance the well-being of our clients and communities at jobs.rbc.com.
Data Engineer employer: Royal Bank of Canada
Contact Detail:
Royal Bank of Canada Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role
✨Tip Number 1
Familiarise yourself with the Microsoft Azure technology stack, especially Databricks and Data Factory. Having hands-on experience or projects showcasing your skills in these tools can set you apart from other candidates.
✨Tip Number 2
Engage with the data engineering community on platforms like LinkedIn or GitHub. Sharing your insights or contributing to open-source projects can help you build a network and demonstrate your expertise in data management architectures.
✨Tip Number 3
Showcase your understanding of Agile methodologies and DevOps principles by discussing relevant experiences in team settings. Highlighting your ability to collaborate effectively in sprints and meetings will resonate well with potential employers.
✨Tip Number 4
Stay updated on the latest trends in data engineering and analytics. Being able to discuss new data initiatives and innovative techniques during interviews will demonstrate your curiosity and commitment to continuous learning.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights relevant experience in data management, integration, and optimisation. Use keywords from the job description, such as 'Data Lakehouse', 'Azure', and 'SQL', to demonstrate your fit for the role.
Craft a Compelling Cover Letter: In your cover letter, explain why you're passionate about data engineering and how your skills align with the company's needs. Mention specific projects or experiences that showcase your ability to manage data pipelines and work with cross-functional teams.
Showcase Technical Skills: Clearly outline your technical skills related to the Microsoft Azure technology stack, SQL, and PySpark. Provide examples of how you've used these technologies in past roles to improve data processes or automate tasks.
Highlight Collaboration Experience: Since the role involves working closely with business stakeholders, emphasise your experience in cross-functional teams. Share examples of successful collaborations and how they contributed to data management initiatives.
How to prepare for a job interview at Royal Bank of Canada
✨Showcase Your Technical Skills
Be prepared to discuss your experience with SQL, PySpark, and the Microsoft Azure technology stack. Bring examples of data pipelines you've built or optimised, and be ready to explain the challenges you faced and how you overcame them.
✨Demonstrate Your Collaboration Experience
Since the role involves working closely with cross-functional teams, share specific instances where you've successfully collaborated with business stakeholders. Highlight how you communicated technical concepts to non-technical team members.
✨Familiarise Yourself with Agile Methodologies
Understand the principles of Agile and be ready to discuss how you've applied these methodologies in past projects. Mention any experience you have with sprints, standups, and how you adapt to changing requirements.
✨Prepare for Problem-Solving Questions
Expect questions that assess your problem-solving abilities, especially related to data quality and governance. Think of scenarios where you identified issues in data processes and how you proposed solutions to improve efficiency.