At a Glance
- Tasks: Build resilient data pipelines and transform raw data into valuable insights.
- Company: Join a dynamic team in the Financial Services sector in Manchester.
- Benefits: Permanent full-time role with competitive salary and growth opportunities.
- Other info: Collaborative environment with a focus on innovation and career development.
- Why this job: Make an impact by working on complex projects with cutting-edge technology.
- Qualifications: Degree in relevant field and experience in data engineering required.
The predicted salary is between £50,000 and £65,000 per year.
We are seeking highly skilled and experienced Azure Data Engineers to join a newly formed group focused on data. In this role you will be a key member of the team, working on a complex and challenging project within the Financial Services/Capital Markets industry. The primary focus of the role is building resilient, reusable data pipelines to extract, load, and transform raw data into a relational data model. The successful candidate will work across complex, multi-source datasets including loan servicing systems, property and valuation platforms, collections systems, and third-party data providers, delivering reliable and auditable data at scale.
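To make the extract-load-transform pattern described above concrete, here is a toy sketch of loading raw records into a staging table and transforming them into a relational target. This is purely illustrative: SQLite stands in for the Azure warehouse/lake stack the role actually uses, and all field and table names are hypothetical.

```python
import sqlite3

# Hypothetical raw feed; in practice this would come from a loan
# servicing system or a third-party data provider.
raw_records = [
    {"loan_id": "L001", "balance": "250000.00", "property_ref": "P9"},
    {"loan_id": "L002", "balance": "98500.50", "property_ref": "P4"},
    {"loan_id": "L002", "balance": "98500.50", "property_ref": "P4"},  # duplicate row
]

conn = sqlite3.connect(":memory:")

# Load: land the raw data untouched in a staging table.
conn.execute("CREATE TABLE staging_loans (loan_id TEXT, balance TEXT, property_ref TEXT)")
conn.executemany(
    "INSERT INTO staging_loans VALUES (:loan_id, :balance, :property_ref)",
    raw_records,
)

# Transform: deduplicate and cast into the relational target model.
conn.execute(
    """
    CREATE TABLE loans AS
    SELECT DISTINCT loan_id, CAST(balance AS REAL) AS balance, property_ref
    FROM staging_loans
    """
)

rows = conn.execute("SELECT loan_id, balance FROM loans ORDER BY loan_id").fetchall()
print(rows)
```

In a production Azure pipeline the same load-then-transform shape would typically be expressed with ADF for orchestration and Databricks/PySpark or Spark SQL for the transform step.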
Key Responsibilities
- Serve as the team’s ADF, Databricks, Python, PySpark & Spark SQL technical expert
- Own the day-to-day collection and ingestion of raw data into corporate data assets
- Work with the team to formalise data flows and data standards
- Enable trusted datasets for portfolio analytics, asset strategy, finance, and risk
- Supervise all data ingestion and integration processes from source to target, including the data warehouse and data lake
- Performance-tune and optimise all data ingestion and data integration processes
- Partner with Data Stewards and Business Analysts to understand the nature of the data being handled and what an optimal Data Pipeline for it should look like
- Design solutions that are aligned to the target state Data Architecture
About you
- Degree in Computer Science, Information Systems, Data Science, or a related field is preferable
- Proven experience building resilient, reusable Data Pipelines as a Data Engineer or equivalent
- Resourceful, motivated self-starter with the ability to collaborate across business and technology
- Strong analytical, verbal, and written communication skills
- A background in financial data domains (IBOR/ABOR, transactions, market data, reference data)
- Strong experience as a Data Engineer within Real Estate, Credit, Banking, or NPL Asset Management
- Microsoft certification is a plus
Data Engineer employer: Arrow
Contact Detail:
Arrow Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role
✨Tip Number 1
Network like a pro! Reach out to your connections in the industry, especially those in financial services or data engineering. A friendly chat can lead to insider info about job openings that aren't even advertised yet.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your best data pipelines and projects. Use platforms like GitHub to share your work, and don’t forget to highlight your experience with Azure Data Factory, Databricks, and Python.
✨Tip Number 3
Prepare for interviews by brushing up on your technical knowledge. Be ready to discuss how you’ve tackled complex data challenges in the past, especially in financial contexts. Practice common interview questions related to data engineering.
✨Tip Number 4
Apply through our website! We make it easy for you to find roles that match your skills. Plus, it shows you’re genuinely interested in joining our team. Don’t miss out on the chance to be part of something great!
We think you need these skills to ace the Data Engineer role
Some tips for your application 🫡
Tailor Your CV: Make sure your CV speaks directly to the Data Engineer role. Highlight your experience with Azure, Data Pipelines, and any relevant financial data domains. We want to see how your skills match what we're looking for!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about data engineering and how you can contribute to our team. Keep it concise but impactful – we love a good story!
Showcase Your Technical Skills: Don’t forget to mention your expertise in ADF, Databricks, Python, and Spark SQL. We’re keen on seeing how you've used these tools in past projects, so give us some juicy examples!
Apply Through Our Website: We encourage you to apply through our website for a smoother process. It helps us keep track of your application and ensures you don’t miss out on any updates from us. Let’s get started!
How to prepare for a job interview at Arrow
✨Know Your Tech Stack
Make sure you’re well-versed in Azure Data Factory, Databricks, Python, and PySpark. Brush up on your Spark SQL skills too! Be ready to discuss how you've used these technologies in past projects, as this will show your practical experience.
✨Understand the Financial Services Landscape
Familiarise yourself with the financial data domains mentioned in the job description, like IBOR/ABOR and market data. Being able to speak knowledgeably about these areas will demonstrate your understanding of the industry and how your role fits into it.
✨Prepare for Scenario-Based Questions
Expect questions that ask you to solve problems or design data pipelines on the spot. Practice explaining your thought process clearly and logically, as this will showcase your analytical skills and ability to collaborate with others.
✨Showcase Your Communication Skills
Since you'll be working with Data Stewards and Business Analysts, it's crucial to demonstrate strong verbal and written communication skills. Prepare examples of how you've effectively communicated complex data concepts to non-technical stakeholders in the past.