At a Glance
- Tasks: Design and optimise data pipelines using PySpark for innovative financial solutions.
- Company: Leading global financial data organisation with a focus on transformation.
- Benefits: Competitive market rate, potential contract extension, and hands-on experience.
- Why this job: Join a cutting-edge team driving innovation in data engineering.
- Qualifications: Strong PySpark skills and experience with data pipelines required.
- Other info: Office-based role in London with opportunities for professional growth.
The predicted salary is between £48,000 and £72,000 per year.
Location: London (office-based). Contract: 6 months (potential extension). Start: ASAP. Rate: Market rate (Inside IR35).
We are looking for experienced PySpark + Fabric Developers to join a major transformation programme with a leading global financial data and infrastructure organisation. This is an exciting opportunity to work on cutting-edge data engineering solutions, driving innovation and performance at scale.
Responsibilities
- Design, build, and optimise data pipelines for both batch and streaming workloads.
- Develop and manage dataflows and semantic models to support analytics and reporting.
- Implement complex data transformations, aggregations, and joins with a focus on performance and reliability (a minimal sketch of this kind of step follows this list).
- Apply robust data validation, cleansing, and profiling techniques to maintain accuracy.
- Enforce role-based access, data masking, and compliance standards.
- Tune and optimise workloads to minimise latency and enhance throughput.
- Collaborate with analysts and stakeholders to translate business needs into technical solutions.
- Maintain clear documentation and contribute to internal best practices.
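To give a flavour of the day-to-day work, below is a minimal PySpark sketch of one batch step covering cleansing, masking, a join, and an aggregation. All paths, table names, and columns are hypothetical, purely for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-trades-batch").getOrCreate()

# Hypothetical raw inputs; names and schemas are illustrative only.
trades = spark.read.parquet("/lake/raw/trades")
instruments = spark.read.parquet("/lake/raw/instruments")

# Validation and cleansing: drop rows missing keys, then de-duplicate.
clean = (
    trades
    .dropna(subset=["trade_id", "instrument_id"])
    .dropDuplicates(["trade_id"])
)

# Simple masking step: hash a sensitive identifier before it leaves raw.
clean = (
    clean
    .withColumn("counterparty_hash", F.sha2(F.col("counterparty_id"), 256))
    .drop("counterparty_id")
)

# Join, aggregate, and write a partitioned daily summary.
daily_summary = (
    clean
    .join(instruments, "instrument_id")
    .groupBy("trade_date", "asset_class")
    .agg(
        F.count("trade_id").alias("trade_count"),
        F.sum("notional").alias("total_notional"),
    )
)

daily_summary.write.mode("overwrite").partitionBy("trade_date").parquet(
    "/lake/curated/daily_summary"
)
```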
Requirements
- Strong hands-on experience with PySpark (RDDs, DataFrames, Spark SQL); a short example follows this list.
- Proven ability to build and optimise ETL pipelines and dataflows.
- Familiarity with Microsoft Fabric or similar lakehouse/data platform environments.
- Experience with Git, CI/CD pipelines, and automated deployment.
- Knowledge of market data, transactional systems, or financial datasets.
- Excellent communication skills and collaborative mindset.
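As a rough benchmark of the level expected, here is the same aggregation expressed with the DataFrame API and with Spark SQL (the data and column names are made up):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sql-vs-dataframe").getOrCreate()

# Tiny made-up dataset for illustration.
df = spark.createDataFrame(
    [("Financials", 101.5), ("Energy", 98.2), ("Energy", 102.4)],
    ["sector", "price"],
)

# DataFrame API.
by_sector = df.groupBy("sector").agg(F.avg("price").alias("avg_price"))

# The equivalent Spark SQL.
df.createOrReplaceTempView("prices")
by_sector_sql = spark.sql(
    "SELECT sector, AVG(price) AS avg_price FROM prices GROUP BY sector"
)
```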
Desirable
- Experience with Azure Data Lake, OneLake, or distributed computing environments.
- Understanding of data security and compliance (e.g., GDPR, SOX).
- Exposure to preparing datasets for Power BI.
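On the last point, in a lakehouse setting "preparing a dataset for Power BI" often amounts to landing a curated Delta table. A short sketch continuing the pipeline example above, assuming a Delta-enabled Spark session and a hypothetical table name:

```python
# Writing the curated result as a Delta table so downstream tools such as
# Power BI can consume it; assumes Delta Lake is configured on the session.
daily_summary.write.format("delta").mode("overwrite").saveAsTable(
    "curated.daily_summary"
)
```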
PySpark Developer employer: Queen Square Recruitment Ltd
Contact Details:
Queen Square Recruitment Ltd Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the PySpark Developer role
✨Tip Number 1
Network like a pro! Reach out to your connections in the industry, attend meetups, and join online forums. You never know who might have the inside scoop on a PySpark Developer role or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your best projects, especially those involving PySpark and data engineering. This will give potential employers a taste of what you can do and set you apart from the crowd.
✨Tip Number 3
Prepare for interviews by brushing up on common technical questions related to PySpark and data pipelines. Practice explaining your thought process clearly, as communication is key in collaborative environments.
✨Tip Number 4
Don’t forget to apply through our website! We’ve got loads of opportunities that might be perfect for you. Plus, it’s a great way to ensure your application gets seen by the right people.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with PySpark and data engineering. We want to see how your skills match the job description, so don’t be shy about showcasing your relevant projects!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re excited about this role and how your background makes you a perfect fit. We love seeing genuine enthusiasm for the position.
Showcase Your Technical Skills: When detailing your experience, focus on your hands-on work with PySpark, ETL pipelines, and any relevant tools like Microsoft Fabric. We’re looking for specifics that demonstrate your expertise!
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for this exciting opportunity right away!
How to prepare for a job interview at Queen Square Recruitment Ltd
✨Know Your PySpark Inside Out
Make sure you brush up on your PySpark skills before the interview. Be ready to discuss RDDs, DataFrames, and Spark SQL in detail. Practise explaining how you've built and optimised ETL pipelines in past projects, as this will show your hands-on experience.
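One concrete optimisation worth having to hand in an interview is broadcasting a small reference table so Spark avoids shuffling the large side of a join. A minimal sketch with hypothetical tables:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join-tuning").getOrCreate()

# Hypothetical inputs: a large fact table and a small reference table.
trades = spark.read.parquet("/lake/raw/trades")
currencies = spark.read.parquet("/lake/ref/currencies")

# The broadcast hint ships the small table to every executor, so the
# large table is joined without a shuffle.
enriched = trades.join(broadcast(currencies), "currency_code")
```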
✨Familiarise Yourself with Microsoft Fabric
Since the role involves working with Microsoft Fabric or similar platforms, do some research on its features and functionalities. Be prepared to discuss how you've used it or similar lakehouse environments in your previous roles, and think of examples where you've implemented data transformations.
✨Showcase Your Collaboration Skills
This position requires a collaborative mindset, so be ready to share examples of how you've worked with analysts and stakeholders. Highlight any experiences where you translated business needs into technical solutions, as this will demonstrate your ability to communicate effectively.
✨Prepare for Technical Questions
Expect technical questions related to data validation, cleansing, and profiling techniques. Brush up on your knowledge of data security and compliance standards like GDPR and SOX, as these are crucial in the financial sector. Being well-prepared will help you stand out as a knowledgeable candidate.
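As a quick warm-up for those questions, a first profiling pass in PySpark might look like this (the path and columns are hypothetical):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("profiling").getOrCreate()
df = spark.read.parquet("/lake/raw/trades")  # hypothetical path

# Null counts per column: a common first validation check.
null_counts = df.select(
    [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in df.columns]
)
null_counts.show()

# Basic summary statistics for profiling numeric columns.
df.describe().show()
```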