At a Glance
- Tasks: Join us to build and optimise data processing frameworks using PySpark and cloud technologies.
- Company: Talan Data x AI, a top consultancy known for innovation and a great workplace culture.
- Benefits: Enjoy 25 days holiday, private medical insurance, and a unique reward programme for sabbaticals.
- Why this job: Make an impact in data engineering while working with cutting-edge tech and industry experts.
- Qualifications: Experience with PySpark, cloud platforms, and strong programming skills in Python and SQL.
- Other info: Dynamic team environment with excellent training, development, and career growth opportunities.
The predicted salary is between £28,800 and £43,200 per year.
Talan Data x AI is a leading Data Management and Analytics consultancy, working closely with leading software vendors and top industry experts across a range of sectors, unlocking value and insight from their data. At Talan Data x AI, innovation is at the heart of our client offerings, and we help companies to further improve their efficiency with modern processes and technologies, such as Machine Learning (ML) and Artificial Intelligence (AI).
Our consultants are at the heart of everything we do, and we have been recertified as a 2025 Great Place to Work. This achievement not only highlights Talan Data x AI’s positive organisational culture but also strengthens its reputation as an employer of choice within the industry. We invest heavily in the training and development of our teams and hold regular socials in each region to encourage engagement and network building.
Job Description
Your skills and attributes for success:
- An excellent team player who is equally comfortable working independently.
- Excellent client-facing skills, with experience on client projects.
- A proactive self-starter.
- Excellent verbal and written communication and presentation skills.
- Ability to build internal and external relationships.
- Effective negotiating and influencing skills.
- Ability to think creatively and propose innovative solutions.
To qualify for this role, you must have:
- Proven experience with PySpark and Apache Spark, including an understanding of the fundamentals of how Spark works (see the illustrative sketch after these lists).
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and data lake architectures.
- Strong programming skills in Python and SQL, including the ability to write complex SQL queries.
- Experience with GitHub and CI/CD practices.
- Experience of DevOps and infrastructure deployments (Azure and Databricks).
- A proactive awareness of industry standards, regulations, and developments.
- Multi-skilled experience in one or more of the following disciplines: Data Management, Data Engineering, Data Warehousing, Data Modelling, Data Quality, Data Integration, Data Analytics, Data Visualisation, Data Science and Business Intelligence.
- Project experience with one or more of the following technologies (Tableau, Power BI, Azure, AWS, GCP, Snowflake) and their integration with Databricks is advantageous.
In this role, you will:
- Support development of the Azure Databricks Lakehouse platform, shaping frameworks and solutions that other engineering teams will adopt in future data projects.
- Build, optimise, and maintain data processing frameworks using Python, ensuring performance, scalability, and maintainability.
- Support DBT integration and best practices for transformation pipelines within Databricks.
- Apply software engineering principles, including:
  - Full development lifecycle management
  - Source control, automated testing, CI/CD
  - Design patterns and reusable solutions
  - Coding standards and patterns
- Collaborate with technical solution authorities, ensuring alignment with governance, design decisions, and platform standards.
- Collaborate closely with the Cloud Architecture and Data Architecture teams to deliver approved solutions.
- Take ownership of requirements, communicate effectively across teams, and deliver high-quality solutions.
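For illustration only (this sketch is not part of the job description; the data lake path, table, and column names are all hypothetical), here is a minimal PySpark example of the kind of work described above: a DataFrame transformation over data-lake files, followed by the same aggregation as a windowed Spark SQL query.

```python
# Illustrative sketch only; paths, columns, and logic are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("illustration").getOrCreate()

# Read a hypothetical Parquet dataset from a data lake location.
orders = spark.read.parquet("/mnt/datalake/orders")

# DataFrame API: monthly revenue per customer.
monthly = (
    orders
    .withColumn("month", F.date_trunc("month", F.col("order_date")))
    .groupBy("customer_id", "month")
    .agg(F.sum("amount").alias("revenue"))
)

# The same aggregation in Spark SQL, with a window function added;
# the kind of "complex SQL query" the requirements mention.
orders.createOrReplaceTempView("orders")
ranked = spark.sql("""
    SELECT customer_id,
           date_trunc('month', order_date) AS month,
           SUM(amount)                     AS revenue,
           RANK() OVER (PARTITION BY date_trunc('month', order_date)
                        ORDER BY SUM(amount) DESC) AS rank_in_month
    FROM orders
    GROUP BY customer_id, date_trunc('month', order_date)
""")
ranked.show()
```

Note that the window function is evaluated after the GROUP BY, which is why it can legally order by SUM(amount).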
You must be:
- Willing to work on client sites, potentially for extended periods.
- Willing to travel for work purposes and be happy to stay away from home for extended periods.
- Eligible to work in the UK without restriction.
Additional Information
What we offer:
- BDP Plus – a reward programme in which you accrue points that can be exchanged for a three-month paid sabbatical or the cash equivalent.
- 25 days holiday + bank holidays.
- 5 days holiday buy/sell option.
- Private medical insurance.
- Life cover.
- Cycle to work scheme.
- Eligibility for company pension scheme (5% employer contribution, salary sacrifice option).
- Employee assistance programme.
- Bespoke online learning via Udemy for Business.
Employer: Talan Group
Contact: Talan Group Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineering Consultant - PySpark role
✨Tip Number 1
Network like a pro! Get out there and connect with industry folks on LinkedIn or at local meetups. You never know who might have the inside scoop on job openings or can put in a good word for you.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those involving PySpark and cloud platforms. This will give potential employers a taste of what you can do and set you apart from the crowd.
✨Tip Number 3
Prepare for interviews by practising common questions related to data engineering and your technical skills. We recommend doing mock interviews with friends or using online platforms to boost your confidence.
✨Tip Number 4
Apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you’re genuinely interested in joining our awesome team at Talan Data x AI.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Data Engineering Consultant role. Highlight your experience with PySpark, cloud platforms, and any relevant projects you've worked on. We want to see how your skills align with what we're looking for!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about data engineering and how you can contribute to our innovative team at Talan Data x AI. Keep it concise but impactful!
Showcase Your Projects: If you've got any projects that demonstrate your skills in Python, SQL, or data processing frameworks, make sure to mention them. We love seeing real-world applications of your expertise, so don’t hold back!
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way to ensure your application gets into the right hands. Plus, it shows us you're keen on joining our awesome team!
How to prepare for a job interview at Talan Group
✨Know Your PySpark Inside Out
Make sure you brush up on your PySpark and Apache Spark knowledge. Be ready to discuss the fundamentals of how Spark works and any projects you've built with these technologies. This will show that you're not just familiar with the tools but can also apply them effectively.
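For example, one fundamental worth being able to explain is lazy evaluation: transformations only build an execution plan, and nothing runs until an action is called. A tiny, self-contained sketch:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lazy-eval").getOrCreate()

df = spark.range(1_000_000)               # transformation: builds a plan, runs nothing
evens = df.filter(F.col("id") % 2 == 0)   # still lazy: only extends the plan
print(evens.count())                      # action: triggers the actual computation
```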
✨Showcase Your Cloud Experience
Since experience with cloud platforms like AWS, Azure, or GCP is crucial, prepare examples of how you've used these in past projects. Highlight your understanding of data lake architectures and how you've optimised data processing frameworks in the cloud.
✨Communicate Clearly and Confidently
Excellent communication skills are key for this role. Practise explaining complex technical concepts in simple terms, as you'll need to convey ideas to clients and team members alike. Consider doing mock interviews to refine your presentation skills.
✨Demonstrate Your Problem-Solving Skills
Be prepared to think creatively during the interview. Talan Data x AI values innovative solutions, so come equipped with examples of challenges you've faced in previous roles and how you approached solving them. This will showcase your proactive nature and ability to think outside the box.