At a Glance
- Tasks: Join our Data Engineering team to design and maintain scalable data pipelines.
- Company: Frasers Group is revolutionising retail with top sports and luxury brands globally.
- Benefits: Enjoy perks like gym memberships, recognition schemes, and potential bonuses for performance.
- Why this job: Be part of a fearless culture that encourages innovation and personal growth in data engineering.
- Qualifications: Experience with Databricks, Spark, and Azure services is essential; strong problem-solving skills are a must.
- Other info: Opportunity to connect with leadership and gain insights through unique company events.
The predicted salary is between £36,000 and £60,000 per year.
At Frasers Group we’re rethinking retail. Through digital innovation and unique store experiences, we’re serving our consumers with the world’s best sports, premium and luxury brands globally. As a leader in the industry, we’re elevating the retail experience for our consumers through our collection of established brands, including Sports Direct, FLANNELS, USC, Frasers, and GAME.
Our vision - we are building the world’s most admired and compelling brand ecosystem. Our purpose - we are elevating the lives of the many with access to the world’s best brands and experiences. At Frasers Group, we fear less and do more. Our people are forward thinkers who are driven to operate outside of their comfort zone to change the future of retail, embracing challenges along the way.
The potential to elevate your career is massive, the experience unrivalled. To be able to make the most of it you need to live and breathe our principles:
- Think without limits - Think fast, think fearlessly, and take the team with you.
- Own it and back yourself - Own the basics, own your role and own the results.
- Be relevant - Relevant to our people, our partners and the planet.
We are currently looking for a Mid-Senior Data Engineer to join our growing Data Engineering team, helping to develop, maintain, support, and integrate our growing number of data systems. You will be instrumental in designing, building, and maintaining robust and scalable data pipelines that power our operational data subscribers and analytical platforms. You will work with a diverse range of data sources, integrating both real-time streams and micro-batches, and connecting with varied endpoints to move data at speed and at scale.
The right candidate will bring broad, deep knowledge of the data landscape, with a strong focus on Databricks, and will be keen to build on that knowledge, learning new technologies along the way while supporting both future and legacy technologies and processes.
You will be coding, testing, and documenting new or modified data systems; creating scalable, repeatable, secure pipelines and applications for both operational data and analytics, serving consumers both inside and outside the business. You will grow our capabilities, solving new data problems and challenges every day.
Key Responsibilities:
- Design, Build, and Optimise Real-Time Data Pipelines: Develop and maintain robust and scalable stream and micro-batch data pipelines using Databricks, Spark (PySpark/SQL), and Delta Live Tables (a pipeline sketch follows this list).
- Implement Change Data Capture (CDC): Implement efficient CDC mechanisms to capture and process data changes from various source systems in near real-time.
- Master the Delta Lake: Leverage the full capabilities of Delta Lake, including ACID transactions, time travel, and schema evolution, to ensure data quality and reliability.
- Champion Data Governance with Unity Catalog: Implement and manage data governance policies, data lineage, and fine-grained access control using Databricks Unity Catalog (see the governance sketch after this list).
- Enable Secure Data Sharing with Delta Sharing: Design and implement secure and governed data sharing solutions to distribute data to both internal and external consumers without data replication.
- Integrate with Web Services and APIs: Develop and manage integrations to push operational data to key external services, as well as internal APIs.
- Azure Data Ecosystem: Work extensively with core Azure data services, including Azure Data Lake Storage (ADLS) Gen2, Azure Functions, and Azure Event Hubs, alongside CI/CD tooling.
- Data Modelling and Warehousing: Apply strong data modelling principles to design and implement logical and physical data models for our analytical and operational data stores.
- Monitoring and Performance Tuning: Proactively monitor data pipeline performance, identify bottlenecks, and implement optimisations to ensure low latency and high throughput.
- Collaboration and Mentorship: Collaborate with cross-functional teams, including software engineers, data scientists, and product managers, and mentor junior data engineers.
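To make the stack above concrete, here is a minimal sketch of the kind of pipeline the role describes: a Delta Live Tables job that lands change events with Auto Loader and applies them to a silver table with CDC semantics. Every path, table, and column name (orders_cdc_raw, order_id, event_timestamp, op) is a hypothetical placeholder, not a Frasers Group system.

```python
# Hedged sketch of a DLT CDC pipeline; all names and paths are placeholders.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw change events landed as JSON files (bronze).")
def orders_cdc_raw():
    return (
        spark.readStream.format("cloudFiles")            # Databricks Auto Loader
        .option("cloudFiles.format", "json")
        .load("abfss://landing@example.dfs.core.windows.net/orders_cdc/")
    )

# Materialise the current state of each order from the change feed (silver).
dlt.create_streaming_table("orders_silver")

dlt.apply_changes(
    target="orders_silver",
    source="orders_cdc_raw",
    keys=["order_id"],                        # primary key in the source system
    sequence_by=F.col("event_timestamp"),     # orders late/out-of-sequence events
    apply_as_deletes=F.expr("op = 'DELETE'"), # treat these change rows as deletes
    except_column_list=["op", "event_timestamp"],
    stored_as_scd_type=1,                     # keep only the latest row per key
)
```

The same APPLY CHANGES logic can equally be expressed in DLT SQL; the Python form is shown only because the role calls out PySpark.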
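Governance and sharing, similarly sketched and similarly hypothetical: fine-grained access via Unity Catalog grants and a Delta Sharing share, run from a Databricks notebook. The catalog, schema, group, share, and recipient names are all placeholders.

```python
# Hedged sketch of Unity Catalog access control and Delta Sharing;
# every catalog/schema/group/share/recipient name below is a placeholder.

# Fine-grained access: analysts may read one silver table and nothing more.
spark.sql("GRANT USE CATALOG ON CATALOG retail TO `analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA retail.silver TO `analysts`")
spark.sql("GRANT SELECT ON TABLE retail.silver.orders TO `analysts`")

# Delta Sharing: expose the table to an external consumer without replication.
spark.sql("CREATE SHARE IF NOT EXISTS partner_share")
spark.sql("ALTER SHARE partner_share ADD TABLE retail.silver.orders")
spark.sql("CREATE RECIPIENT IF NOT EXISTS acme_partner")
spark.sql("GRANT SELECT ON SHARE partner_share TO RECIPIENT acme_partner")
```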
Qualifications - What We’re Looking For:
- Proven Databricks Expertise: Extensive hands-on experience with the Databricks Lakehouse Platform is essential.
- Strong Spark and Python/SQL Skills: Proficiency in Spark programming (PySpark and/or Scala) and expert-level SQL skills.
- Real-Time Data Processing: Demonstrable experience in building and managing stream and micro-batch processing pipelines using technologies like Spark Structured Streaming or Delta Live Tables.
- Deep Understanding of Delta Lake Concepts: Thorough knowledge of Delta Lake architecture and features (ACID transactions, time travel, optimisation techniques); a short sketch follows this list.
- Experience with Databricks Advanced Features: Practical experience with Change Data Capture (CDC), Unity Catalog for data governance, and Delta Sharing for secure data collaboration.
- Web Service and API Integration: A proven track record of integrating data pipelines with external web services and REST APIs (see the sketch after this list).
- Solid Azure Experience: Strong experience with core Azure data services (ADLS Gen2, Event Hubs, Azure Functions).
- Data Modelling and Warehousing Fundamentals: A strong understanding of data modelling concepts (e.g., Kimball, Inmon) and experience with data warehousing principles.
- CI/CD and DevOps Mindset: Experience with CI/CD pipelines for data engineering workloads using tools like Git.
- Excellent Problem-Solving and Communication Skills: The ability to troubleshoot complex data issues and communicate technical concepts effectively to both technical and non-technical audiences.
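As a quick illustration of those Delta Lake features, a PySpark sketch assuming a Databricks environment and an existing Delta table; the path /mnt/silver/orders and all column names are placeholders.

```python
# Hedged Delta Lake examples: time travel, schema evolution, optimisation.
from delta.tables import DeltaTable

# Time travel: read the table as of an earlier version or timestamp.
v0 = spark.read.format("delta").option("versionAsOf", 0).load("/mnt/silver/orders")
jan = (spark.read.format("delta")
       .option("timestampAsOf", "2024-01-01")
       .load("/mnt/silver/orders"))

# Schema evolution: append rows carrying a brand-new column.
new_rows = spark.createDataFrame([(1001, "GB")], ["order_id", "country"])  # toy data
(new_rows.write.format("delta").mode("append")
    .option("mergeSchema", "true")
    .save("/mnt/silver/orders"))

# Optimisation: compact small files and co-locate rows for faster scans.
spark.sql("OPTIMIZE delta.`/mnt/silver/orders` ZORDER BY (customer_id)")

# The transaction log doubles as an audit trail.
DeltaTable.forPath(spark, "/mnt/silver/orders").history().show(truncate=False)
```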
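And one common shape for the API integration point: pushing micro-batches from a streaming query to a REST endpoint with foreachBatch. The endpoint, token, and source table are hypothetical, and a production pipeline would add batching limits, retries, and dead-lettering.

```python
# Hedged sketch: POST each micro-batch of a streaming query to an external API.
import requests

API_URL = "https://api.example.com/v1/orders"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

def push_batch(batch_df, batch_id):
    # Suitable only for small micro-batches; larger volumes need
    # partition-wise delivery rather than a driver-side collect().
    payload = [row.asDict() for row in batch_df.collect()]
    resp = requests.post(API_URL, json=payload, headers=HEADERS, timeout=30)
    resp.raise_for_status()

(spark.readStream.table("retail.silver.orders")  # placeholder source table
    .writeStream
    .foreachBatch(push_batch)
    .option("checkpointLocation", "/mnt/checkpoints/orders_to_api")
    .start())
```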
Desirable:
- GCP and BigQuery Knowledge: Experience with GCP and BigQuery for analytical data workloads, supporting interoperability within our multi-cloud strategy.
Additional Information:
Along with your benefits package, we also offer a wide range of perks for our colleagues:
- Reward, Recognition and Opportunities: Frasers Champion - Our employees are at the heart of our business and we ensure individuals are recognised every single month for their hard work.
- Frasers Festival - an event like no other! Our Frasers Festival is our celebration for Head Office and Retail Staff across the UK and Europe.
- Employee Welfare: Frasers Fit - Our Everlast Gyms team are on a mission to make our workforce the best and fittest on the planet!
What’s next? Our Recruitment Team will be reviewing applications, and all candidates will receive a response, whether successful or unsuccessful.
Data Engineer - Operational Data
Employer: Frasers Group
Contact Details: Frasers Group Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer - Operational Data role
✨Tip Number 1
Familiarise yourself with Databricks and its features, especially Delta Lake. Understanding how to leverage ACID transactions and schema evolution will set you apart as a candidate who can hit the ground running.
✨Tip Number 2
Showcase your experience with real-time data processing. Be prepared to discuss specific projects where you've built or managed stream and micro-batch processing pipelines, as this is crucial for the role.
✨Tip Number 3
Highlight your collaboration skills. The role involves working with cross-functional teams, so be ready to share examples of how you've successfully collaborated with software engineers, data scientists, or product managers in the past.
✨Tip Number 4
Prepare to discuss your problem-solving approach. The ability to troubleshoot complex data issues is essential, so think of specific challenges you've faced and how you resolved them, demonstrating your analytical skills.
We think you need these skills to ace the Data Engineer - Operational Data role
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with Databricks, Spark, and Azure data services. Use specific examples that demonstrate your skills in building data pipelines and working with real-time data processing.
Craft a Compelling Cover Letter: In your cover letter, express your enthusiasm for the role and the company. Mention how your values align with Frasers Group's principles, such as thinking without limits and owning your role.
Showcase Relevant Projects: If you have worked on projects involving Delta Lake, Change Data Capture, or API integrations, be sure to include these in your application. Detail your contributions and the impact of your work.
Prepare for Technical Questions: Anticipate technical questions related to data engineering concepts, particularly around Databricks and Azure. Brush up on your knowledge of data modelling and performance tuning to impress during interviews.
How to prepare for a job interview at Frasers Group
✨Showcase Your Databricks Expertise
Make sure to highlight your hands-on experience with the Databricks Lakehouse Platform. Be prepared to discuss specific projects where you've implemented Databricks features, especially focusing on Delta Lake and its capabilities.
✨Demonstrate Real-Time Data Processing Skills
Prepare examples of how you've built and managed stream and micro-batch processing pipelines. Discuss the technologies you've used, such as Spark Structured Streaming or Delta Live Tables, and be ready to explain the challenges you faced and how you overcame them.
✨Understand Data Governance Principles
Familiarise yourself with data governance policies and practices, particularly in relation to Databricks Unity Catalog. Be ready to discuss how you've implemented data lineage and access control in previous roles.
✨Communicate Effectively
Since you'll be collaborating with cross-functional teams, practice explaining complex technical concepts in simple terms. Prepare to share how you've communicated effectively with both technical and non-technical audiences in past experiences.