At a Glance
- Tasks: Lead the design and maintenance of scalable data pipelines using cutting-edge cloud technologies.
- Company: Join a forward-thinking company focused on delivering world-class data solutions.
- Benefits: Enjoy flexible work options, competitive salary, and opportunities for professional growth.
- Why this job: Be part of a dynamic team that values innovation and collaboration in data engineering.
- Qualifications: Senior experience in Data Engineering with expertise in AWS, Databricks, and programming languages like Python and SQL.
- Other info: Opportunity to mentor a team and influence data strategy across the organisation.
The predicted salary is between £43,200 and £72,000 per year.
For this role, senior experience in Data Engineering is expected, including building automated data pipelines on IBM DataStage & DB2, AWS, and Databricks, from source systems through operational databases to the curation layer, using modern cloud technologies. Experience in delivering complex pipelines will be significantly valuable to how D&G maintains and delivers world-class data pipelines.
Knowledge in the following areas is essential:
- Databricks: Expertise in managing and scaling Databricks environments for ETL, data science, and analytics use cases.
- AWS Cloud: Extensive experience with AWS services such as S3, Glue, Lambda, RDS, and IAM.
- IBM Skills: DB2, DataStage, Tivoli Workload Scheduler, UrbanCode.
- Programming Languages: Proficiency in Python, SQL.
- Data Warehousing & ETL: Experience with modern ETL frameworks and data warehousing techniques.
- DevOps & CI/CD: Familiarity with DevOps practices for data engineering, including infrastructure-as-code (e.g., Terraform, CloudFormation), CI/CD pipelines, and monitoring (e.g., CloudWatch, Datadog).
- Big Data Technologies: Familiarity with big data technologies like Apache Spark, Hadoop, or similar.
- ETL/ELT Tools: Creating common data sets across on-prem (IBM DataStage ETL) and cloud data stores.
- Leadership & Strategy: Lead Data Engineering team(s) in designing, developing, and maintaining highly scalable and performant data infrastructures.
- Customer Data Platform Development: Architect and manage our data platforms using IBM (legacy platform) & Databricks on AWS technologies (e.g., S3, Lambda, Glacier, Glue, EventBridge, RDS) to support real-time and batch data processing needs.
- Data Governance & Best Practices: Implement best practices for data governance, security, and data quality across our data platform. Ensure data is well-documented, accessible, and meets compliance standards.
- Pipeline Automation & Optimisation: Drive the automation of data pipelines and workflows to improve efficiency and reliability.
- Team Management: Mentor and grow a team of data engineers, ensuring alignment with business goals, delivery timelines, and technical standards.
- Cross Company Collaboration: Work closely with all levels of business stakeholders including data scientists, finance analysts, MI, and cross-functional teams to ensure seamless data access and integration with various tools and systems.
- Cloud Management: Lead efforts to integrate and scale cloud data services on AWS, optimising costs and ensuring the resilience of the platform.
- Performance Monitoring: Establish monitoring and alerting solutions to ensure the high performance and availability of data pipelines and systems, preventing impact on downstream consumers.
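To make the pipeline responsibilities above concrete for candidates, here is a miniature sketch of a source-to-curation step with a basic data-quality gate. It is purely illustrative: the field names (`order_id`, `amount`) are hypothetical, and in practice a step like this would run as a Databricks or AWS Glue job over data in S3 rather than over an in-memory string.

```python
# Illustrative source-to-curation step: validate raw records and
# standardise them into a curated layer. Stdlib only; all field
# names are hypothetical examples, not part of the job description.

import csv
import io


def curate(raw_csv: str) -> list[dict]:
    """Apply a simple data-quality gate and standardise raw rows."""
    curated = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        # Data-quality rule: drop rows missing the primary key.
        if not row.get("order_id"):
            continue
        curated.append({
            "order_id": row["order_id"].strip(),
            "amount_gbp": round(float(row["amount"]), 2),
        })
    return curated


raw = "order_id,amount\nA1,19.994\n,5.00\nA2,7.5\n"
print(curate(raw))
# The row with no order_id is rejected; amounts are rounded to pence.
```

In a production pipeline the same pattern (extract, validate, standardise, load) would be expressed in Spark DataFrames and scheduled, monitored, and alerted on, as the responsibilities above describe.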
Big Data Lead employer: FalconSmartIT
Contact Details:
FalconSmartIT Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Big Data Lead role
✨Tip Number 1
Familiarise yourself with the specific tools and technologies mentioned in the job description, such as IBM DataStage, DB2, AWS services, and Databricks. Having hands-on experience or relevant projects to discuss can set you apart during interviews.
✨Tip Number 2
Showcase your leadership skills by preparing examples of how you've successfully led data engineering teams or projects in the past. Highlighting your ability to mentor others and align technical work with business goals will resonate well with us.
✨Tip Number 3
Network with current or former employees of FalconSmartIT on platforms like LinkedIn. Engaging with them can provide insights into our company culture and expectations, which can be invaluable during your interview.
✨Tip Number 4
Prepare to discuss your experience with pipeline automation and optimisation. Be ready to share specific examples of how you've improved efficiency and reliability in previous roles, as this is a key focus for us in this position.
We think you need these skills to ace the Big Data Lead role
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your senior experience in Data Engineering, particularly with IBM DataStage, DB2, AWS, and Databricks. Use specific examples of projects where you've built automated data pipelines and mention any relevant technologies you've worked with.
Craft a Compelling Cover Letter: In your cover letter, emphasise your leadership experience and how you've successfully managed teams in designing and maintaining scalable data infrastructures. Mention your familiarity with DevOps practices and how you can contribute to the company's goals.
Showcase Technical Skills: Clearly list your technical skills related to the job description, such as proficiency in Python and SQL, and experience with ETL frameworks. Highlight any big data technologies like Apache Spark or Hadoop that you have worked with.
Demonstrate Problem-Solving Abilities: Provide examples in your application of how you've driven automation and optimisation of data pipelines. Discuss any challenges you've faced in previous roles and how you overcame them, showcasing your analytical and strategic thinking.
How to prepare for a job interview at FalconSmartIT
✨Showcase Your Technical Expertise
Be prepared to discuss your experience with IBM DataStage, DB2, AWS, and Databricks in detail. Highlight specific projects where you built automated data pipelines and the challenges you overcame. This will demonstrate your hands-on knowledge and problem-solving skills.
✨Demonstrate Leadership Skills
As a Big Data Lead, you'll need to show that you can lead a team effectively. Share examples of how you've mentored junior engineers or led projects. Discuss your approach to aligning team goals with business objectives and ensuring timely delivery.
✨Understand Data Governance and Best Practices
Familiarise yourself with data governance principles and be ready to discuss how you've implemented best practices in previous roles. Emphasise your commitment to data quality, security, and compliance, as these are crucial for maintaining a world-class data platform.
✨Prepare for Cross-Functional Collaboration
Expect questions about how you work with various stakeholders, including data scientists and finance analysts. Prepare examples of successful collaborations and how you ensured seamless data access and integration across different teams.