At a Glance
- Tasks: Design and build scalable data architecture on GCP, optimising data pipelines and ensuring data quality.
- Company: Join Kitman Labs, a leading innovator in sports performance analytics, recognised by Fast Company.
- Benefits: Enjoy flexible work options, a collaborative culture, and the chance to impact top athletes' performance.
- Why this job: Be at the forefront of data engineering, tackling complex challenges in a dynamic, high-impact environment.
- Qualifications: Proven experience in data engineering, expertise in GCP, and strong SQL skills are essential.
- Other info: Opportunity for mentorship and technical leadership within a passionate team of industry experts.
The predicted salary is between £43,200 and £72,000 per year.
Kitman Labs is a global human performance company, disrupting and transforming the way the sports industry uses data to increase the performance of the world’s top athletes.
Driven by a passion to innovate in the areas of sports performance, analytics and user experience, we have assembled a team of the industry’s top data scientists, sports performance scientists, product specialists and engineers. The company was recognised by Fast Company in 2019 as one of the most innovative companies in the world.
Kitman Labs’ advanced Outcome-driven Analytics and Performance Intelligence Platform are used by over 700 teams in 50 leagues on 6 continents spanning soccer, rugby, American football, baseball and ice hockey.
The Role
We are seeking an experienced and highly skilled Senior Data Engineer to play a pivotal role in the evolution of our analytics platform. This mission-critical project involves augmenting our in-house platform with cutting-edge data engineering technologies on Google Cloud Platform (GCP) to achieve new levels of scale and performance, complemented by Looker for best-in-class visualisation and analysis.
You will be central to this transformation, working within the team to architect and build the data foundation for our next generation of analytics. This position is ideal for an engineer who thrives on complex data challenges, including designing robust data models, implementing near real-time data replication using Change Data Capture (CDC), and building highly performant and scalable data transformation pipelines that handle complex business calculations across large datasets (over 300 million data points per customer).
As a senior team member, you will drive data architecture and best practices, ensuring our new platform is performant, reliable, and capable of delivering the dynamic, insightful reporting our clients depend on.
What you’ll be responsible for
- Driving Data Architecture: Design and build a scalable, end-to-end data architecture on GCP. This includes creating robust and efficient data models in our data warehouse, defining data flows, and ensuring the infrastructure is optimised for high-volume, near real-time data processing.
- Building & Optimising Data Pipelines: Develop, deploy, and manage resilient data pipelines for large-scale data ingestion and transformation. You will be hands-on with GCP Datastream to implement CDC and orchestrate complex SQL-based transformation workflows with Dataform (see the sketch after this list for a flavour of this work).
- Solving Complex Data Challenges: Tackle and resolve complex performance bottlenecks across the entire data stack. This involves optimising intricate calculations, tuning database performance, and ensuring the efficiency of our data models to support low-latency queries from Looker.
- Upholding Data Quality & Integrity: Champion and implement best practices for data quality, testing, and governance. You will establish robust data validation checks and build out CI/CD pipelines for all data processes to ensure the accuracy and reliability of our reporting.
- Technical Leadership & Mentoring: Provide technical guidance and mentorship to other engineers on data engineering best practices. You will lead technical decisions, evaluate trade-offs, and foster a culture of data excellence within the squad.
- Stakeholder Collaboration: Work in close partnership with product managers and front-end engineers to deeply understand user requirements and translate them into effective data solutions that power our embedded analytics features.
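To give a flavour of the Datastream/Dataform work mentioned above, here is a minimal, illustrative sketch of applying CDC change records from a staging table to a BigQuery reporting table. All dataset, table, and column names (analytics.athlete_metrics, staging.athlete_metrics_cdc, change_type, change_timestamp) are hypothetical examples, not Kitman Labs' actual schema.

```python
# Illustrative sketch only: applying Datastream-style CDC change records to a
# BigQuery reporting table with a MERGE. All names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

MERGE_SQL = """
MERGE `analytics.athlete_metrics` AS target
USING (
  -- Keep only the latest change record per key from the CDC staging table.
  SELECT * EXCEPT (row_num)
  FROM (
    SELECT *,
           ROW_NUMBER() OVER (
             PARTITION BY athlete_id, metric_date
             ORDER BY change_timestamp DESC
           ) AS row_num
    FROM `staging.athlete_metrics_cdc`
  )
  WHERE row_num = 1
) AS source
ON target.athlete_id = source.athlete_id
   AND target.metric_date = source.metric_date
WHEN MATCHED AND source.change_type = 'DELETE' THEN
  DELETE
WHEN MATCHED THEN
  UPDATE SET metric_value = source.metric_value
WHEN NOT MATCHED AND source.change_type != 'DELETE' THEN
  INSERT (athlete_id, metric_date, metric_value)
  VALUES (source.athlete_id, source.metric_date, source.metric_value)
"""

# Run the merge and wait for it to finish.
client.query(MERGE_SQL).result()
```

In practice, a transformation like this would more likely live in a Dataform SQLX file and run on a schedule; the Python wrapper is only there to keep the example self-contained.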
Experience and skills we look for
- Proven Experience in Data Engineering: A strong track record of designing, building, and optimising data-intensive systems and large-scale ETL/ELT pipelines.
- Expertise in the Modern Data Stack: Deep, hands-on experience with cloud-based data platforms, with a strong preference for Google Cloud Platform (GCP). AWS knowledge is a plus, but not essential.
- Specialised GCP Skillset: Demonstrable, practical experience using GCP Datastream (or similar technology) for Change Data Capture (CDC) and Dataform (or similar tools) for developing and managing data transformations. Proficiency with BigQuery is essential.
- Strong Data Modelling Skills: Extensive experience designing and implementing data models (e.g. dimensional modelling, data vault) optimised for analytical workloads and BI tools.
- Advanced SQL & Programming: Expertise in advanced SQL for complex data manipulation and analysis, coupled with proficiency in a programming language like Python for automation and scripting.
- Performance Tuning & Optimisation: A proven ability to diagnose and resolve performance issues within data pipelines and databases. You understand query optimisation, indexing, and partitioning strategies (see the sketch after this list for the kind of table design this involves).
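As an illustration of the modelling and partitioning skills above, here is a minimal sketch of a partitioned, clustered BigQuery fact table of the kind that keeps analytical queries fast at hundreds of millions of rows. The table and column names (analytics.fact_training_load, athlete_key, squad_key, session_date, training_load) are hypothetical, not the company's schema.

```python
# Illustrative sketch only: a partitioned and clustered fact table in BigQuery,
# the kind of dimensional-model design that keeps BI queries fast at scale.
from google.cloud import bigquery

client = bigquery.Client()

DDL = """
CREATE TABLE IF NOT EXISTS `analytics.fact_training_load` (
  athlete_key   INT64,
  squad_key     INT64,
  session_date  DATE,
  training_load FLOAT64
)
PARTITION BY session_date          -- prune scans to the dates a query needs
CLUSTER BY squad_key, athlete_key  -- co-locate rows for the most common filters
"""

client.query(DDL).result()
```

BigQuery has no traditional indexes, so partition pruning and clustering are the main levers for keeping dashboard queries over large histories responsive.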
Additional Skills that set you apart
- BI & Data Visualisation: Experience working with modern business intelligence tools, with specific experience using or building solutions for Looker.
- Complex Calculations: Experience in environments that require translating complex business logic or financial calculations into accurate and performant SQL.
- Secure Cloud Environments: Experience working with data services in highly secure or compliant environments is a plus.
- CI/CD for Data: A solid understanding of CI/CD principles and tools (e.g., Git, Jenkins, GitLab CI) applied to data pipelines and infrastructure-as-code (Terraform familiarity a plus). A small example of the kind of check such a pipeline might run follows this list.
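To make the "CI/CD for Data" point concrete, below is a minimal, hypothetical data-quality check of the sort that could run as a CI step after a transformation job, failing the build if basic invariants are violated. It reuses the illustrative fact table from the earlier sketch; none of the names are real.

```python
# Illustrative sketch only: a data-quality gate suitable for a CI step.
# Table and column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

CHECK_SQL = """
SELECT
  COUNTIF(athlete_key IS NULL) AS null_keys,
  COUNT(*) - COUNT(DISTINCT CONCAT(CAST(athlete_key AS STRING), '|',
                                   CAST(session_date AS STRING))) AS duplicate_rows
FROM `analytics.fact_training_load`
"""

# Fetch the single summary row and fail loudly if an invariant is broken.
row = next(iter(client.query(CHECK_SQL).result()))
assert row.null_keys == 0, f"{row.null_keys} rows are missing athlete_key"
assert row.duplicate_rows == 0, f"{row.duplicate_rows} duplicate grain rows found"
print("Data quality checks passed.")
```

In a real pipeline, checks like these would more commonly be expressed as Dataform assertions or a pytest suite wired into whichever CI tool the team uses.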
Senior Data Engineer (Reporting & Analytics) employer: Houston Texans
Contact Detail: Houston Texans Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Data Engineer (Reporting & Analytics) role
✨Tip Number 1
Familiarise yourself with Google Cloud Platform (GCP) and its data services, especially GCP Datastream and BigQuery. Having hands-on experience with these tools will not only boost your confidence but also demonstrate your capability to handle the specific technologies used at Kitman Labs.
✨Tip Number 2
Engage with the data engineering community online, particularly those focused on sports analytics. Networking with professionals in this niche can provide insights into industry trends and challenges, which you can leverage during interviews to showcase your knowledge and enthusiasm for the role.
✨Tip Number 3
Prepare to discuss complex data challenges you've faced in previous roles. Be ready to explain how you optimised data pipelines or resolved performance issues, as this will highlight your problem-solving skills and technical expertise, which are crucial for the Senior Data Engineer position.
✨Tip Number 4
Research Kitman Labs' current projects and their impact on sports performance analytics. Understanding their mission and recent innovations will allow you to tailor your discussions and show genuine interest in contributing to their goals, making you a more appealing candidate.
We think you need these skills to ace your Senior Data Engineer (Reporting & Analytics) application
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights relevant experience in data engineering, particularly with Google Cloud Platform and data pipeline optimisation. Use specific examples that demonstrate your skills in building scalable data architectures and solving complex data challenges.
Craft a Compelling Cover Letter: In your cover letter, express your passion for sports performance analytics and how your background aligns with Kitman Labs' mission. Mention specific projects where you've successfully implemented Change Data Capture or worked with Looker to showcase your expertise.
Showcase Technical Skills: Clearly outline your technical skills related to the modern data stack, especially your proficiency in GCP, SQL, and data modelling. Provide concrete examples of how you've used these skills to drive data architecture and improve data quality in previous roles.
Highlight Collaboration Experience: Emphasise your experience working with cross-functional teams, such as product managers and front-end engineers. Describe how you translated user requirements into effective data solutions, showcasing your ability to collaborate and communicate effectively.
How to prepare for a job interview at Houston Texans
✨Showcase Your Data Engineering Expertise
Be prepared to discuss your previous experience in designing and optimising data-intensive systems. Highlight specific projects where you built large-scale ETL/ELT pipelines, especially using Google Cloud Platform (GCP). This will demonstrate your hands-on expertise and familiarity with the tools they use.
✨Demonstrate Problem-Solving Skills
Expect to face questions about complex data challenges you've encountered. Prepare examples that showcase your ability to diagnose performance bottlenecks and implement effective solutions. Discuss how you optimised calculations and improved query performance in past roles.
✨Familiarise Yourself with Their Tech Stack
Research Kitman Labs' use of GCP Datastream and Looker. Understanding these tools will allow you to speak confidently about how you can contribute to their analytics platform. If you have experience with similar technologies, be sure to mention it.
✨Prepare for Technical Leadership Questions
As a senior role, you'll likely be asked about your experience mentoring others and leading technical decisions. Think of examples where you've guided teams in best practices for data engineering and how you've fostered a culture of excellence in your previous positions.