At a Glance
- Tasks: Design and build scalable data pipelines and infrastructure for analytics.
- Company: Dynamic tech company in London with a hybrid work model.
- Benefits: Competitive salary, flexible working, and opportunities for professional growth.
- Why this job: Join a team that powers business insights through high-quality data.
- Qualifications: 5+ years as a Data Engineer with strong Python and SQL skills.
- Other info: Collaborative environment with a focus on continuous improvement and innovation.
The predicted salary is between £36,000 and £60,000 per year.
As a Data Engineer, you'll design, build, and operate scalable, reliable data pipelines and data infrastructure. Your work will ensure high-quality data is accessible, trusted, and ready for analytics and data science, powering business insights and decision-making across the company.
What you'll do:
- Build and maintain data pipelines for ingestion, transformation, and export across multiple sources and destinations.
- Develop and evolve scalable data architecture to meet business and performance requirements.
- Partner with analysts and data scientists to deliver curated, analysis-ready datasets and enable self-service analytics.
- Implement best practices for data quality, testing, monitoring, lineage, and reliability.
- Optimise workflows for performance, cost, and scalability (e.g., tuning Spark jobs, query optimisation, partitioning strategies).
- Ensure secure data handling and compliance with relevant data protection standards and internal policies.
- Contribute to documentation, standards, and continuous improvement of the data platform and engineering processes.
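As an illustrative sketch only (not part of the role description), the partitioned, incremental pipeline work described above might look something like this in Python; the table layout, column names, and currency handling are all invented for the example:

```python
from datetime import date

# Hypothetical incremental transformation step: process only one day's
# partition of raw events, the kind of partitioning strategy referred to
# in the responsibilities above. All field names are invented.

def transform_partition(raw_events, partition_date):
    """Filter raw events to one date partition and produce curated rows."""
    curated = []
    for event in raw_events:
        if event["event_date"] != partition_date:
            continue  # skip rows outside the target partition
        curated.append({
            "user_id": event["user_id"],
            "event_date": event["event_date"],
            "revenue_gbp": round(event["revenue_pence"] / 100, 2),
        })
    return curated

raw = [
    {"user_id": 1, "event_date": date(2024, 5, 1), "revenue_pence": 1250},
    {"user_id": 2, "event_date": date(2024, 5, 2), "revenue_pence": 300},
]
print(transform_partition(raw, date(2024, 5, 1)))
```

In a production setting the same shape of logic would typically run inside an orchestrated job (e.g., an Airflow task) against a partitioned warehouse table rather than in-memory lists.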
What makes you a great fit:
- 5+ years of experience as a Data Engineer, building and maintaining production-grade pipelines and datasets.
- Strong Python and SQL skills with a solid understanding of data structures, performance, and optimisation strategies.
- Familiarity with GCP and its ecosystem: BigQuery, Composer, Dataproc, Cloud Run, and Dataplex.
- Hands-on experience with orchestration tools (e.g., Airflow, Dagster, Databricks Workflows) and distributed processing in a cloud environment.
- Experience with analytical data modelling (star and snowflake schemas), data warehousing, ETL/ELT patterns, and dimensional concepts.
- Experience with data governance concepts: access control, retention, data classification, auditability, and compliance standards.
- Familiarity with CI/CD for data pipelines, IaC (Terraform), and/or DataOps practices.
- Experience building observability for data systems (metrics, alerting, data quality checks, incident response).
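To illustrate the last point, here is a minimal, framework-free sketch of the kind of data quality check a pipeline might run before loading a batch; the required columns and rules are assumptions made up for this example:

```python
# Hypothetical data quality check: validate a batch of rows before loading,
# collecting all failures rather than raising on the first one, so they can
# feed metrics and alerting. Column names and rules are invented.

def run_quality_checks(rows, required_columns=("user_id", "event_date")):
    """Return a list of human-readable failures; an empty list means the batch passes."""
    failures = []
    if not rows:
        failures.append("batch is empty")
        return failures
    for i, row in enumerate(rows):
        for col in required_columns:
            if row.get(col) is None:
                failures.append(f"row {i}: missing value for '{col}'")
    return failures

good = [{"user_id": 1, "event_date": "2024-05-01"}]
bad = [{"user_id": None, "event_date": "2024-05-01"}]
print(run_quality_checks(good))  # → []
print(run_quality_checks(bad))   # → ["row 0: missing value for 'user_id'"]
```

In practice checks like this are often expressed declaratively (e.g., as dbt tests or Dataplex data quality rules), but the underlying pattern of validate, record, and alert is the same.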
Data Engineer (GCP) employer: Xcede
Contact Detail:
Xcede Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer (GCP) role
✨Tip Number 1
Network like a pro! Reach out to your connections in the data engineering field, especially those who work with GCP. A friendly chat can lead to insider info about job openings or even referrals.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your best data pipelines and projects. Use platforms like GitHub to share your code and demonstrate your expertise in Python, SQL, and GCP tools.
✨Tip Number 3
Prepare for interviews by brushing up on common data engineering scenarios. Be ready to discuss how you've optimised workflows or ensured data quality in past projects. Practice makes perfect!
✨Tip Number 4
Don’t forget to apply through our website! We love seeing applications directly from candidates who are passionate about data engineering. It shows initiative and helps us get to know you better.
We think you need these skills to ace the Data Engineer (GCP) role
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Data Engineer role. Highlight your experience with GCP, Python, and SQL, and don’t forget to mention any relevant projects or achievements that showcase your skills in building data pipelines.
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re passionate about data engineering and how your background aligns with our needs. Be specific about your experience with data architecture and analytics.
Showcase Your Technical Skills: We want to see your technical prowess! Include examples of your work with tools like BigQuery, Airflow, and Terraform. If you've optimised workflows or implemented data governance practices, make sure to mention those too!
Apply Through Our Website: Don’t forget to apply through our website! It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it gives you a chance to explore more about what we do at StudySmarter.
How to prepare for a job interview at Xcede
✨Know Your Data Tools
Make sure you brush up on your knowledge of GCP and its ecosystem, especially BigQuery, Composer, and Dataproc. Be ready to discuss how you've used these tools in past projects, as this will show your hands-on experience and familiarity with the technologies they'll expect you to work with.
✨Showcase Your Pipeline Skills
Prepare to talk about specific data pipelines you've built or maintained. Highlight your experience with ingestion, transformation, and export processes, and be ready to explain the challenges you faced and how you optimised workflows for performance and cost.
✨Understand Data Governance
Familiarise yourself with data governance concepts like access control and compliance standards. Be prepared to discuss how you've implemented best practices for data quality and security in your previous roles, as this is crucial for ensuring trusted data handling.
✨Demonstrate Collaboration
Since you'll be partnering with analysts and data scientists, think of examples where you've successfully collaborated with cross-functional teams. Discuss how you delivered analysis-ready datasets and enabled self-service analytics, showcasing your ability to communicate effectively and work well with others.