At a Glance
- Tasks: Architect and maintain scalable integration pipelines for Graph-Powered Data Platforms.
- Company: Harvey Nash, a leader in tech recruitment with a focus on innovation.
- Benefits: Competitive daily rate, mostly remote work, and opportunities for travel.
- Why this job: Join a dynamic team and tackle exciting challenges in graph data engineering.
- Qualifications: Expertise in graph ingestion, Docker, Kubernetes, and Terraform required.
- Other info: 6-month contract with excellent potential for career advancement.
The predicted salary is between £51,000 and £85,000 per year.
Harvey Nash is seeking a Senior Graph Data Engineer to architect and maintain robust, scalable integration pipelines that populate Graph-Powered Data Platforms.
Inside IR35. Daily rate of around £4253 – 6 months in length. Mostly remote with some travel to London.
What you will be doing:
- Build Scalable Pipelines: Design and implement high-performance integration pipelines for both bulk loading and real-time streaming (using Kafka or Kinesis) into graph databases such as Neo4j.
- Pipeline Performance Tuning.
- Container Orchestration.
- Infrastructure Automation.
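The bulk-loading pattern described above is commonly implemented by grouping records into parameterised `UNWIND` batches, so each transaction carries many rows and per-transaction overhead stays low. A minimal sketch, assuming a session object shaped like the official Neo4j Python driver's (the Cypher query, labels, and batch size are illustrative, not taken from the posting):

```python
from itertools import islice

# Illustrative Cypher: one UNWIND per transaction amortises transaction
# overhead, and MERGE keeps repeated loads idempotent.
CYPHER = (
    "UNWIND $rows AS row "
    "MERGE (p:Person {id: row.id}) "
    "SET p.name = row.name"
)

def batched(records, batch_size=1000):
    """Yield lists of up to batch_size records, one list per transaction."""
    it = iter(records)
    while True:
        chunk = list(islice(it, batch_size))
        if not chunk:
            return
        yield chunk

def load(session, records, batch_size=1000):
    """Bulk-load records via batched UNWIND statements.

    `session` is assumed to expose run(query, **parameters) like the
    neo4j driver's Session; any object with that shape works here.
    Returns the number of records submitted.
    """
    total = 0
    for rows in batched(records, batch_size):
        session.run(CYPHER, rows=rows)
        total += len(rows)
    return total
```

Tuning `batch_size` is one of the main levers for the "Pipeline Performance Tuning" responsibility: too small and transaction overhead dominates; too large and heap pressure and lock contention grow.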
What skills and experience do I need:
- Graph Ingestion Expertise: Proven experience building and scaling integration layers specifically for graph databases, including an understanding of graph-specific scaling challenges such as index management and transaction overhead.
- Container Orchestration: Expert-level knowledge of Docker and Kubernetes, including experience with Helm charts, Operators, and persistent storage management for stateful workloads.
- Infrastructure as Code: Proficiency in Terraform for automating complex, multi-environment cloud infrastructure.
- Monitoring and Observability: Hands-on experience with Prometheus and Grafana to monitor pipeline health and database metrics such as heap usage and query latency.
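Multi-environment Terraform of the kind described above is typically organised as a shared module instantiated once per environment with different variables. A hedged sketch of that layout (the module path, variable names, and region are invented for illustration, not taken from the posting):

```hcl
# environments/dev/main.tf -- illustrative layout only
module "graph_platform" {
  source = "../../modules/graph-platform" # hypothetical shared module

  environment   = "dev"
  region        = "eu-west-2"
  node_count    = 1     # a single database node is often enough in dev
  storage_class = "gp3" # persistent volumes for the stateful workload
}
```

A production environment would instantiate the same module with its own variables (larger `node_count`, production region), keeping the environments consistent by construction.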
Please submit your CV for consideration.
Senior Graph Data Engineer employer: Harvey Nash
Contact Detail:
Harvey Nash Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Graph Data Engineer role
✨Tip Number 1
Network like a pro! Reach out to your connections in the industry, attend meetups, and engage in online forums. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those involving graph databases and integration pipelines. This will give potential employers a taste of what you can do beyond just your CV.
✨Tip Number 3
Prepare for interviews by brushing up on common questions related to graph data engineering. Practice explaining your past experiences with Docker, Kubernetes, and Terraform, as these are key skills for the role.
✨Tip Number 4
Don't forget to apply through our website! It's the best way to ensure your application gets seen. Plus, we love seeing candidates who take that extra step to connect with us directly.
We think you need these skills to ace the Senior Graph Data Engineer role
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with graph databases and integration pipelines. We want to see how your skills align with the role, so don't be shy about showcasing your expertise in Docker, Kubernetes, and Terraform!
Showcase Relevant Projects: Include specific projects where you've built scalable pipelines or tackled graph-specific challenges. We love seeing real-world examples of your work, especially if you've used tools like Kafka or Kinesis.
Keep It Clear and Concise: Your application should be easy to read and straight to the point. We appreciate clarity, so avoid jargon unless it's necessary to describe your skills. Remember, we're looking for your best self!
Apply Through Our Website: We encourage you to submit your application through our website. It's the best way for us to keep track of your application and ensure it gets the attention it deserves. Plus, it's super easy!
How to prepare for a job interview at Harvey Nash
✨Know Your Graph Databases
Make sure you brush up on your knowledge of graph databases, especially Neo4j. Be ready to discuss specific challenges you've faced with index management and transaction overhead, as these are key areas for the role.
✨Showcase Your Pipeline Skills
Prepare examples of scalable integration pipelines you've built, particularly using Kafka or Kinesis. Highlight any performance tuning you've done and be ready to explain your thought process behind those decisions.
✨Demonstrate Infrastructure Automation Expertise
Familiarise yourself with Terraform and how you've used it to automate cloud infrastructure. Be prepared to discuss your experience with container orchestration tools like Docker and Kubernetes, including any Helm charts you've worked with.
✨Be Ready for Technical Questions
Expect in-depth technical questions about monitoring tools like Prometheus and Grafana. Have examples ready that showcase how you've monitored pipeline health and optimised database metrics in past projects.
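When discussing monitoring in an interview, it helps to have concrete queries in mind. A Grafana panel or Prometheus alert for the metrics named in this posting usually starts from PromQL along these lines (all metric names below are invented placeholders, not real Neo4j or exporter metrics):

```promql
# 95th-percentile query latency over the last 5 minutes,
# assuming the database or a sidecar exports a latency histogram
histogram_quantile(
  0.95,
  sum by (le) (rate(graph_db_query_duration_seconds_bucket[5m]))
)

# Heap usage as a fraction of the configured maximum
jvm_heap_used_bytes / jvm_heap_max_bytes
```

Being able to explain why a quantile over a histogram is preferable to averaging raw latencies is exactly the kind of depth these technical questions tend to probe.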