At a Glance
- Tasks: Design and build scalable integration pipelines for Graph-Powered Data Platforms.
- Company: Harvey Nash, a leader in tech recruitment with a focus on innovation.
- Benefits: Competitive daily rate, mostly remote work, and opportunities for travel to London.
- Why this job: Join a dynamic team and tackle exciting challenges in graph data engineering.
- Qualifications: Expertise in graph databases, Docker, Kubernetes, and Terraform required.
- Other info: Flexible contract length with potential for career advancement.
The predicted salary is between £51,000 and £85,000 per year.
Harvey Nash is seeking a Senior Graph Data Engineer to architect and maintain robust, scalable integration pipelines that populate Graph-Powered Data Platforms.
Inside IR35. Daily rate of around £425. Contract length of 3–6 months. Mostly remote with some travel to London.
What you will be doing:
- Build Scalable Pipelines: Design and implement high-performance integration pipelines for both bulk-loading and real-time streaming (using Kafka or Kinesis) into graph databases like Neo4j.
- Pipeline Performance Tuning.
- Container Orchestration.
- Infrastructure Automation.
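To illustrate the bulk-loading responsibility above: a common way to cut per-transaction overhead when ingesting into Neo4j is to group records into batches and write each batch with a single parameterised `UNWIND` statement. The sketch below is illustrative only; the record shape, label, and property names are hypothetical, and the `session` is assumed to behave like the official `neo4j` Python driver's session (`execute_write`).

```python
from itertools import islice
from typing import Iterable, Iterator

# Hypothetical record shape: dicts like {"id": "u1", "name": "Ada"}.
# One parameterised UNWIND per batch keeps transaction overhead low
# compared with one transaction per record.
UNWIND_QUERY = (
    "UNWIND $rows AS row "
    "MERGE (p:Person {id: row.id}) "
    "SET p.name = row.name"
)

def batched(records: Iterable[dict], size: int) -> Iterator[list]:
    """Yield records in fixed-size batches (last batch may be smaller)."""
    it = iter(records)
    while batch := list(islice(it, size)):
        yield batch

def ingest(session, records: Iterable[dict], batch_size: int = 1000) -> int:
    """Write records in batches; `session` is assumed to expose an
    `execute_write`-style API like the official neo4j driver."""
    written = 0
    for batch in batched(records, batch_size):
        session.execute_write(
            lambda tx, rows=batch: tx.run(UNWIND_QUERY, rows=rows)
        )
        written += len(batch)
    return written
```

Tuning `batch_size` trades memory and transaction-log pressure against round-trip count, which is exactly the kind of performance-tuning decision the role involves.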
What skills and experience do I need:
- Graph Ingestion Expertise: Proven experience building and scaling integration layers specifically for Graph Databases. Understanding of graph-specific scaling challenges like index management and transaction overhead.
- Expert-level knowledge of Docker and Kubernetes, including experience with Helm charts, Operators, and persistent storage management for stateful workloads.
- Infrastructure as Code: Proficiency in Terraform for automating complex, multi-environment cloud infrastructure.
- Hands-on experience with Prometheus and Grafana to monitor pipeline health and database metrics such as heap usage and query latency.
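As a rough illustration of the Terraform requirement: multi-environment setups are often handled by parameterising a shared module per environment. The module path, variable names, and instance sizes below are hypothetical, not taken from the posting.

```hcl
# Hypothetical sketch: one shared module, parameterised per environment.
variable "environment" {
  type = string
}

module "graph_cluster" {
  source = "./modules/graph-cluster"

  environment   = var.environment
  node_count    = var.environment == "prod" ? 3 : 1
  instance_type = var.environment == "prod" ? "r6i.2xlarge" : "t3.large"
}
```

In practice each environment would get its own workspace or state backend, with values supplied via `-var` or per-environment `.tfvars` files.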
Please submit your CV for consideration.
Graph Data Engineer employer: Harvey Nash Group
Contact Detail:
Harvey Nash Group Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Graph Data Engineer role
✨ Tip Number 1
Network like a pro! Reach out to your connections in the industry, especially those who work with graph databases. A friendly chat can lead to insider info about job openings that aren't even advertised yet.
✨ Tip Number 2
Show off your skills! Create a portfolio showcasing your projects related to graph data engineering. Whether it's a GitHub repo or a personal website, let your work speak for itself and impress potential employers.
✨ Tip Number 3
Prepare for interviews by brushing up on common graph database challenges. Be ready to discuss your experience with scaling integration layers and performance tuning. We want you to shine when it comes to technical questions!
✨ Tip Number 4
Don't forget to apply through our website! It's the best way to ensure your application gets seen. Plus, we love seeing candidates who take the initiative to connect directly with us.
We think you need these skills to ace the Graph Data Engineer role
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with graph databases and integration pipelines. We want to see how your skills match the job description, so don't be shy about showcasing your expertise in Docker, Kubernetes, and Terraform!
Showcase Relevant Projects: Include specific projects where you've built scalable pipelines or tackled graph-specific challenges. We love seeing real-world examples of your work, especially if you've used tools like Kafka or Kinesis!
Keep It Clear and Concise: When writing your application, clarity is key! Use straightforward language and bullet points to make it easy for us to see your qualifications at a glance. Remember, we're looking for someone who can communicate effectively.
Apply Through Our Website: We encourage you to submit your application through our website. It's the best way for us to keep track of your application and ensure it gets the attention it deserves. Plus, it's super easy!
How to prepare for a job interview at Harvey Nash Group
✨ Know Your Graph Databases
Make sure you brush up on your knowledge of graph databases, especially Neo4j. Be ready to discuss specific challenges you've faced with index management and transaction overhead, as these are key areas for the role.
✨ Showcase Your Pipeline Skills
Prepare examples of scalable integration pipelines you've built, particularly using Kafka or Kinesis. Highlight any performance tuning you've done and be ready to explain your thought process behind those decisions.
✨ Demonstrate Infrastructure Automation Expertise
Familiarise yourself with Terraform and how you've used it to automate cloud infrastructure. Be prepared to discuss your experience with Docker and Kubernetes, including any Helm charts or Operators you've worked with.
✨ Be Ready for Technical Questions
Expect technical questions that test your understanding of container orchestration and monitoring tools like Prometheus and Grafana. Think of scenarios where you've monitored pipeline health and how you addressed any issues that arose.