This role sits within the Data and AI practice of a leading global IT solutions and managed services provider. The business works with complex enterprise customers and has built a strong reputation for delivering modern data platforms at scale. The team is growing fast, and this hire will play a key role in shaping and supporting large-scale data lakehouse and analytics environments.
Why This Role Stands Out
You will be joining a genuinely high-performing Data and AI practice rather than a side project bolted onto infrastructure. The work is enterprise-grade, technically interesting and varied. You will get exposure to modern data platforms, containerised environments and real-world scale challenges, not proofs of concept that never go anywhere. There is plenty of room to influence design decisions and be the grown-up in the room.
Key Responsibilities
- Deploy, configure and manage Starburst Enterprise and Galaxy along with Dell Data Lakehouse platforms
- Integrate Starburst with multiple data sources to create unified analytics platforms
- Optimise containerised environments for performance and scalability using Kubernetes or OpenShift
- Configure data catalogues, security controls and compliance frameworks
- Troubleshoot complex issues, carry out root cause analysis and lead incident resolution
- Automate server administration and monitoring using tools such as Ansible
- Plan and execute disaster recovery testing and produce clear documentation and training materials
Ideal Experience
- Strong hands-on experience administering Trino or Starburst in production environments
- Solid understanding of distributed systems, scalability and high availability
- Experience working with Hadoop, Hive, Spark and at least one major cloud platform such as Azure, AWS or GCP
- Comfortable in Linux or Unix environments with strong container orchestration knowledge
- Good understanding of authentication and security including LDAP, Active Directory, OAuth2 and Kerberos
- Degree in Computer Science, Data Engineering or equivalent real-world experience
Nice to Have
- Python or Java development experience
- Dell Data Lakehouse exposure
- Understanding of AI and ML data requirements and federated data architectures
- Experience with automation tooling such as Ansible or Terraform
If you like complex data problems, grown-up engineering conversations and working on platforms that actually matter, this one is worth a look.
Contact Details:
Cloud People Recruiting Team