At a Glance
- Tasks: Design and maintain cloud infrastructure, automate workflows, and support data management in biotech.
- Company: Join a pioneering biotech firm transforming disease treatment with innovative technology.
- Benefits: Enjoy 25 days holiday, flexible hybrid working, and two office shut-down periods.
- Why this job: Make a real impact on healthcare by advancing groundbreaking discoveries in a fast-growing company.
- Qualifications: Experience in DevOps, AWS, and collaboration with bioinformaticians is essential.
- Other info: Dynamic environment with opportunities for professional growth and continuous improvement.
The predicted salary is between £60,000 and £80,000 per year.
Many cancers and other diseases are caused, or resist treatment, because T cells can't recognise or target cells correctly. We’re progressing first-in-class antigen modulators through the clinic, developed to treat disease by controlling T cell activation. Our technology modulates antigen presentation, flicking a switch inside cells to alter their appearance to the immune system. This approach marks a fundamental shift in how we treat people living with autoimmune disorders, cancers and infectious diseases.
We are recruiting a Data and DevOps Engineer to join our IT team. This role will operate at the intersection of IT and business teams (including Bioinformatics), driving data management, engineering, software development and infrastructure best practices. The role will play a fundamental part in shaping Greywolf’s data strategy, including the design, delivery, and maintenance of data infrastructure to support data integration, analytics, and governance across the organisation. You will help Greywolf evolve, ready for the next phase of growth, enabling the production of robust, reproducible and version-controlled bioinformatics workflows.
Core Responsibilities
- Design, build and maintain cloud architecture, infrastructure and services within AWS, following best practices for security, reliability and scalability.
- Collaborate closely with bioinformaticians and computational biologists to translate scientific workflows into robust, automated CI/CD pipelines, utilising orchestration frameworks (e.g. Nextflow) where appropriate.
- Implement containerisation (e.g. Docker) to improve the reproducibility and traceability of bioinformatics workflows.
- Support infrastructure-as-code (e.g. CloudFormation) to ensure environments are versioned and auditable.
- Architect strategic and scalable data infrastructure, models and ingestion pipelines to support structured and unstructured scientific and business data.
- Deploy and integrate analytics and reporting platforms such as Microsoft Fabric, Power BI and Spotfire.
- Improve data provenance, lineage, and auditability to support scientific integrity and regulatory expectations.
- Act as a technical bridge between engineering and science, ensuring solutions are fit for purpose and user-friendly.
- Produce clear, precise and accurate technical documentation.
- Manage and oversee third parties responsible for AWS monitoring, maintenance and delivery where external capacity or expertise is necessary.
- Operate with a continuous improvement mindset to identify and remediate issues proactively.
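To illustrate the kind of work the data-pipeline and provenance responsibilities above describe, here is a minimal, illustrative sketch of an ingestion step that attaches lineage metadata to transformed data. The field names, transform, and metadata structure are hypothetical examples, not Greywolf's actual schema or tooling.

```python
import csv
import hashlib
import io
from datetime import datetime, timezone

def ingest_samples(raw_csv: str) -> dict:
    """Parse a raw CSV export and attach provenance metadata.

    Returns the transformed rows plus a lineage record (source
    checksum, timestamp, transform name) so downstream consumers
    can audit where the data came from. Purely illustrative.
    """
    # Checksum of the raw input: lets auditors verify the source later.
    checksum = hashlib.sha256(raw_csv.encode()).hexdigest()
    rows = list(csv.DictReader(io.StringIO(raw_csv)))
    # Transform step: normalise sample IDs (hypothetical rule).
    for row in rows:
        row["sample_id"] = row["sample_id"].strip().upper()
    return {
        "data": rows,
        "lineage": {
            "source_sha256": checksum,
            "ingested_at": datetime.now(timezone.utc).isoformat(),
            "transform": "normalise_sample_ids_v1",
        },
    }

raw = "sample_id,assay\n gw-001 ,tcell_activation\n"
result = ingest_samples(raw)
print(result["data"][0]["sample_id"])  # GW-001
print(len(result["lineage"]["source_sha256"]))  # 64
```

In a production setting the lineage record would typically be written to a catalogue or metadata store rather than returned inline, but the principle of pairing every transform with an auditable record is the same.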
Essential Experience
- Proven experience as a DevOps Engineer, Platform Engineer, or similar role within a Biotechnology, Life Sciences or Pharma organisation.
- Demonstrated expertise with AWS, CloudFormation and Docker.
- Proficiency with hands-on administration of Linux-based systems.
- Experience building and maintaining CI/CD pipelines.
- Experience with master data management and documenting master data schemas.
- Experience designing and developing data pipelines and ETL procedures.
- Ability to work collaboratively with stakeholders, particularly bioinformaticians and scientists.
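One lightweight way to document a master data schema, as the requirements above mention, is to express it as a typed record with its constraints in code. The record name, fields, and validation rule below are hypothetical, purely to show the pattern.

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class SampleRecord:
    """Hypothetical master-data schema for a lab sample.

    Field names and constraints are illustrative only; a real
    schema would be agreed with bioinformatics and data
    governance stakeholders.
    """
    sample_id: str
    assay: str
    collected_on: str  # ISO-8601 date

    def __post_init__(self):
        # Example constraint: enforce an agreed ID convention.
        if not self.sample_id.startswith("GW-"):
            raise ValueError("sample_id must use the GW- prefix")

record = SampleRecord("GW-001", "tcell_activation", "2024-05-01")
print(asdict(record)["assay"])  # tcell_activation
```

Because the schema lives in version control alongside the pipelines that use it, changes to master data definitions are reviewed and auditable like any other code change.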
If you are passionate about data and DevOps and want to work in a fast-growing and exciting company, we invite you to apply and contribute to our mission of advancing groundbreaking discoveries.
What Sets You Apart
- Strategic experience in data governance, lineage and reproducibility in regulated or research-driven environments.
- Exposure to Microsoft Fabric, Azure data services, or modern BI platforms.
- Experience implementing AI services such as Amazon Bedrock.
- DevOps information security experience and certifications.
- Experience integrating systems using APIs.
Benefits
- Two office shut-down periods during July and December, in addition to 25 days annual holiday.
- Flexible, hybrid working (you should be able to attend our office in Milton Park, Oxfordshire 1-2 times per week and travel to partner sites, board and other meetings, as required).
Data and DevOps Engineer (Biotech) employer: Greywolf Therapeutics
Contact Detail:
Greywolf Therapeutics Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data and DevOps Engineer (Biotech) role
✨Tip Number 1
Network like a pro! Reach out to folks in the biotech and DevOps space on LinkedIn or at industry events. A friendly chat can open doors that a CV just can't.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those involving AWS, Docker, or CI/CD pipelines. This gives potential employers a taste of what you can do.
✨Tip Number 3
Tailor your approach! When you find a role that excites you, customise your pitch to highlight how your experience aligns with their needs. Make it personal and relevant!
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who are genuinely interested in joining us.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV reflects the skills and experiences that match the Data and DevOps Engineer role. Highlight your experience with AWS, Docker, and CI/CD pipelines, as these are key to what we’re looking for.
Craft a Compelling Cover Letter: Use your cover letter to tell us why you’re passionate about data and DevOps in the biotech field. Share specific examples of how you've contributed to similar projects or roles in the past.
Showcase Your Technical Skills: Don’t just list your technical skills; demonstrate them! Include any relevant projects or achievements that showcase your expertise in cloud architecture, data pipelines, and bioinformatics workflows.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you don’t miss out on any important updates from our team!
How to prepare for a job interview at Greywolf Therapeutics
✨Know Your Tech Inside Out
Make sure you’re well-versed in AWS, CloudFormation, and Docker. Brush up on your knowledge of CI/CD pipelines and Linux systems. Being able to discuss your hands-on experience with these technologies will show that you’re not just familiar but truly capable.
✨Understand the Science
Since this role sits at the intersection of IT and bioinformatics, it’s crucial to have a grasp of the scientific workflows involved. Familiarise yourself with how T cells work and the basics of antigen presentation. This will help you communicate effectively with bioinformaticians and scientists during the interview.
✨Showcase Your Collaboration Skills
Prepare examples of how you’ve worked with cross-functional teams in the past. Highlight any experiences where you acted as a bridge between technical and non-technical stakeholders. This will demonstrate your ability to ensure solutions are user-friendly and fit for purpose.
✨Be Ready to Discuss Continuous Improvement
Think of instances where you identified issues and proactively implemented solutions. This mindset is key for the role, so be prepared to share specific examples of how you’ve improved processes or infrastructure in previous positions.