At a Glance
- Tasks: Design and maintain cloud infrastructure, automate workflows, and support data management.
- Company: Join a pioneering biotech firm transforming disease treatment with innovative technology.
- Benefits: Enjoy flexible working, 25 days holiday, and two office shut-downs each year.
- Why this job: Make a real impact in healthcare by advancing groundbreaking discoveries in a fast-growing company.
- Qualifications: Experience in DevOps, AWS, and data management within biotech or life sciences.
- Other info: Collaborate with scientists and bioinformaticians in a dynamic, growth-focused environment.
The predicted salary is between £60,000 and £80,000 per year.
Many cancers and other diseases are caused, or resist treatment, because T cells can't recognise or target cells correctly. We're progressing first-in-class antigen modulators through the clinic, developed to treat disease by controlling T cell activation. Our technology modulates antigen presentation, flicking a switch inside cells to alter their appearance to the immune system. This approach marks a fundamental shift in how we treat people living with autoimmune disorders, cancers and infectious diseases.
We are recruiting a Data and DevOps Engineer to join our IT team. This role will operate at the intersection of IT and business teams (including Bioinformatics), driving data management, engineering, software development and infrastructure best practices. The role will play a fundamental part in shaping Greywolf's data strategy, including the design, delivery, and maintenance of data infrastructure to support data integration, analytics, and governance across the organisation. You will help Greywolf evolve, ready for the next phase of growth, enabling the production of robust, reproducible and version-controlled bioinformatics workflows.
Core Responsibilities
- Design, build and maintain cloud architecture, infrastructure and services within AWS, following best practices for security, reliability and scalability.
- Collaborate closely with bioinformaticians and computational biologists to translate scientific workflows into robust, automated CI/CD pipelines, utilising orchestration frameworks (e.g. Nextflow) where appropriate.
- Implement containerisation (e.g. Docker) to improve the reproducibility and traceability of bioinformatics workflows.
- Support infrastructure-as-code (e.g. CloudFormation) to ensure environments are versioned and auditable.
- Architect strategic and scalable data infrastructure, models and ingestion pipelines to support structured and unstructured scientific and business data.
- Deploy and integrate analytics and reporting platforms such as Microsoft Fabric, Power BI and Spotfire.
- Improve data provenance, lineage, and auditability to support scientific integrity and regulatory expectations.
- Act as a technical bridge between engineering and science, ensuring solutions are fit for purpose and user-friendly.
- Produce clear, precise and accurate technical documentation.
- Manage and oversee third parties responsible for AWS monitoring, maintenance and delivery where external capacity or expertise is necessary.
- Operate with a continuous improvement mindset to identify and remediate issues proactively.
Skills, Knowledge, Qualifications and Experience
- Proven experience as a DevOps Engineer, Platform Engineer, or similar role within a Biotechnology, Life Sciences or Pharma organisation.
- Demonstrated expertise with AWS, CloudFormation and Docker.
- Hands-on proficiency administering Linux-based systems.
- Experience building and maintaining CI/CD pipelines.
- Experience documenting master data schemas and with master data management.
- Experience designing and developing data pipelines and ETL procedures.
- Ability to work collaboratively with stakeholders, particularly bioinformaticians and scientists.
If you are passionate about data and DevOps and want to work in a fast-growing and exciting company, we invite you to apply and contribute to our mission of advancing groundbreaking discoveries.
What Sets You Apart
- Strategic experience in data governance, lineage and reproducibility in regulated or research-driven environments.
- Exposure to Microsoft Fabric, Azure data services, or modern BI platforms.
- Experience implementing AI services such as Amazon Bedrock.
- DevOps information security experience and certifications.
- Experience integrating systems using APIs.
Perks and Benefits
- Two office shut-down holiday periods, in July and December, in addition to 25 days' annual holiday.
- Flexible, hybrid working (you should be able to attend our office in Milton Park, Oxfordshire 1-2 times per week and travel to partner sites, board and other meetings, as required).
Data and DevOps Engineer (Biotech) in Oxford — Employer: Greywolf Therapeutics
Contact: Greywolf Therapeutics Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data and DevOps Engineer (Biotech) role in Oxford
✨Tip Number 1
Network like a pro! Reach out to folks in the biotech and DevOps space on LinkedIn or at industry events. A friendly chat can open doors that a CV just can't.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those involving AWS, Docker, or CI/CD pipelines. This gives potential employers a taste of what you can do.
✨Tip Number 3
Tailor your approach! When you find a role that excites you, customise your pitch to highlight how your experience aligns with their needs. Make it personal and relevant!
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love hearing from passionate candidates like you!
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Data and DevOps Engineer role. Highlight your experience with AWS, Docker, and CI/CD pipelines, as these are key for us. Use specific examples that showcase your skills in data management and collaboration with bioinformaticians.
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Share your passion for biotech and how your background aligns with our mission at Greywolf. Don’t forget to mention any relevant projects or experiences that demonstrate your ability to bridge engineering and science.
Showcase Your Technical Skills: In your application, be sure to highlight your technical skills clearly. Mention your proficiency with Linux systems, CloudFormation, and any experience with data governance. We want to see how you can contribute to our data strategy!
Apply Through Our Website: We encourage you to apply through our website for a smoother process. It helps us keep track of applications and ensures you’re considered for the role. Plus, it’s super easy to do!
How to prepare for a job interview at Greywolf Therapeutics
✨Know Your Tech Inside Out
Make sure you’re well-versed in AWS, Docker, and CI/CD pipelines. Brush up on your knowledge of cloud architecture and data governance, as these are crucial for the role. Be ready to discuss specific projects where you've implemented these technologies.
✨Showcase Your Collaboration Skills
This role requires working closely with bioinformaticians and scientists, so be prepared to share examples of how you've successfully collaborated in the past. Highlight any experiences where you acted as a bridge between technical and non-technical teams.
✨Prepare for Technical Questions
Expect to face technical questions that test your problem-solving skills and understanding of data infrastructure. Practice explaining complex concepts in simple terms, as you may need to demonstrate your ability to communicate effectively with diverse stakeholders.
✨Demonstrate a Continuous Improvement Mindset
Be ready to discuss how you've identified and remediated issues proactively in previous roles. Share examples of how you’ve contributed to process improvements or optimised workflows, especially in regulated environments.