At a Glance
- Tasks: Join our team to build and manage automated data pipelines for biomedical research.
- Company: Causaly is an innovative AI platform transforming scientific research for biopharma companies.
- Benefits: Enjoy competitive pay, private medical insurance, 25 days holiday, and your birthday off!
- Why this job: Be part of a mission-driven team making a real impact on global health challenges.
- Qualifications: Experience in SQL, Python, and cloud computing; problem-solving mindset required.
- Other info: We value diversity and welcome applicants from all backgrounds.
The predicted salary is between £36,000 and £60,000 per year.
Founded in 2018, Causaly’s AI platform for enterprise-scale scientific research redefines the limits of human productivity. We give humans a powerful new way to find, visualize and interpret biomedical knowledge and automate critical research workflows, accelerating solutions for some of the greatest health challenges we face. We work with some of the world's largest biopharma companies and institutions on use cases spanning Drug Discovery, Safety and Competitive Intelligence.
We are looking for talented Data Engineers with a passion for DataOps and a demonstrable background in SQL and Python-based automation. You will join our Data & Semantic Technologies team, which is responsible for delivering the scalable, highly flexible data fabric that underpins Causaly’s product suite. The team enables new product development and AI innovation that create real business value.
You will unleash the value of data for our customers by building and operating automated data pipelines, feeding our constantly growing data warehouse and knowledge graph, and evolving our data architectures. We are a multi-disciplinary team working in a fast-paced, collaborative environment, and we value honest opinions and open debate. You have a problem-solving mindset and a hands-on attitude: you are keen to design and build innovative solutions that leverage the value of data, you are passionate and creative in your work, you love sharing ideas with your team, and you can pick the right tool for the job.
What you can expect to work on:
- Gather and understand data based on business requirements
- Regularly import and transform big data (millions of records) from various formats (e.g. CSV, SQL, JSON) into data stores like BigQuery and Neo4j (a sketch of this kind of load step follows this list)
- Process data further using SQL and/or Python, e.g., to sanitise fields, aggregate records, combine with external data sources
- Work with other engineers on highly performant data pipelines and efficient data operations, adhering to the industry’s best practices and technologies for scalability, fault tolerance and reliability
- Export data in well-defined target formats and schemata, validate output quality, and produce corresponding reports and dashboards
- Manage and improve (legacy) data pipelines in the cloud, and enable other engineers to run them efficiently
- Innovate on our data warehouse architecture and usage
- Work directly with a multitude of technical, product and business stakeholders
- Mentor and guide junior members, shape our technology strategy and innovate on our data backbone
- Collaborate with the DevOps team to help manage our infrastructure
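To give a flavour of the day-to-day work, here is a minimal sketch of one such pipeline step: sanitising a CSV file and loading it into BigQuery. It assumes the google-cloud-bigquery client library and application-default credentials; the project, dataset, and table names are hypothetical, and a real pipeline would add schema management, logging, and error handling.

```python
# Minimal sketch: sanitise a CSV and load it into BigQuery.
# The target table below is hypothetical.
import csv
import io

from google.cloud import bigquery

TABLE_ID = "my-project.biomedical.articles"  # hypothetical project.dataset.table


def sanitise_rows(path: str) -> io.BytesIO:
    """Strip whitespace from fields and drop rows with a missing first column."""
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        writer.writerow(next(reader))  # keep the header row as-is
        for row in reader:
            cleaned = [field.strip() for field in row]
            if cleaned and cleaned[0]:  # require a non-empty key column
                writer.writerow(cleaned)
    return io.BytesIO(buffer.getvalue().encode("utf-8"))


def load_csv(path: str) -> None:
    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,  # the header row written above
        autodetect=True,      # infer the schema from the data
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )
    job = client.load_table_from_file(
        sanitise_rows(path), TABLE_ID, job_config=job_config
    )
    job.result()  # block until the load job finishes or raises
    print(f"Loaded {client.get_table(TABLE_ID).num_rows} rows into {TABLE_ID}")


if __name__ == "__main__":
    load_csv("articles.csv")
```

Streaming the sanitised rows through an in-memory buffer keeps the example self-contained; at the scale described above (millions of records), you would typically stage files in Cloud Storage and load from there instead.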
What we're looking for:
- Significant industry experience with SQL, automation, ETL, and Linux
- Proven database skills and experience with traditional RDBMS like MySQL as well as modern systems like BigQuery
- Experience with data versioning, backup, and recovery strategies
- Solid understanding of modern software-development practices (testing, version control, documentation, etc.) and hands-on coding experience in Python
- Experience with cloud computing providers like GCP/AWS
- Strong engineering background enabling rapid progression from ideation to proof-of-concept
- A product and user-centric mindset
- Excellent problem-solving, ownership, and organisational skills, with high attention to detail and quality
Preferred Qualifications:
- Experience with additional data storage and retrieval technologies, such as Elasticsearch, data warehouses, NoSQL databases, Neo4j
- Command-line and Linux scripting skills in production environments
- Experience using DevOps tools and practices to build and deploy software
- Knowledge of Terraform, Kubernetes, and/or Docker containers
- Programming skills and experience in other languages, such as Node.js
Benefits:
- Competitive compensation package
- Private medical insurance
- Life insurance (4 x salary)
- Personal development budget
- Individual wellbeing budget
- 25 days holiday plus bank holidays
- Your birthday off!
- Potential to have real impact and accelerated career growth as a member of an international team that's building a transformative AI product.
We are on a mission to accelerate scientific breakthroughs for ALL humankind, and we are proud to be an equal opportunity employer. We welcome applications from all backgrounds and fairly consider qualified candidates without regard to race, ethnic or national origin, gender, gender identity or expression, sexual orientation, disability, neurodiversity, genetics, age, religion or belief, marital/civil partnership status, domestic/family status, veteran status or any other difference.
DataOps Engineer employer: Causaly
Contact: Causaly Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the DataOps Engineer role
✨Tip Number 1
Familiarise yourself with the specific technologies mentioned in the job description, such as SQL, Python, and cloud platforms like GCP or AWS. Having hands-on experience with these tools will not only boost your confidence but also demonstrate your readiness for the role.
✨Tip Number 2
Engage with the Causaly community by following their blog and social media channels. This will help you understand their projects and values, allowing you to tailor your conversations during interviews and show genuine interest in their mission.
✨Tip Number 3
Prepare to discuss your previous experiences with data pipelines and automation. Be ready to share specific examples of how you've solved problems or improved processes in past roles, as this aligns closely with what Causaly is looking for.
✨Tip Number 4
Network with current or former employees of Causaly on platforms like LinkedIn. This can provide you with insider insights about the company culture and expectations, which can be invaluable during the interview process.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with SQL, Python, and any relevant DataOps projects. Use specific examples that demonstrate your problem-solving skills and ability to work in a collaborative environment.
Craft a Compelling Cover Letter: In your cover letter, express your passion for data engineering and how it aligns with Causaly's mission. Mention specific projects or experiences that showcase your skills in building automated data pipelines and working with big data.
Showcase Relevant Skills: Clearly outline your technical skills related to the job description, such as experience with cloud computing providers like GCP/AWS, ETL processes, and modern software development practices. This will help you stand out as a strong candidate.
Highlight Team Collaboration: Since the role involves working with various stakeholders, emphasise your ability to collaborate effectively. Share examples of how you've worked with cross-functional teams to achieve common goals in previous roles.
How to prepare for a job interview at Causaly
✨Showcase Your SQL and Python Skills
Since the role requires a strong background in SQL and Python, be prepared to discuss your experience with these technologies. Bring examples of projects where you've used SQL for data manipulation or Python for automation, and be ready to explain your thought process.
✨Understand DataOps Principles
Familiarise yourself with DataOps methodologies and how they apply to data engineering. Be ready to discuss how you can contribute to building efficient data pipelines and improving data quality, as this is crucial for the role.
✨Prepare for Problem-Solving Questions
Expect technical questions that assess your problem-solving abilities. Practise explaining your approach to common data challenges, such as optimising data pipelines or handling large datasets, to demonstrate your analytical skills.
✨Emphasise Collaboration and Communication
Causaly values teamwork and open debate, so highlight your experience working in multi-disciplinary teams. Share examples of how you've collaborated with engineers, product managers, or other stakeholders to achieve project goals.