At a Glance
- Tasks: Build and optimise data pipelines, develop REST APIs, and ensure data security.
- Company: Join a pioneering health tech company revolutionising clinical research with innovative data solutions.
- Benefits: Enjoy remote work, competitive salary up to £80,000, and opportunities for career growth.
- Why this job: Make a meaningful impact on healthcare while working with cutting-edge technology in a dynamic team.
- Qualifications: Proficiency in Databricks, Python, and SQL, plus experience with Azure Data Factory, is required.
- Other info: This role offers a unique chance to shape the future of clinical research.
The predicted salary is between £48,000 and £64,000 per year.
Exciting Opportunity in Health Tech!
Location: Remote
Salary: Up to £80,000 + benefits
Reporting To: Vice President of Software Development
Are you passionate about health tech and innovation? Do you want to be at the forefront of transforming clinical research with cutting-edge technology? If so, we have an exciting new role for you! Join our dynamic and forward-thinking team as a Data Engineer and help us build secure, scalable microservices that operationalise clinical research applications. This is your chance to make a meaningful impact on healthcare while working with some of the most advanced technologies in data engineering.
About Us
We are a pioneering health tech company dedicated to revolutionising clinical research through innovative data solutions. Our cross-functional team, including Frontend Developers, QA Engineers, and DevOps Engineers, collaborates to create high-performance data pipelines and REST APIs that drive AI applications and external data integrations.
Your Role
- Build and Optimise Data Pipelines: Implement high-performance data pipelines for AI applications using Databricks (a brief illustrative sketch follows this list).
- Develop REST APIs: Create REST APIs required for seamless external data integrations.
- Ensure Data Security: Apply protocols and standards to secure clinical data in motion and at rest.
- Shape Data Workflows: Use your expertise with Databricks components such as Delta Lake, Unity Catalog, and MLflow to ensure our data workflows are efficient, secure, and reliable.
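To give a flavour of the pipeline work described above, here is a minimal, illustrative PySpark/Delta Lake sketch of the kind of step this role might own on Databricks. The table and column names are hypothetical and are not taken from the role description.

```python
# Illustrative sketch only: reads a hypothetical raw clinical events table,
# applies a light transformation, and appends the result to a Delta table
# that downstream AI applications could consume.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = spark.read.table("raw.clinical_events")  # hypothetical source table

curated = (
    raw.filter(F.col("event_type").isNotNull())           # drop incomplete events
       .withColumn("ingested_at", F.current_timestamp())  # audit column
)

curated.write.format("delta").mode("append").saveAsTable("curated.clinical_events")
```

In practice, a step like this would typically run inside a Databricks job or workflow, with Unity Catalog governing table access and Azure Data Factory handling orchestration.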
Key Responsibilities
- Data Engineering with Databricks: Utilise Databricks to design and maintain scalable data infrastructure.
- Integration with Azure Data Factory: Leverage Azure Data Factory for orchestrating and automating data movement and transformation.
- Python Development: Write clean, efficient code in Python (3.x), using frameworks like FastAPI and Pydantic (see the brief sketch after this list).
- Database Management: Design and manage relational schemas and databases, with a strong focus on SQL and PostgreSQL.
- CI/CD and Containerization: Implement CI/CD pipelines and manage container technologies to support a robust development environment.
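As an illustration of the Python work mentioned in the list above, the following is a minimal FastAPI + Pydantic sketch; the endpoint, model, and field names are hypothetical and only indicative of the style of service this role might build.

```python
# Illustrative sketch only: a small FastAPI service with a Pydantic model,
# of the kind that might back an external data integration endpoint.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class StudyRecord(BaseModel):
    study_id: str  # hypothetical identifier for a clinical study
    site: str      # recruiting site name
    enrolled: int  # number of enrolled participants

@app.post("/studies", response_model=StudyRecord)
def create_study(record: StudyRecord) -> StudyRecord:
    # A real service would authenticate the caller, validate business rules,
    # and persist the record securely before returning it.
    return record
```

Assuming the file is saved as main.py, a service like this could be run locally with `uvicorn main:app --reload`.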
Requirements
- Expertise in Databricks: Proficiency with Databricks components such as Delta Lake, Unity Catalog, and MLflow.
- Azure Data Factory Knowledge: Experience with Azure Data Factory for data orchestration.
- Clinical Data Security: Understanding of protocols and standards related to securing clinical data.
- Python Proficiency: Strong skills in Python (3.x), FastAPI, Pydantic, and Pytest.
- SQL and Relational Databases: Knowledge of SQL, relational schema design, and PostgreSQL.
- CI/CD and Containers: Familiarity with CI/CD practices and container technologies.
- Data Modelling and ETL/ELT: Experience with data modelling, ETL/ELT processes, and data lakes.
Why Join Us?
- Innovative Environment: Be part of a team that is pushing the boundaries of health tech and clinical research.
- Career Growth: Opportunities for professional development and career advancement.
- Cutting-Edge Technology: Work with the latest tools and platforms in data engineering.
- Impactful Work: Contribute to projects that have a real-world impact on healthcare and clinical research.
If you are a versatile Data Engineer with a passion for health tech and innovation, we would love to hear from you. This is a unique opportunity to shape the future of clinical research with your data engineering expertise. Apply today!
Data Engineer employer: Computer Futures / SThree Group
Contact Details:
Computer Futures / SThree Group Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role
✨Tip Number 1
Familiarise yourself with Databricks and its components like Delta Lake and MLflow. Being able to discuss specific projects or experiences where you've used these tools will show your expertise and passion for the role.
✨Tip Number 2
Network with professionals in the health tech industry, especially those working with data engineering. Engaging in relevant online communities or attending webinars can help you gain insights and potentially get referrals.
✨Tip Number 3
Stay updated on the latest trends in clinical data security and data orchestration with Azure Data Factory. Being knowledgeable about current best practices will demonstrate your commitment to the field and readiness for the challenges of the role.
✨Tip Number 4
Prepare to discuss your experience with CI/CD pipelines and container technologies. Highlighting specific examples of how you've implemented these practices in past projects will set you apart from other candidates.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with Databricks, Python, and Azure Data Factory. Emphasise any relevant projects or roles that showcase your skills in data engineering and clinical data security.
Craft a Compelling Cover Letter: Write a cover letter that reflects your passion for health tech and innovation. Mention specific technologies you’ve worked with, such as Delta Lake and FastAPI, and explain how your background aligns with the company's mission.
Showcase Relevant Projects: Include examples of past projects where you built data pipelines or developed REST APIs. Highlight your role in these projects and the impact they had on the organisation or clients.
Highlight Continuous Learning: Mention any recent courses, certifications, or workshops related to data engineering, especially those focusing on Databricks or cloud technologies. This shows your commitment to staying updated in a fast-evolving field.
How to prepare for a job interview at Computer Futures / SThree Group
✨Showcase Your Technical Skills
Be prepared to discuss your experience with Databricks, Python, and Azure Data Factory. Bring examples of projects where you've built data pipelines or developed REST APIs, as this will demonstrate your hands-on expertise.
✨Understand the Company’s Mission
Research the health tech company and its focus on clinical research innovation. Being able to articulate how your skills align with their mission will show your genuine interest in the role and the impact you can make.
✨Prepare for Scenario-Based Questions
Expect questions that assess your problem-solving abilities in real-world scenarios. Think about challenges you've faced in data engineering and how you overcame them, particularly in relation to data security and workflow efficiency.
✨Ask Insightful Questions
Prepare thoughtful questions about the team dynamics, the technologies they use, and future projects. This not only shows your enthusiasm but also helps you gauge if the company culture aligns with your values.