At a Glance
- Tasks: Design and maintain data pipelines to transform raw data into valuable insights.
- Company: Join Lilly, a global healthcare leader dedicated to making life better for people worldwide.
- Benefits: Competitive salary, health benefits, and opportunities for professional growth.
- Why this job: Be part of innovative projects that directly impact healthcare and improve lives.
- Qualifications: Proficiency in SQL and Python, with experience in cloud platforms like AWS or Azure.
- Other info: Collaborative environment with a focus on mentorship and career development.
The predicted salary is between £40,000 and £50,000 per year.
At Lilly, we unite caring with discovery to make life better for people around the world. We are a global healthcare leader headquartered in Indianapolis, Indiana. Our employees around the world work to discover and bring life-changing medicines to those who need them, improve the understanding and management of disease, and give back to our communities through philanthropy and volunteerism. We give our best effort to our work, and we put people first. We’re looking for people who are determined to make life better for people around the world.
About the Tech@Lilly Organization: Tech@Lilly builds and maintains capabilities using cutting-edge technologies, much like the most prominent tech companies. What differentiates Tech@Lilly is that we create new possibilities through tech to advance our purpose – creating medicines that make life better for people around the world – such as data-driven drug discovery and connected clinical trials. We hire the best technology professionals from a variety of backgrounds so they can bring an assortment of knowledge, skills, and diverse thinking to deliver innovative solutions in every area of the enterprise.
About the Business Function: Tech@Lilly Business Units is a global organization strategically positioned to create meaningful connections and remarkable experiences through information and technology leadership and solutions, so people feel genuinely cared for. The Business Unit organization is accountable for designing, developing, and supporting commercial and customer engagement services and capabilities that span multiple Business Units (Bio-Medicines, Diabetes, Oncology, International), functions, geographies, and digital channels.
The areas supported by the Business Units include: Customer Operations, Marketing and Commercial Operations, Medical Affairs, Market Research, Pricing, Reimbursement and Access, Customer Support Programs, Digital Production and Distribution, Global Patient Outcomes, and Real-World Evidence.
A Data Engineer is responsible for designing, developing, and maintaining the data solutions that ensure the availability and quality of data for analysis and/or business transactions. They design and implement efficient data storage, processing, and retrieval solutions for datasets; build data pipelines; optimize database designs; and work closely with data scientists, architects, and analysts to ensure data quality and accessibility. Data engineers require strong skill sets in data integration, acquisition, cleansing, harmonization, and transformation. They play a crucial role in transforming raw data into analysis-ready datasets that enable organizations to unlock valuable insights for decision-making.
What you’ll be doing:
- Design, build, and maintain scalable and reliable data pipelines for batch and real-time processing.
- Develop and optimize data models, ETL/ELT workflows, and data integration across multiple systems and platforms.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver solutions.
- Implement data governance, security, and quality standards across data assets.
- Lead end-to-end data engineering projects and contribute to architectural decisions.
- Drive cloud-native solutions on AWS, Azure, or GCP using tools such as Glue, EMR, and Databricks.
- Promote best practices in coding, testing, and deployment.
- Monitor, troubleshoot, and improve the performance and reliability of data infrastructure.
- Automate manual processes and identify opportunities to optimize data workflows and reduce costs.
How You Will Succeed:
- Deliver scalable solutions by designing robust data pipelines and architectures that meet performance and reliability standards.
- Collaborate effectively with cross-functional teams to turn business needs into technical outcomes.
- Lead with expertise, mentoring peers and driving adoption of best practices in data engineering and cloud technologies.
- Continuously improve systems through automation, performance tuning, and proactive issue resolution.
- Communicate with clarity to ensure alignment across technical and non-technical stakeholders.
What You Should Bring:
- Strong proficiency in SQL and Python.
- Hands‑on experience with cloud platforms (AWS, Azure, or GCP) and tools like Glue, EMR, Redshift, Lambda, or Databricks.
- Deep understanding of ETL/ELT workflows, data modelling, and data warehousing concepts.
- Familiarity with big data and streaming frameworks (e.g., Apache Spark, Kafka, Flink).
- Knowledge of data governance, security, and quality practices.
- Working knowledge of Databricks for building and optimizing scalable data pipelines and analytics workflows.
- Experience with CI/CD, version control (Git), and infrastructure‑as‑code tools is a plus.
- A problem‑solving mindset, attention to detail, and a passion for clean, maintainable code.
- Strong communication and collaboration skills to work with both technical and non‑technical stakeholders.
Basic Qualifications and Experience Requirement:
- Bachelor’s degree in Computer Science, Information Technology, Management Information Systems, or equivalent work experience.
- 1–3 years of overall experience in data engineering using core technologies such as SQL, Python, PySpark, and AWS services including Lambda, Glue, S3, Redshift, Athena, and IAM roles/policies.
- 1+ years of experience working in Agile environments, with hands‑on experience using GitHub and CI/CD pipelines for code deployment.
- 1+ years of experience with orchestration tools like Airflow for workflow automation.
- Proven experience in architecting and building high‑performance, scalable data pipelines following Data Lakehouse, Data Warehouse, and Data Mart standards.
- Strong expertise in data modelling, managing large datasets, and implementing secure, compliant data governance practices.
- Experience in leading a small team of data engineers and providing technical mentorship.
- Ability to collaborate with business stakeholders to translate key business requirements into scalable technical solutions.
- Familiarity with security models and developing solutions on large‑scale, distributed data systems.
Additional Skills/Preferences:
- Pharmaceutical or healthcare industry experience.
- Experience partnering with and influencing vendor resources on solution development, ensuring a shared understanding of the data and technical direction for solutions, as well as delivery.
- AWS Certified Data Engineer - Associate.
- Databricks Certified Data Engineer (Associate or Professional).
Lilly is dedicated to helping individuals with disabilities to actively engage in the workforce, ensuring equal opportunities when vying for positions. If you require accommodation to submit a resume for a position at Lilly, please complete the accommodation request form here for further assistance. Please note this is for individuals to request an accommodation as part of the application process and any other correspondence will not receive a response. Lilly does not discriminate on the basis of age, race, color, religion, gender, sexual orientation, gender identity, gender expression, national origin, protected veteran status, disability or any other legally protected status.
BU Global Data Engineer in Bracknell employer: Eli Lilly and Company
Contact Detail:
Eli Lilly and Company Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the BU Global Data Engineer role in Bracknell
✨Tip Number 1
Network like a pro! Reach out to current employees at Lilly or in the data engineering field. Use LinkedIn to connect and ask for informational chats. It’s all about making those connections that can lead to job opportunities.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your data projects, especially those using SQL, Python, or cloud platforms. This gives you a chance to demonstrate your expertise and passion for data engineering.
✨Tip Number 3
Prepare for interviews by brushing up on common data engineering questions and scenarios. Practice explaining your past projects and how you tackled challenges. Confidence is key, so know your stuff!
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen. Plus, it shows you’re genuinely interested in joining the Lilly team and making a difference.
We think you need these skills to ace the BU Global Data Engineer role in Bracknell
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the BU Global Data Engineer role. Highlight your experience with SQL, Python, and cloud platforms like AWS or Azure. We want to see how your skills align with what we're looking for!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about data engineering and how you can contribute to our mission at Lilly. Keep it concise but impactful – we love a good story!
Showcase Your Projects: If you've worked on any relevant projects, make sure to mention them! Whether it's a personal project or something from your previous job, we want to see how you've applied your skills in real-world scenarios.
Apply Through Our Website: Don't forget to apply through our website! It's the best way to ensure your application gets into the right hands. Plus, it shows us you're serious about joining our team at Lilly!
How to prepare for a job interview at Eli Lilly and Company
✨Know Your Tech Inside Out
Make sure you brush up on your SQL and Python skills, as well as your experience with cloud platforms like AWS, Azure, or GCP. Be ready to discuss specific projects where you've designed data pipelines or optimised ETL workflows, as this will show your hands-on experience.
✨Showcase Your Collaboration Skills
Since the role involves working closely with data scientists and business stakeholders, prepare examples of how you've successfully collaborated in the past. Highlight any instances where you translated technical jargon into layman's terms to ensure everyone was on the same page.
✨Demonstrate Problem-Solving Abilities
Be prepared to discuss challenges you've faced in previous roles and how you overcame them. This could involve troubleshooting data infrastructure issues or automating manual processes. Showing a proactive approach to problem-solving will impress your interviewers.
✨Communicate Clearly and Confidently
Practice articulating your thoughts clearly, especially when discussing complex technical concepts. Remember, you'll need to communicate effectively with both technical and non-technical stakeholders, so clarity is key. Consider doing mock interviews to refine your delivery.