At a Glance
- Tasks: Lead impactful data science projects and build advanced AI solutions.
- Company: Join LexisNexis' dynamic Data Science & AI team.
- Benefits: Competitive salary, flexible work options, and opportunities for professional growth.
- Other info: Collaborative environment with diverse challenges and excellent career advancement.
- Why this job: Make a real impact with cutting-edge technology in a global organisation.
- Qualifications: Strong Python skills and experience with machine learning and cloud platforms.
The predicted salary is between £60,000 and £80,000 per year.
hackajob is collaborating with LexisNexis to connect them with exceptional professionals for this role. Are you ready to take your data science expertise to the next level and lead impactful projects? Would you enjoy working on advanced machine learning models and cutting-edge analytics solutions?
About Our Team
We are a fast-moving, high-impact Data Science & AI team building real-world GenAI and ML solutions across the entire LexisNexis business. Our work powers smarter decisions for Product, Sales, Finance, Marketing, Customer Success, and Engineering—everything from predictive models to enterprise GenAI apps to automation that transforms how teams operate. We are data science generalists who love variety: one day we are designing a new GenAI workflow; the next, deploying a model into Salesforce or engineering a pipeline in Databricks. We own our projects end-to-end and partner directly with stakeholders to deliver solutions that get used and make a measurable difference. If you want to experiment, build, ship, and see your work drive real impact across a global organisation, you will feel right at home with us.
About the role
We are seeking a Senior Data Scientist II who is a Data Science Generalist. The ideal candidate is comfortable working across GenAI, traditional machine learning, analytics, data engineering, cloud platforms, and enterprise system integrations. In this role, you will design, build, and deploy AI and ML solutions that support key business functions across Product, Sales, Finance, Marketing, Customer Success, and Engineering. You will work end-to-end across ideation, modelling, experimentation, prompt engineering, deployment, monitoring, and stakeholder communication. This position is ideal for a versatile data scientist who enjoys solving diverse problems, working with multiple systems, and driving measurable business impact.
Responsibilities
- Build GenAI applications using OpenAI APIs, embeddings, vector search, and retrieval-augmented generation (RAG).
- Design advanced prompt engineering patterns and automated evaluation frameworks for LLM quality and safety.
- Develop and deploy traditional ML models (e.g., churn, propensity, sentiment/feedback, lead scoring, customer intelligence).
- Own the end-to-end model lifecycle: data prep, experimentation, deployment, and monitoring.
- Build and optimise feature pipelines and scoring jobs using Python, Databricks, Spark, Delta Lake, and AWS.
- Use AWS services (S3, Redshift, Lambda) for data automation, orchestration, and scalable processing.
- Ensure data quality, observability, lineage, and documentation across data and ML pipelines.
- Deliver enterprise integrations with Salesforce (SFDC) and Oracle platforms (Fusion, Service Cloud, PeopleSoft) for batch and real-time workflows.
- Create analytics solutions with cross-functional partners: define KPIs, connect customer/product/finance/CRM data, and drive actionable recommendations.
- Productionise reliably: provide L2/L3 support, monitor drift/data quality/prompt performance, run root-cause analysis, and implement preventative fixes.
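The first responsibility describes retrieval-augmented generation (RAG): retrieve the most relevant documents for a query, then ground the LLM prompt in them. As a rough illustration only, here is a minimal sketch of that retrieve-then-prompt step. The toy three-dimensional vectors stand in for real OpenAI embeddings, and the document set, function names, and scoring are hypothetical, not a description of LexisNexis's actual stack.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Toy in-memory "vector store"; in practice these would be OpenAI
# embeddings held in a proper vector database.
DOCS = {
    "Churn model scores customers monthly.": [0.9, 0.1, 0.0],
    "Lead scoring runs inside Salesforce.": [0.1, 0.9, 0.1],
    "Pipelines are orchestrated in Databricks.": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the top-k documents most similar to the query vector."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

def build_prompt(question, query_vec):
    """Ground the LLM prompt in retrieved context -- the 'RAG' step."""
    context = "\n".join(retrieve(query_vec, k=2))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

In a real deployment the query would be embedded with the same model as the documents, and the assembled prompt sent to the chat completion API; the retrieval logic itself stays this simple.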
Requirements
- Strong Python programming skills.
- Direct experience with OpenAI APIs, LLM workflows, and prompt engineering.
- Solid machine learning fundamentals, including supervised learning, NLP, and feature engineering.
- Experience with Databricks, Spark, and Delta Lake.
- Strong SQL skills with experience working on large datasets.
- Experience with AWS, including S3 and Lambda.
- Familiarity with Redshift, Snowflake, or other cloud data warehouses.
- Experience with behavioural datasets.
- Ability to work across machine learning, data engineering, analytics, and integrations.
- Ability to design end-to-end solutions spanning data, models, APIs, and automation workflows.
Senior Data Scientist II employer: hackajob
Contact Detail:
hackajob Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Data Scientist II role
✨Tip Number 1
Network like a pro! Reach out to your connections in the data science field, attend meetups, and engage in online forums. You never know who might have the inside scoop on job openings or can refer you directly to hiring managers.
✨Tip Number 2
Showcase your skills through projects! Create a portfolio that highlights your work with GenAI, machine learning models, and analytics solutions. This not only demonstrates your expertise but also gives you something tangible to discuss during interviews.
✨Tip Number 3
Prepare for interviews by practising common data science questions and case studies. Think about how you would approach real-world problems, especially those relevant to LexisNexis. We want to see your thought process and problem-solving skills in action!
✨Tip Number 4
Apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you’re genuinely interested in joining our team and making an impact with your data science skills.
Some tips for your application 🫡
Show Off Your Skills: Make sure to highlight your Python programming skills and experience with OpenAI APIs in your application. We want to see how your expertise aligns with the role, so don’t hold back on showcasing your best projects!
Tailor Your Application: Take a moment to customise your application for this specific role. Mention your experience with machine learning, data engineering, and analytics, and how they relate to the responsibilities listed in the job description. We love seeing candidates who take the time to connect their background to what we do!
Be Clear and Concise: When writing your application, keep it clear and to the point. Use bullet points where possible to make it easy for us to read through your qualifications and experiences. We appreciate a well-structured application that gets straight to the good stuff!
Apply Through Our Website: Don’t forget to apply through our website! It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it shows us you’re serious about joining our team!
How to prepare for a job interview at hackajob
✨Know Your Stuff
Make sure you brush up on your Python programming skills and be ready to discuss your experience with OpenAI APIs and LLM workflows. They’ll likely ask you about specific projects you've worked on, so have a couple of examples ready that showcase your machine learning fundamentals and how you've applied them in real-world scenarios.
✨Showcase Your Versatility
This role is all about being a data science generalist, so highlight your ability to work across different areas like GenAI, traditional machine learning, and data engineering. Be prepared to discuss how you've tackled diverse problems and the impact your solutions had on previous projects.
✨Understand the Business Impact
Since the team focuses on delivering measurable business impact, think about how your work has driven results in past roles. Be ready to discuss KPIs you've defined and how your analytics solutions have influenced decision-making in product, sales, or marketing.
✨Prepare for Technical Questions
Expect some technical questions around building and deploying models, especially using tools like Databricks, Spark, and AWS. Brush up on your SQL skills too, as they may ask you to solve problems involving large datasets. Practising coding challenges can also help you feel more confident.