At a Glance
- Tasks: Join our AI & Data Ops team to develop and maintain innovative software solutions.
- Company: Dynamic tech company focused on AI and data operations.
- Benefits: Competitive salary, health benefits, remote work options, and growth opportunities.
- Why this job: Make a real impact by working with cutting-edge technologies in a collaborative environment.
- Qualifications: 2-4 years of software engineering experience with strong Python skills.
- Other info: Exciting projects with excellent career advancement potential.
The predicted salary is between £45,000 and £54,000 per year.
We are looking for a Software Engineer (AWS) to join our AI & Data Ops team. In this role, you will take ownership of two core systems: our internal Python library, used across the organisation for data science workflows, and a production data validation system built on AWS. This is a hands-on engineering role: you will write Python daily, own production cloud infrastructure, and collaborate closely with data scientists, DevOps engineers, and analysts to ship reliable, well-tested systems.
Your responsibilities will include:
- As part of the AI & Data Ops team, you will build and maintain internal software that supports the product team in building and deploying KPI trackers.
- Co‑own and maintain the internal Python library used across the organisation for data science workflows, including versioning, releases, and API design.
- Own and operate the Anomaly Detection Framework — a production system built on AWS that detects anomalies before they reach client facing systems.
- Write and maintain AWS Lambda handlers and serverless functions in Python.
- Write Terraform to add, modify, or configure AWS resources (Lambda, Step Functions, S3 buckets, IAM policies, DynamoDB tables, SQS queues), submitting PRs for DevOps review.
- Debug Step Functions executions, investigate CloudWatch logs, and resolve production issues.
- Write and maintain GitHub Actions workflows for CI/CD, automated testing, and package releases.
- Write unit, integration, and regression tests to ensure library and infrastructure stability.
- Conduct code reviews and enforce coding standards for contributions to shared codebases.
- Triage and resolve bug reports and feature requests from internal library users.
- Collaborate with data scientists and analysts to translate requirements into production‑ready code.
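To give a flavour of the day-to-day work, a minimal Lambda handler of the kind described above might look like the following. The event shape, field names, and threshold are illustrative only, not the real system's:

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler sketch: reads records from an SQS-style
    event, flags values outside a fixed threshold, and returns a summary.
    The payload shape and threshold are hypothetical."""
    threshold = 100.0  # illustrative alert threshold
    anomalies = []
    for record in event.get("Records", []):
        body = json.loads(record["body"])
        if abs(body.get("value", 0.0)) > threshold:
            anomalies.append(body)
    return {
        "statusCode": 200,
        "body": json.dumps({"anomaly_count": len(anomalies)}),
    }
```
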
Required Qualifications
Experience:
- 2–4 years in software engineering, data engineering, or a related field.
- Proven track record of independently owning and delivering data projects.
Python:
- Strong Python proficiency with clean, well‑tested, production‑quality code.
- Experience contributing to and maintaining shared Python libraries or packages.
- Proficient with the data science stack: pandas, numpy, matplotlib, scikit‑learn.
- Familiarity with ORM patterns (e.g., SQLAlchemy) and database connectivity in Python.
- Experience with data serialization formats (JSON, HDF5, Parquet).
- Strong understanding of testing frameworks (pytest, unittest) and test‑driven development.
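The "well-tested, production-quality code" expectation above is the pytest style of small, focused tests around pure functions. A minimal sketch (the helper name and data are hypothetical; the pattern is the point):

```python
import pandas as pd

def drop_incomplete_rows(df: pd.DataFrame, required: list[str]) -> pd.DataFrame:
    """Return a copy of df with rows missing any required column removed."""
    return df.dropna(subset=required).reset_index(drop=True)

# pytest discovers and runs functions named test_*:
def test_drops_rows_with_missing_required_values():
    df = pd.DataFrame({"kpi": [1.0, None, 3.0], "label": ["a", "b", "c"]})
    out = drop_incomplete_rows(df, required=["kpi"])
    assert len(out) == 2
    assert out["kpi"].tolist() == [1.0, 3.0]

def test_keeps_rows_when_required_columns_are_complete():
    df = pd.DataFrame({"kpi": [1.0, 2.0]})
    out = drop_incomplete_rows(df, required=["kpi"])
    assert len(out) == 2
```
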
AWS & Cloud:
- Hands‑on experience with core AWS services: Lambda, Step Functions, S3, SQS, DynamoDB, ECS, CloudWatch.
- Comfortable writing production Lambda handlers and debugging serverless architectures.
- Experience with event‑driven architectures (SQS triggers, S3 notifications, Step Functions orchestration).
- Ability to read and write Terraform for AWS resource provisioning.
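Much of the event-driven work above is glue code like the sketch below, which parses a standard S3 event notification inside a Lambda into (bucket, key) pairs. The downstream processing is left abstract:

```python
from urllib.parse import unquote_plus

def extract_s3_objects(event: dict) -> list[tuple[str, str]]:
    """Return (bucket, key) pairs from an S3 event notification.
    Follows the standard S3 notification record shape."""
    objects = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        bucket = s3["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 notifications.
        key = unquote_plus(s3["object"]["key"])
        objects.append((bucket, key))
    return objects
```
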
Other Required Skills:
- Strong Git skills including branching strategies and code review workflows.
- Experience writing CI/CD pipelines (GitHub Actions or similar).
- A collaborative mindset and strong communication skills.
- Self‑motivated with ability to work autonomously.
Nice to Have Skills:
Experience in at least one of the following areas would be a strong plus:
- Statistical anomaly detection methods (Z‑scores, IQR, CUSUM, control charts).
- Time series analysis and forecasting techniques (Prophet, ARIMA, exponential smoothing).
- Machine learning for anomaly detection (Isolation Forest, autoencoders, Bayesian changepoint detection).
- Familiarity with Slack APIs or similar messaging platform integrations.
- API Gateway for webhooks and callback integrations.
- Docker containerisation for packaging and deploying data pipelines.
- Grafana or similar dashboarding and observability tools.
- Experience with data quality and observability frameworks.
- Advanced SQL and database optimisation.
- Understanding of or strong interest in financial markets and business metrics.
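Two of the classical techniques named above, Z-scores and the IQR rule, can be sketched in a few lines of stdlib Python. The thresholds (3 sigma, 1.5 × IQR) are the conventional defaults, not tuned values:

```python
import statistics

def zscore_outliers(values: list[float], z: float = 3.0) -> list[float]:
    """Flag values more than z sample standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if stdev and abs(v - mean) / stdev > z]

def iqr_outliers(values: list[float], k: float = 1.5) -> list[float]:
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]
```

Note that a single extreme value inflates the standard deviation, which is why the IQR rule is often the more robust default on small samples.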
Employer: Oxford Data Plan (Software Engineer, England)
Contact Detail:
Oxford Data Plan Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Software Engineer role in England
✨Tip Number 1
Network like a pro! Reach out to folks in your industry on LinkedIn or at meetups. A friendly chat can lead to opportunities that aren’t even advertised yet.
✨Tip Number 2
Show off your skills! Create a GitHub repo with some of your best projects, especially those using Python and AWS. This gives potential employers a taste of what you can do.
✨Tip Number 3
Prepare for technical interviews by practicing coding challenges. Websites like LeetCode or HackerRank are great for brushing up on your Python skills and problem-solving abilities.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who are proactive!
Some tips for your application 🫡
Tailor Your CV: Make sure your CV reflects the skills and experiences that match the job description. Highlight your Python proficiency, AWS experience, and any relevant projects you've worked on. We want to see how you can contribute to our AI & Data Ops team!
Craft a Compelling Cover Letter: Use your cover letter to tell us why you're passionate about this role and how your background aligns with our needs. Share specific examples of your work with data projects or cloud infrastructure to make your application stand out.
Showcase Your Projects: If you've got a GitHub or portfolio showcasing your coding projects, include it! We love seeing real-world applications of your skills, especially in Python and AWS. It gives us a better idea of your coding style and problem-solving abilities.
Apply Through Our Website: We encourage you to apply directly through our website for a smoother process. This way, we can easily track your application and get back to you quicker. Plus, it shows us you're keen on joining the team!
How to prepare for a job interview at Oxford Data Plan
✨Know Your Python Inside Out
Make sure you brush up on your Python skills, especially around clean coding and testing. Be ready to discuss your experience with shared libraries and how you've contributed to them. Practising coding challenges can help you demonstrate your proficiency during the interview.
✨Familiarise Yourself with AWS Services
Since this role involves a lot of AWS work, get comfortable with core services like Lambda, S3, and DynamoDB. Be prepared to talk about your hands-on experience and any production systems you've built or maintained. It might be helpful to review some common use cases and best practices.
✨Show Off Your Collaboration Skills
This position requires working closely with data scientists and analysts, so be ready to share examples of how you've successfully collaborated in the past. Highlight your communication skills and how you’ve translated requirements into production-ready code.
✨Prepare for Technical Questions
Expect technical questions related to CI/CD pipelines, Terraform, and debugging serverless architectures. Brush up on your knowledge of testing frameworks and be ready to discuss your approach to writing unit and integration tests. Practising mock interviews can help you feel more confident.