Senior Software Engineer (AWS)

Full-Time £48,000 - £72,000 / year (est.) Hybrid working
Sage

At a Glance

  • Tasks: Design and operate the reliability layer for Sage’s core data platforms.
  • Company: Join Sage, a leader in data-driven solutions with a collaborative culture.
  • Benefits: Enjoy flexible work patterns, ongoing training, and paid volunteer days.
  • Why this job: Make a real impact on data systems while growing your skills in a supportive environment.
  • Qualifications: Experience in DataOps or DevOps, with strong cloud infrastructure knowledge.
  • Other info: Hybrid role with excellent career growth opportunities and a focus on well-being.

The predicted salary is between £48,000 and £72,000 per year.

We’re looking for a Senior DataOps / DevOps Engineer to design, build, and operate the reliability layer underpinning Sage’s core data platforms, including large-scale batch and streaming data systems. In this role, you’ll own the observability, monitoring, and operational resilience of cloud-native data infrastructure and streaming pipelines, ensuring that data flows, whether event-driven or batch, are performant, reliable, and predictable in production.

This is a hybrid role requiring 3 days per week in our Newcastle office.

First 90 Days
  • 30 days: Get familiar with Sage’s data platform architecture, including batch and streaming pipelines, cloud infrastructure, and existing operational tooling. Understand current monitoring, alerting, logging, and incident response practices, along with data reliability SLAs, failure modes, and engineering standards.
  • 60 days: Begin actively improving observability across key data systems, including dashboards, alerts, and pipeline health checks. Contribute to the operation and reliability of batch and streaming workloads, applying Infrastructure as Code, incident learnings, and DataOps best practices.
  • 90 days: Own major aspects of the data platform’s operational reliability and observability strategy. Drive improvements in alert quality, system resilience, pipeline reliability, and operational maturity. Mentor team members on DataOps and DevOps practices, and help shape how data platforms are built and operated going forward.
Meet the Team

You’ll work alongside data engineers, AI specialists, product managers, and designers in a highly collaborative environment. The team focuses on building scalable internal platforms that power data-driven decision making and AI-enabled products across Sage.

How success will be measured
  • Delivery of reliable, scalable automation and operational capabilities across data ingestion, processing, and platform services.
  • Measurable improvements in platform observability, including clear dashboards and actionable alerts tied to data SLAs such as freshness, latency, and availability.
  • Reduction in operational toil through Infrastructure as Code, repeatable deployments, and improved self-service onboarding for engineering teams.
  • Improved incident response outcomes, including faster detection, faster recovery, and fewer recurring issues through effective post-incident follow-ups.
  • Strong operational quality across environments, with platforms operating securely, predictably, and in line with governance and compliance requirements.
  • Increased visibility into system health across batch and streaming data pipelines.
Skills you’ll gain
  • Deep expertise operating a modern Product Data Platform / Data Hub supporting both batch and streaming workloads.
  • Hands-on experience with streaming and distributed data processing systems and their operational characteristics.
  • Strong exposure to observability engineering for data systems, including metrics, logs, traces, and pipeline health monitoring.
  • Experience shaping platform reliability standards, including alerting strategies, runbooks, and on-call readiness.
  • Practical cloud infrastructure ownership across storage, compute, and analytics layers used by large-scale data platforms.
Snapshot of your day-to-day
  • You’ll design and operate monitoring and alerting that provide real-time visibility into pipeline health, SLA breaches, and platform behaviour.
  • You’ll improve the reliability of batch and streaming data ingestion and processing workloads, focusing on failure recovery and operational robustness.
  • You’ll build and maintain cloud infrastructure and deployment automation to keep environments consistent, secure, and repeatable.
  • You’ll work closely with data engineering and product teams to improve platform onboarding and reduce the effort required to adopt shared data capabilities.
  • You’ll help strengthen governance, compliance, and auditability by improving observability, documentation, and operational controls across the platform.
Must have skills
  • Strong experience as a DataOps, DevOps, or Platform Engineer supporting production data systems.
  • Proven expertise in observability tooling, including monitoring, logging, alerting, dashboards, and operating distributed systems in production.
  • Solid understanding of streaming and event-driven data pipelines and their common failure modes (e.g. lag, backpressure, replay).
  • Strong cloud infrastructure experience (AWS preferred), including networking, compute, storage, and managed services.
  • Hands-on experience with Infrastructure as Code and CI/CD practices for platform and data services.
  • Ability to work across ingestion, processing, and storage layers while collaborating effectively with multiple engineering teams.
  • Excellent communication and collaboration skills in English.
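One of the failure modes listed above, consumer lag, can be monitored with a few lines of arithmetic. This is a minimal sketch for a Kafka-style partitioned topic; the offsets and alert threshold are invented for illustration, and in a real system they would come from the broker's admin API and the team's alerting policy.

```python
# Lag per partition = latest offset on the broker minus last committed offset.
def partition_lag(end_offsets: dict[int, int],
                  committed: dict[int, int]) -> dict[int, int]:
    return {p: end_offsets[p] - committed.get(p, 0) for p in end_offsets}

def lagging_partitions(end_offsets: dict[int, int],
                       committed: dict[int, int],
                       threshold: int) -> list[int]:
    """Partitions whose lag exceeds the alert threshold (candidates for paging)."""
    return [p for p, lag in partition_lag(end_offsets, committed).items()
            if lag > threshold]

# Invented example: partition 1 has fallen 600 records behind.
end = {0: 1_000, 1: 1_000, 2: 1_000}
acked = {0: 990, 1: 400, 2: 1_000}
print(lagging_partitions(end, acked, threshold=100))  # [1]
```

Sustained growth in this number, rather than its absolute value, is usually the stronger signal of backpressure.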
Nice to have skills
  • Experience operating data platforms built on technologies such as Snowflake and S3-based data lake patterns.
  • Familiarity with distributed processing and streaming ecosystems such as Kafka or Flink.
  • Experience implementing data pipeline health monitoring beyond infrastructure metrics (e.g. freshness, completeness, anomaly detection).
  • Experience supporting multi-team internal platforms with a “platform as a product” mindset.
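The "beyond infrastructure metrics" monitoring mentioned above can be as simple as a statistical check on daily row counts. A minimal sketch, assuming a z-score rule; the history values and threshold are made up for illustration.

```python
import statistics

def is_anomalous(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """Flag today's count if it deviates more than z_threshold standard
    deviations from the historical mean (a toy completeness check)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(today - mean) > z_threshold * stdev

# Invented history of daily row counts for one pipeline.
daily_rows = [1_000_000, 1_020_000, 980_000, 1_010_000, 995_000, 1_005_000]
print(is_anomalous(daily_rows, 400_000))    # True  — likely a partial load
print(is_anomalous(daily_rows, 1_002_000))  # False
```

Real data-quality tooling layers more on top (seasonality, per-partition checks, freshness joins), but the core idea is the same: treat pipeline output itself as a monitored signal, not just the infrastructure running it.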

At Sage, we offer you an environment where you can grow professionally without compromising your personal well-being. Our benefits package is designed to provide stability, flexibility, and balance:

  • Work away scheme for up to 10 weeks a year
  • Ongoing training and professional development
  • Paid 5 days yearly to volunteer through our Sage Foundation
  • Flexible work patterns and hybrid working

Senior Software Engineer (AWS) employer: Sage

At Sage, we pride ourselves on being an exceptional employer, offering a dynamic and collaborative work culture that fosters professional growth and personal well-being. Our Newcastle office provides a hybrid working environment, allowing flexibility while you contribute to innovative data solutions. With comprehensive benefits, ongoing training opportunities, and a commitment to community engagement through volunteer days, Sage is dedicated to supporting your career journey in a meaningful way.

Contact Detail:

Sage Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the Senior Software Engineer (AWS) role

✨Tip Number 1

Network like a pro! Reach out to folks in your industry on LinkedIn or at local meetups. A friendly chat can lead to opportunities that aren’t even advertised yet.

✨Tip Number 2

Show off your skills! Create a portfolio or GitHub repository showcasing your projects, especially those related to DataOps and DevOps. This gives potential employers a taste of what you can do.

✨Tip Number 3

Prepare for interviews by practising common questions and scenarios specific to the role. Think about how your experience aligns with the responsibilities of ensuring operational resilience and observability.

✨Tip Number 4

Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who are proactive!

We think you need these skills to ace the Senior Software Engineer (AWS) role

DataOps
DevOps
Platform Engineering
Observability Tooling
Monitoring
Logging
Alerting
Dashboards
Distributed Systems
Streaming Data Pipelines
Event-Driven Architectures
Cloud Infrastructure (AWS)
Infrastructure as Code
CI/CD Practices
Collaboration Skills

Some tips for your application 🫡

Tailor Your CV: Make sure your CV reflects the skills and experiences that match the Senior DataOps / DevOps Engineer role. Highlight your expertise in observability tooling and cloud infrastructure, especially AWS, to catch our eye!

Craft a Compelling Cover Letter: Use your cover letter to tell us why you're passionate about data platforms and how your experience aligns with our goals. Share specific examples of how you've improved operational resilience or observability in past roles.

Showcase Your Projects: If you've worked on relevant projects, don’t hesitate to include them! Whether it's building monitoring dashboards or implementing Infrastructure as Code, we want to see what you've accomplished and how it relates to our needs.

Apply Through Our Website: We encourage you to apply directly through our website for the best chance of getting noticed. It’s the easiest way for us to keep track of your application and ensure it reaches the right team!

How to prepare for a job interview at Sage

✨Know Your Stuff

Before the interview, dive deep into Sage’s data platform architecture. Familiarise yourself with batch and streaming pipelines, cloud infrastructure, and operational tooling. This knowledge will not only impress your interviewers but also help you answer technical questions confidently.

✨Showcase Your Experience

Be ready to discuss your hands-on experience with observability tooling and Infrastructure as Code. Prepare specific examples of how you've improved system reliability or reduced operational toil in previous roles. This will demonstrate your practical skills and how they align with the job requirements.

✨Ask Smart Questions

Prepare insightful questions about the team’s current challenges and future projects. This shows your genuine interest in the role and helps you understand how you can contribute effectively. For instance, ask about their approach to incident response or how they measure platform observability.

✨Emphasise Collaboration

Since this role involves working closely with various teams, highlight your collaboration skills. Share examples of how you've successfully worked with data engineers, product managers, or other stakeholders to achieve common goals. This will illustrate your ability to thrive in a highly collaborative environment.
