At a Glance
- Tasks: Design and build data pipelines on AWS for real-time and batch data.
- Company: Arena Entertainment, a leader in digital entertainment and iGaming.
- Benefits: Competitive salary, flexible work options, and opportunities for professional growth.
- Other info: Collaborative environment with a focus on innovation and career development.
- Why this job: Join a dynamic team and leverage AI to solve real-world data challenges.
- Qualifications: 7+ years in Data Engineering with strong AWS and SQL skills.
The predicted salary is between £70,000 and £90,000 per year.
We are looking for a Senior Data Engineer to join our growing data function. It's all about taking ownership of data pipelines on AWS - designing, building, and making them run smoothly for both real-time and batch data. We need the Data Engineering function to act as the Subject Matter Expert for all stakeholders (marketing, product, retention, etc.), ensuring they have solid, reliable data for their decisions.
Example projects include migrating to low-latency, real-time data replication; building out a mature dbt modelling layer; and implementing AI-driven monitoring for automated issue detection. Arena Entertainment operates multiple iGaming and online casino brands within the digital entertainment space. This role will initially focus on the Metawin and HIT brands, which have a strong crypto focus. At Arena, AI is a must-have tool, not a nice-to-have. We expect all engineers to use AI tools (e.g. Copilot, Claude) to code faster, automate boring stuff, and generally be smarter about their work.
You are a Team Player. We need you to jam with the data science team and other departments. Be ready to jump in and help your teammates with reporting or analytics if they're swamped. You believe documenting everything is part of the job. Seriously, we're growing fast with lots of brands, so clear docs are essential for keeping things tidy, transparent, and sustainable. You love building things right, but you also understand the need to be flexible. We need to clean up tech debt, but sometimes we just need to ship fast. Be ready to make trade-offs.
You are not afraid to learn new things. We want you to be curious about how the business works and use your tech skills to solve real-world problems. You are a great communicator. You need to talk clearly to both techies and non-tech people. Let us know when you hit a blocker, and don’t suffer in silence!
Our tech stack
- Cloud Platform: AWS (S3, Lambda, DMS, CloudWatch)
- Data Warehousing: Snowflake, Postgres
- Transformation & Modeling: dbt (Core/Cloud), SQL, Python
- Orchestration: Airflow, Dagster
- Data Ingestion (ETL/CDC): Fivetran, DMS, Debezium
- Streaming & Real-time: Kafka, Kinesis
- Infrastructure & DevOps: Terraform, Docker, Kubernetes/Helm, ArgoCD, GitHub Actions
- Data Visualization (BI): QuickSight, Power BI
- AI & Productivity: GitHub Copilot, Claude, Gemini
Key Responsibilities
- Build scalable ETL/ELT pipelines (using dbt, Airflow, Fivetran etc.).
- Transform raw data into clean, usable models using dbt (and AI for docs/tests).
- Manage and optimise data warehouses like Snowflake and Redshift.
- Integrate different data sources (CRM, payments, games, etc.).
- Own initiatives for data quality improvement and monitoring (e.g. anomaly detection, automated alerts).
- Keep an eye on performance, cost, and security on AWS.
- Work closely with cross-functional teams to bridge product development, data, and operations, establishing yourself as the Subject Matter Expert.
- Help us move to real-time data ingestion & ETL using tools like DMS, Kafka, or Kinesis.
- Help mentor the other engineers and contribute to the team's AI strategy.
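To make the data-quality monitoring responsibility above concrete, here is a minimal sketch of automated anomaly detection on daily pipeline row counts. This is a hypothetical illustration, not Arena's actual implementation: the function name, the sample data, and the choice of a MAD-based modified z-score (rather than mean/stdev, which a single outage day would skew) are all assumptions for the example.

```python
import statistics

def detect_anomalies(daily_counts, threshold=3.5):
    """Flag (day, count) pairs whose modified z-score exceeds the threshold.

    Uses the median absolute deviation (MAD) as the spread estimate, so a
    single outage day does not inflate the baseline the way a plain
    mean/standard-deviation z-score would.
    """
    counts = [c for _, c in daily_counts]
    median = statistics.median(counts)
    mad = statistics.median(abs(c - median) for c in counts)
    if mad == 0:
        return []  # counts are essentially constant; nothing to flag
    return [
        (day, count)
        for day, count in daily_counts
        # 0.6745 rescales MAD to be comparable to a standard deviation
        if abs(0.6745 * (count - median) / mad) > threshold
    ]

# Hypothetical example: eight healthy days, then an ingestion outage.
history = [(f"2024-06-0{d}", 10_000 + d * 10) for d in range(1, 9)]
history.append(("2024-06-09", 120))
print(detect_anomalies(history))  # only the outage day is flagged
```

In practice a check like this would run on warehouse metadata (e.g. row counts per table per load) and feed an alerting channel; the robust-statistics choice matters because pipeline outages are exactly the points you do not want polluting your baseline.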
Must haves
- 7+ years in Data Engineering.
- Solid hands-on experience with AWS.
- You really know ELT design and data warehousing best practices.
- You're an expert in optimising Snowflake.
- You're a dbt pro (macros, testing, modularisation).
- Excellent SQL and Python skills.
- Good CI/CD and Git skills.
- You have used AI coding assistants to work efficiently.
Nice to haves
- Experience building pipelines that handle high-volume data.
- Know the iGaming lingo (GGR, LTV, RTP, acquisition KPIs).
- Experience with affiliate or game provider data feeds.
- Familiarity with real-time data ingestion.
- Exposure to data science/ML pipelines (SageMaker, Bedrock).
- Used AI tools for monitoring or query optimisation before.
- QuickSight experience (especially SPICE/Direct Query).
Senior Data Engineer in London employer: Arena Entertainment
Contact Details:
Arena Entertainment Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Data Engineer role in London
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, especially those already working at Arena. A friendly chat can sometimes lead to insider info about the role and even a referral, which can give your application a serious boost.
✨Tip Number 2
Show off your skills in real-time! If you get the chance for a technical interview or coding challenge, use it to demonstrate your expertise with AWS and data pipelines. Don’t just talk about your experience; let them see how you tackle problems using tools like dbt and Airflow.
✨Tip Number 3
Be ready to discuss your past projects! Prepare some examples where you’ve built scalable ETL pipelines or optimised data warehouses. This is your chance to shine and show how you can bring value to the team at Arena.
✨Tip Number 4
Don’t forget to ask questions! When you get the chance, ask about the team dynamics, ongoing projects, or how they use AI in their processes. It shows you’re genuinely interested and helps you figure out if it’s the right fit for you.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV speaks directly to the role of Senior Data Engineer. Highlight your experience with AWS, dbt, and any relevant projects that showcase your skills in building data pipelines and optimising data warehouses.
Show Off Your Projects: Include specific examples of projects you've worked on that relate to real-time data ingestion or data quality improvement. We love seeing how you've tackled challenges and what tools you used, especially if they align with our tech stack!
Be Clear and Concise: When writing your cover letter or application, keep it straightforward. We appreciate clear communication, so make sure to explain your experience and how it relates to the role without unnecessary fluff.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to see your application and get you into our system quickly. Plus, it shows you're keen on joining our team!
How to prepare for a job interview at Arena Entertainment
✨Know Your Tech Stack
Make sure you’re well-versed in the tech stack mentioned in the job description. Brush up on AWS services, dbt, and data warehousing best practices. Being able to discuss your hands-on experience with these tools will show that you're ready to hit the ground running.
✨Showcase Your Problem-Solving Skills
Prepare examples of how you've tackled real-world data challenges in the past. Whether it’s optimising a data pipeline or implementing AI-driven monitoring, be ready to share specific instances where your skills made a difference. This will demonstrate your ability to apply your knowledge effectively.
✨Communicate Clearly
Since you'll be working with both technical and non-technical teams, practice explaining complex concepts in simple terms. Think about how you would describe your previous projects to someone without a technical background. Clear communication is key to being a great team player.
✨Emphasise Documentation
Highlight your commitment to documentation during the interview. Share how you’ve maintained clear and concise records in your previous roles. This is crucial for keeping things tidy and transparent, especially in a fast-growing environment like Arena Entertainment.