At a Glance
- Tasks: Design and scale data infrastructure for trading and research platforms.
- Company: Join a leading global trading technology firm driving AI and data innovation.
- Benefits: Enjoy competitive pay, growth opportunities, and a collaborative team environment.
- Why this job: Work on impactful systems in a fast-paced, research-led culture with top-tier professionals.
- Qualifications: 5+ years in data engineering, strong Python and SQL skills, and cloud tool experience required.
- Other info: Ideal for those passionate about AI/ML and data-driven solutions.
The predicted salary is between £43,200 and £72,000 per year.
We're working with a global trading technology firm at the forefront of AI and data-driven innovation. As they expand their Data & AI capabilities, they're looking for a Senior Data Engineer to help design and scale the infrastructure that powers their trading and research platforms.
This is a hands-on role suited to someone who enjoys building robust systems in a fast-paced, research-led environment, and who's comfortable working across teams to deliver high-quality data solutions.
Key Responsibilities:
- Build and Scale Data Infrastructure: Develop and maintain scalable data pipelines to support AI, analytics, and trading systems, particularly for time-series, unstructured, and text-heavy datasets.
- Collaborate with AI and Research Teams: Partner with data scientists and researchers to support model development, fine-tune data retrieval processes, and operationalise RAG systems.
- Support Experimentation and Prototyping: Contribute to flexible data systems that enable rapid experimentation and smooth transitions from prototype to production.
- Automate and Optimise Workflows: Streamline ETL processes, reduce manual overhead, and improve the performance and reliability of data operations.
- Ensure Data Quality and Monitoring: Implement validation frameworks, monitoring tools, and alerting systems to maintain high data integrity and availability.
- Contribute to Best Practices: Help shape documentation standards, coding practices, and data governance processes.
Requirements:
- 5+ years of experience in data engineering, with a strong focus on building scalable data platforms.
- Proficiency in Python and modern data libraries (e.g. Pandas, PySpark, Dask).
- Strong SQL skills and experience with cloud-native data tools (AWS, GCP, or Azure).
- Hands-on experience with tools like Airflow, Spark, Kafka, or Snowflake.
- Experience working with unstructured data, NLP pipelines, and time-series databases.
- Familiarity with deploying AI/ML models and supporting MLOps workflows.
- Interest in or experience with Retrieval-Augmented Generation (RAG) systems.
- Strong communication skills and a collaborative, proactive mindset.
Nice to Have:
- Experience with LLM pipelines and vector databases (e.g. Pinecone, FAISS).
- Familiarity with data versioning and experiment tracking tools (e.g. DVC, MLflow).
- Background in supporting AI/ML research teams or trading environments.
On Offer:
- A role contributing to the development of next-generation AI and trading systems.
- Exposure to a high-calibre team of engineers, researchers, and data scientists.
- Competitive compensation and long-term growth potential in a fast-evolving space.
If you're a data engineer looking to work on impactful systems in a collaborative, forward-thinking environment, we'd love to hear from you.
Senior Data Engineer | Global Trading Technology Firm
Employer: Out in Science, Technology, Engineering, and Mathematics
Contact Detail:
Out in Science, Technology, Engineering, and Mathematics Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land Senior Data Engineer | Global Trading Technology Firm
✨Tip Number 1
Familiarise yourself with the latest trends in data engineering, particularly around scalable data platforms and cloud-native tools. This will not only help you understand the role better but also allow you to engage in meaningful conversations during interviews.
✨Tip Number 2
Network with professionals in the trading technology sector, especially those who work with AI and data solutions. Attend industry meetups or webinars to connect with potential colleagues and learn about their experiences, which can give you insights into the company culture.
✨Tip Number 3
Prepare to discuss your hands-on experience with relevant tools like Airflow, Spark, and Kafka. Be ready to share specific examples of how you've used these technologies to solve complex data challenges, as this will demonstrate your practical knowledge.
✨Tip Number 4
Showcase your collaborative mindset by thinking of ways you can contribute to cross-team projects. Highlight any past experiences where you've successfully partnered with data scientists or researchers, as this aligns well with the role's responsibilities.
We think you need these skills to ace Senior Data Engineer | Global Trading Technology Firm
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience in data engineering, particularly focusing on building scalable data platforms. Include specific projects where you've used Python, SQL, and cloud-native tools like AWS or GCP.
Craft a Compelling Cover Letter: In your cover letter, express your enthusiasm for the role and the company. Mention your experience with unstructured data and any relevant projects that demonstrate your ability to collaborate with AI and research teams.
Showcase Relevant Skills: Clearly outline your proficiency in modern data libraries and tools such as Airflow, Spark, and Kafka. If you have experience with NLP pipelines or MLOps workflows, make sure to highlight these as well.
Demonstrate Your Problem-Solving Abilities: Provide examples of how you've automated workflows or improved data operations in previous roles. This will show your capability to contribute to the optimisation of data systems in a fast-paced environment.
How to prepare for a job interview at Out in Science, Technology, Engineering, and Mathematics
✨Showcase Your Technical Skills
Be prepared to discuss your experience with Python, SQL, and cloud-native tools like AWS or GCP. Highlight specific projects where you've built scalable data platforms or worked with unstructured data, as this will demonstrate your hands-on expertise.
✨Emphasise Collaboration
Since the role involves working closely with AI and research teams, share examples of how you've successfully collaborated across teams in the past. This could include joint projects, problem-solving sessions, or any initiatives that required teamwork.
✨Discuss Data Quality Practices
Prepare to talk about how you ensure data integrity and availability. Mention any frameworks or monitoring tools you've implemented in previous roles to maintain high data quality, as this is crucial for the position.
✨Demonstrate Your Problem-Solving Approach
Be ready to discuss how you approach experimentation and prototyping. Share examples of how you've contributed to flexible data systems and streamlined workflows, particularly in fast-paced environments, to show your adaptability and innovative thinking.