At a Glance
- Tasks: Dive deep into messy energy market data using Python to make it usable.
- Company: Join a remote-first team focused on innovative data solutions.
- Benefits: Enjoy competitive pay, paid vacation, and endless learning opportunities.
- Other info: Flexible work hours with a focus on collaboration and professional growth.
- Why this job: Make a real impact by transforming complex data into actionable insights.
- Qualifications: Strong Python and SQL skills, with experience in data cleaning and mapping.
The predicted salary is between £50,000 and £65,000 per year.
We need someone who understands data deeply and uses Python to wrangle it — not a platform engineer, not a pure pipeline builder, but a data specialist who is comfortable with research, investigation, and the unglamorous work of making messy energy market data actually usable. You’ll spend significant time on tasks like mapping BM units to power plants and fuel types, reconciling legacy data formats with current ones, ensuring consistency between different Elexon message types, and cleaning time-series data (outliers, gaps, overlaps). Some of this requires genuine investigation — cross-referencing sources, making judgment calls, documenting edge cases. There’s no API that solves these problems for you.
Python is your primary tool (Pandas, NumPy, standard libraries) to minimise manual effort, but you should be comfortable that some detective work is unavoidable. If you find satisfaction in truly understanding a dataset’s structure and quirks — rather than just piping data through and hoping for the best — this role is for you.
Data Mapping and Research
- Map BM units from Elexon to their corresponding power plants, substations, and fuel types — combining API data, public registers, and manual research
- Map substations to ETYS zones and grid supply points
- Build and maintain reference/master datasets that link identifiers across disparate sources (Elexon, National Grid ESO, TEC register, etc.)
- Document mappings, assumptions, and known limitations clearly for downstream users
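As a rough illustration of the mapping work, here is a minimal pandas sketch that joins hypothetical BM-unit records against a hand-maintained reference table and flags anything unmapped for follow-up research. The unit IDs, column names, and values are all illustrative, not real Elexon output:

```python
import pandas as pd

# Hypothetical BM-unit records as they might come back from an API call.
bm_units = pd.DataFrame({
    "bm_unit_id": ["T_DRAXX-1", "T_DRAXX-2", "E_UNKNOWN-1"],
    "lead_party": ["Drax Power Ltd", "Drax Power Ltd", "Example Energy"],
})

# Hand-maintained reference table built from public registers and manual research.
reference = pd.DataFrame({
    "bm_unit_id": ["T_DRAXX-1", "T_DRAXX-2"],
    "plant_name": ["Drax", "Drax"],
    "fuel_type": ["BIOMASS", "BIOMASS"],
})

# Left-join and flag unmapped units; indicator=True records whether each
# row found a match in the reference table ("both" vs "left_only").
mapped = bm_units.merge(reference, on="bm_unit_id", how="left", indicator=True)
unmapped = mapped[mapped["_merge"] == "left_only"]
print(unmapped["bm_unit_id"].tolist())  # units needing manual research
```

The `indicator` column is also a natural place to start the documentation the role asks for: every `left_only` row is a known limitation until someone investigates it.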
Data Reconciliation and Cleaning
- Reconcile legacy data formats with current formats (e.g., historical operational data stored in different schemas or granularities)
- Ensure consistency between different Elexon message types — understand the market data structure well enough to know why BOALF, BOD, and DISBSAD might not perfectly align and how to handle it
- Investigate discrepancies between data sources and determine authoritative values
- Clean time-series data: detect outliers (price spikes, meter errors), fill gaps appropriately, resolve overlapping or duplicate timestamps
- Develop reusable Python-based cleaning routines that can be applied across datasets
- Understand why data quality issues occur (settlement reruns, late submissions, format changes), not just patch them
Data Engineering
- Write and maintain Python data grabbers for energy market APIs
- Build dbt models to transform raw data into clean, analysis-ready datasets
- Orchestrate workflows via GitHub Actions
- Design PostgreSQL schemas that reflect your understanding of the domain
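The time-series cleaning described above might look something like the following pandas sketch, using made-up half-hourly prices to show duplicate-timestamp resolution, spike flagging, gap detection via reindexing, and interpolation. The spike threshold and fill limit are illustrative judgment calls of exactly the kind the role expects you to document:

```python
import pandas as pd

# Hypothetical half-hourly price series with a duplicate timestamp,
# an implausible spike, and a missing settlement period.
idx = pd.to_datetime([
    "2024-01-01 00:00", "2024-01-01 00:30", "2024-01-01 00:30",  # duplicate
    "2024-01-01 01:30", "2024-01-01 02:00",                      # 01:00 missing
])
prices = pd.Series([45.0, 46.0, 46.0, 9999.0, 48.0], index=idx,
                   name="price_gbp_mwh")

def clean_half_hourly(s: pd.Series, spike_threshold: float = 1000.0) -> pd.Series:
    # 1. Resolve duplicate timestamps: keep the last value
    #    (e.g. assume it reflects a settlement rerun).
    s = s[~s.index.duplicated(keep="last")]
    # 2. Flag implausible spikes as missing rather than silently keeping them.
    s = s.mask(s.abs() > spike_threshold)
    # 3. Reindex onto a regular half-hourly grid so gaps become explicit NaNs...
    full_idx = pd.date_range(s.index.min(), s.index.max(), freq="30min")
    s = s.reindex(full_idx)
    # 4. ...then fill short gaps (at most two periods) by time interpolation.
    return s.interpolate(method="time", limit=2)

clean = clean_half_hourly(prices)
```

Wrapping the steps in one reusable function is the point: the same routine can be applied across datasets, and its assumptions live in one documented place.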
Requirements
- Strong Python skills for data work — you’re fluent with pandas, comfortable writing clean, testable code, and can build reusable data processing logic. This is not an Excel role
- Solid SQL skills — complex queries, window functions, CTEs in PostgreSQL
- Experience with messy, real-world data — you’ve done reconciliation, cleaning, or mapping work before and understand it’s not always automatable
- Methodical and detail-oriented — you notice inconsistencies and want to understand root causes
- Good documentation habits — you know that undocumented mappings and assumptions are technical debt
- Self-directed — you can own ambiguous problems, do your own research, and communicate findings clearly
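For a flavour of the SQL level expected, here is a CTE combined with a window function. It runs against in-memory SQLite purely so the snippet is self-contained; the role itself targets PostgreSQL, where the same query shape applies. The table and column names are invented:

```python
import sqlite3

# Build a tiny in-memory table of settlement-period prices.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE prices (settlement_date TEXT, period INTEGER, price REAL);
    INSERT INTO prices VALUES
        ('2024-01-01', 1, 45.0),
        ('2024-01-01', 2, 50.0),
        ('2024-01-01', 3, 43.0);
""")

# CTE ranks periods by price within each settlement date,
# then the outer query picks the most expensive period.
query = """
WITH ranked AS (
    SELECT settlement_date,
           period,
           price,
           RANK() OVER (PARTITION BY settlement_date ORDER BY price DESC)
               AS price_rank
    FROM prices
)
SELECT period, price FROM ranked WHERE price_rank = 1;
"""
top_period = conn.execute(query).fetchone()
print(top_period)  # the highest-priced settlement period that day
```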
Nice to have
- Experience with energy, utilities, or market data (any geography)
- Familiarity with UK energy markets, Elexon data, or grid operations
- dbt experience for transformation pipelines
- Exposure to time-series data challenges (irregular timestamps, gaps, restatements)
We value candidates who can build software using agentic AI coding systems. This is fundamentally different from using code completion tools or chat-based assistants.
What we’re NOT looking for
- GitHub Copilot (code completion/autocomplete)
- ChatGPT or similar chat interfaces for generating isolated code snippets
- Any tool that only provides single-turn question/answer interactions
What we’re looking for
- Hands-on experience with agentic coding systems such as Claude Code, Codex (OpenAI's agentic coding tool), Open Code, or Cursor
Ideal candidates will demonstrate:
- Breadth of experience — proficiency with at least 2 agentic systems (experience with only one is insufficient)
- End-to-end development — ability to design and build software from the ground up using these tools, not just generating isolated snippets
- Multi-agent orchestration — demonstrated experience orchestrating multiple agents using skills, tools, and agent coordination, not just one-shot problem solving
- Deep system knowledge — familiarity with hooks, permission systems, MCP (Model Context Protocol) servers, custom skills and tool definitions, and context management
This role is not for
- Platform/infrastructure engineers who prefer to stay above the data layer
- People who expect clean, well-documented data as input
- Those uncomfortable with research, ambiguity, or "manual" investigation work
- Remote-first with async collaboration (Slack, GitHub, documented decisions)
- Core overlap with UK business hours expected (at least 4 hours daily)
- Competitive compensation based on location and experience
- Plenty of opportunities for learning and professional growth
- B2B contract with paid vacation
Senior Data Specialist employer: Alex Staff
Contact Details:
Alex Staff Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Data Specialist role
✨Tip Number 1
Get your networking game on! Connect with professionals in the energy and data sectors on LinkedIn. Join relevant groups, participate in discussions, and don’t hesitate to reach out for informational chats. You never know who might have a lead on that perfect Senior Data Specialist role!
✨Tip Number 2
Show off your Python skills! Create a GitHub repository showcasing your projects, especially those involving data cleaning and mapping. This not only demonstrates your technical abilities but also gives potential employers a glimpse into your problem-solving process.
✨Tip Number 3
Prepare for interviews by brushing up on your knowledge of Elexon data and UK energy markets. Be ready to discuss how you’ve tackled messy datasets in the past. Use real examples to illustrate your detective work and how you’ve made data usable.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen. Plus, we love candidates who take the initiative to engage directly with us. Let’s make your next career move a reality!
Some tips for your application 🫡
Show Off Your Python Skills: Make sure to highlight your Python expertise in your application. We want to see how you’ve used libraries like Pandas and NumPy to tackle messy data. Share specific examples of projects where you’ve cleaned or transformed data using Python.
Be Detail-Oriented: We love candidates who pay attention to the nitty-gritty details. In your application, mention any experience you have with data reconciliation or cleaning, and how you approached inconsistencies. This shows us you’re methodical and ready to dive deep into data challenges.
Document Your Process: Good documentation habits are key for us. When applying, include examples of how you’ve documented your work in the past. Whether it’s mapping assumptions or detailing your data cleaning processes, we want to see that you value clear communication.
Apply Through Our Website: Don’t forget to apply through our website! It’s the best way for us to keep track of your application and ensure it gets the attention it deserves. Plus, it makes the whole process smoother for both of us!
How to prepare for a job interview at Alex Staff
✨Know Your Data Inside Out
Before the interview, dive deep into the types of data you'll be working with. Familiarise yourself with energy market data, Elexon message types, and common data quality issues. Being able to discuss specific examples of how you've tackled messy data in the past will show your expertise.
✨Show Off Your Python Skills
Prepare to demonstrate your Python prowess during the interview. Brush up on using libraries like Pandas and NumPy, and be ready to discuss how you've built reusable data processing logic. You might even want to bring a small project or code snippet that showcases your skills.
✨Be Ready for Detective Work
This role requires a bit of sleuthing, so come prepared to talk about how you approach data investigation. Think of examples where you've had to cross-reference sources or document edge cases. Highlight your methodical approach to problem-solving and your attention to detail.
✨Communicate Clearly and Document Well
Good documentation habits are crucial. Be prepared to discuss how you document mappings, assumptions, and known limitations. During the interview, practice explaining complex concepts in simple terms, as clear communication is key in this role.