At a Glance
- Tasks: Shape and deliver a modern data strategy while building scalable data pipelines.
- Company: Ambitious, values-led organisation focused on using data for good.
- Benefits: Competitive salary, fully remote work, and opportunities for technical growth.
- Why this job: Make a genuine impact by turning data into actionable insights.
- Qualifications: 2+ years in data engineering with strong SQL and cloud experience.
- Other info: Supportive environment with real runway for personal and professional development.
The predicted salary is between £36,000 and £60,000 per year.
A high-visibility opportunity to join an ambitious, values-led organisation as it refreshes its data strategy and modernises its intelligence platform. You’ll be trusted early, work closely with stakeholders, and help build the foundations that drive better insight, smarter decisions, and genuine impact, using data for good.
This role is well suited to someone early in their data engineering journey, with around two or more years’ experience, who’s ready to step up. You’ll join a supportive, encouraging environment with real runway to grow technically, gradually taking on more ownership and leadership as your influence across the business increases.
What you’ll be doing:
- Helping shape and deliver a refreshed data strategy and modern analytics platform
- Building reliable, scalable ELT/ETL pipelines into a cloud data warehouse such as Snowflake, Databricks, or similar (illustrated in the sketch after this list)
- Designing and optimising core data models that are dimensional, analytics-ready, and built to last
- Creating trusted data products that enable self-service analytics across the organisation
- Improving data quality, monitoring, performance, and cost efficiency
- Partnering with analysts, BI teams, and non-technical stakeholders to turn questions into robust data assets
- Contributing to engineering standards, best practice, and reusable frameworks
- Supporting responsible AI tooling, including programmatic LLM workflows where appropriate
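To give a flavour of the pipeline work described above, here is a minimal sketch of an ELT-style load step in Python. The API endpoint, table name, credentials, and warehouse settings are hypothetical placeholders rather than anything specified in the listing, and it assumes the `requests` and `snowflake-connector-python` packages; transformation and modelling would happen downstream in SQL/dbt.

```python
# Minimal ELT sketch: extract JSON records from a source API and load them
# into a raw landing table, leaving transformation to SQL/dbt downstream.
# The endpoint, credentials, and table name below are hypothetical.
import json
import os

import requests
import snowflake.connector

API_URL = "https://api.example.com/v1/orders"  # hypothetical source


def extract() -> list[dict]:
    """Pull raw records from the source API."""
    response = requests.get(API_URL, timeout=30)
    response.raise_for_status()
    return response.json()


def load(records: list[dict]) -> None:
    """Insert raw records into a landing table, keeping the payload as text."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="LOAD_WH",
        database="RAW",
        schema="ORDERS",
    )
    cur = conn.cursor()
    try:
        cur.executemany(
            "INSERT INTO raw_orders (order_id, payload) VALUES (%(id)s, %(payload)s)",
            [{"id": r["id"], "payload": json.dumps(r)} for r in records],
        )
    finally:
        cur.close()
        conn.close()


if __name__ == "__main__":
    load(extract())
```

In practice a step like this would also need idempotent loads, retries, and data quality checks, which is where the monitoring and cost-efficiency responsibilities above come in.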
What you’ll bring:
- 2+ years’ experience in data engineering within a modern data stack
- Strong SQL with a solid foundation in data modelling
- Python (preferred) or a similar language for pipeline development and automation
- Cloud experience across AWS, Azure, or GCP
- Familiarity with orchestration and analytics engineering tools such as dbt, Airflow, or equivalents (see the orchestration sketch after this list)
- Good habits around governance, security, documentation, version control (Git), and CI/CD
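For context on the orchestration side, here is a minimal sketch of how a daily pipeline might be scheduled, assuming Airflow 2.x with the TaskFlow API; the DAG id, schedule, and task bodies are hypothetical placeholders, not requirements from the listing.

```python
# Minimal orchestration sketch: a daily DAG that runs an extract-and-load
# step and reports the row count. All names and schedules are hypothetical.
from datetime import datetime

from airflow.decorators import dag, task


@dag(
    dag_id="orders_elt",          # hypothetical pipeline name
    schedule="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=False,
    tags=["elt", "example"],
)
def orders_elt():
    @task
    def extract_and_load() -> int:
        # In a real pipeline this would call the load step (see the earlier
        # sketch) and return how many records were written.
        records = [{"id": 1}, {"id": 2}]  # placeholder data
        return len(records)

    @task
    def report(count: int) -> None:
        print(f"Loaded {count} raw records")

    report(extract_and_load())


orders_elt()
```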
The kind of person who thrives here:
Confident, curious, and motivated. You care about doing things properly, enjoy being trusted and visible in the business, and are genuinely interested in using data to create positive outcomes.
Fully remote. Visa sponsorship is not available, and the Post-Graduate Visa is not supported.
Interested? Apply now.
Data Engineer - Remote in Lincoln | Employer: Datatech Analytics
Contact Details:
Datatech Analytics Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer - Remote in Lincoln role
✨Tip Number 1
Network like a pro! Reach out to folks in the industry on LinkedIn or join relevant online communities. We can’t stress enough how valuable personal connections can be in landing that dream job.
✨Tip Number 2
Prepare for those interviews! Research the company and its data strategy, and think about how your skills can contribute. We recommend practising common interview questions and even doing mock interviews with friends.
✨Tip Number 3
Showcase your projects! If you’ve built any cool data pipelines or analytics tools, make sure to highlight them. We love seeing real-world applications of your skills, so don’t hold back!
✨Tip Number 4
Apply through our website! It’s the best way to ensure your application gets seen. Plus, we’re always on the lookout for passionate data engineers ready to make an impact.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV reflects the skills and experience mentioned in the job description. Highlight your 2+ years in data engineering and any relevant projects you've worked on, especially those involving cloud data stacks.
Craft a Compelling Cover Letter: Use your cover letter to tell us why you're excited about this role and how you can contribute to our data strategy. Share specific examples of how you've built reliable ELT/ETL pipelines or improved data quality in previous roles.
Showcase Your Technical Skills: Don’t forget to mention your strong SQL skills and any experience with Python or similar languages. If you've worked with tools like dbt or Airflow, make sure to include that too – we love seeing those details!
Apply Through Our Website: We encourage you to apply directly through our website for a smoother process. It helps us keep track of your application and ensures you don’t miss out on any updates from us!
How to prepare for a job interview at Datatech Analytics
✨Know Your Data Stack
Make sure you brush up on your knowledge of modern data stacks, especially the tools mentioned in the job description like Snowflake and Databricks. Be ready to discuss how you've used these technologies in your previous roles or projects.
✨Showcase Your SQL Skills
Since strong SQL skills are a must-have, prepare to demonstrate your proficiency. You might be asked to solve a problem or optimise a query during the interview, so practise common SQL challenges beforehand.
✨Prepare for Scenario Questions
Expect questions that assess your problem-solving abilities. Think about past experiences where you built ELT/ETL pipelines or improved data quality. Use the STAR method (Situation, Task, Action, Result) to structure your answers.
✨Emphasise Collaboration
This role involves working closely with analysts and non-technical stakeholders. Be prepared to share examples of how you've successfully partnered with others to turn complex data into actionable insights. Highlight your communication skills and ability to explain technical concepts in simple terms.