At a Glance
- Tasks: Join us as a Data Engineer to build and optimise data pipelines.
- Company: We're a forward-thinking company based in Newcastle, embracing data-driven solutions.
- Benefits: Enjoy flexible work arrangements and a competitive salary with great perks.
- Why this job: Work with cutting-edge tech and a skilled team in a dynamic environment.
- Qualifications: Proficiency in Python, PySpark, and SQL is required, along with experience with big data frameworks.
- Other info: This is a full-time hybrid role with opportunities for growth.
The predicted salary is between £40,000 and £60,000 per year.
Location: Newcastle - Hybrid
Employment Type: Full-time
Salary: £50,000
About the Role:
We are looking for a Data Engineer to join our team and help build scalable data pipelines, optimise data workflows, and support analytics needs. If you have strong expertise in PySpark, Python, and SQL, and enjoy working with big data technologies, this is the role for you!
Key Responsibilities:
- Design, develop, and maintain ETL/ELT pipelines using PySpark and SQL (an illustrative sketch follows this list)
- Optimise and process large-scale datasets from multiple sources
- Collaborate with data scientists, analysts, and engineers to ensure data availability and integrity
- Implement data governance, security, and quality best practices
- Work with cloud platforms like AWS, Azure, or GCP for data processing and storage
- Optimise performance for large-scale distributed computing
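To give a flavour of the pipeline work described above, here is a minimal, illustrative PySpark ETL sketch. The source path, column names, and output location are hypothetical examples chosen for the illustration, not details taken from this listing.

```python
# Minimal illustrative PySpark ETL sketch; paths, columns, and locations are
# hypothetical examples, not details from this listing.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw data from a (hypothetical) source location
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

# Transform: cast types, drop bad rows, aggregate per customer and day
orders = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_date"))
       .filter(F.col("amount").isNotNull())
)
daily_totals = (
    orders.groupBy("customer_id", "order_date")
          .agg(F.sum("amount").alias("total_amount"),
               F.count("*").alias("order_count"))
)

# Load: write the result partitioned by date for downstream analytics
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_totals/"
)

spark.stop()
```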
Required Skills & Experience:
- Proficient in Python & PySpark for data processing and transformation
- Strong knowledge of SQL and experience with databases like PostgreSQL, MySQL, or Snowflake
- Experience working with Big Data frameworks (Spark, Hadoop)
- Familiarity with Cloud Data Services (AWS Glue, Databricks, BigQuery, Redshift, etc.)
- Knowledge of data modelling, warehousing, and ETL best practices
- Experience with CI/CD pipelines and version control (Git, Jenkins, etc.) is a plus
- Strong problem-solving skills and the ability to work in a fast-paced environment
Why Join Us?
- Work with cutting-edge technologies in a data-driven environment
- Opportunity to grow and work with a highly skilled team
- Flexible work arrangements (remote/hybrid options)
- Competitive salary & benefits package
Please apply with your CV in the first instance.
Data Engineer employer: 454059
Contact Details:
454059 Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role
✨Tip Number 1
Familiarise yourself with the specific tools and technologies mentioned in the job description, such as PySpark, SQL, and cloud platforms like AWS or Azure. Having hands-on experience or projects showcasing these skills can set you apart from other candidates.
✨Tip Number 2
Network with current or former employees of StudySmarter on platforms like LinkedIn. Engaging with them can provide insights into the company culture and the team dynamics, which can be beneficial during interviews.
✨Tip Number 3
Prepare to discuss your problem-solving skills and how you've tackled challenges in previous roles. Be ready to share specific examples that demonstrate your ability to work in a fast-paced environment, as this is a key requirement for the role.
✨Tip Number 4
Stay updated on the latest trends and advancements in data engineering and big data technologies. Being knowledgeable about industry developments can help you engage in meaningful conversations during interviews and show your passion for the field.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with PySpark, Python, SQL, and any big data technologies you've worked with. Use specific examples to demonstrate your skills in building scalable data pipelines and optimising workflows.
Craft a Strong Cover Letter: In your cover letter, express your enthusiasm for the Data Engineer role and how your background aligns with the key responsibilities. Mention your familiarity with cloud platforms like AWS or Azure and your experience with data governance and quality best practices.
Showcase Relevant Projects: If you have worked on relevant projects, either professionally or as part of your studies, include them in your application. Describe the challenges you faced, the technologies you used, and the outcomes of your work.
Proofread Your Application: Before submitting, carefully proofread your CV and cover letter for any spelling or grammatical errors. A polished application reflects your attention to detail, which is crucial for a Data Engineer role.
How to prepare for a job interview at 454059
✨Showcase Your Technical Skills
Be prepared to discuss your experience with PySpark, Python, and SQL in detail. Bring examples of projects where you've built ETL/ELT pipelines or optimised data workflows, as this will demonstrate your hands-on expertise.
✨Understand the Company’s Data Needs
Research the company’s data infrastructure and analytics requirements. Being able to articulate how your skills can directly benefit their operations will show that you’re genuinely interested in the role and understand their challenges.
✨Prepare for Problem-Solving Questions
Expect to face technical problem-solving scenarios during the interview. Brush up on your knowledge of big data frameworks and be ready to explain your thought process when tackling complex data issues.
✨Demonstrate Collaboration Skills
Since the role involves working closely with data scientists and analysts, be ready to discuss your experience in collaborative environments. Share examples of how you’ve successfully worked in teams to ensure data integrity and availability.