ETL Developer

Temporary · £48,000 – £72,000 per year (est.) · Hybrid (2–3 days onsite)

At a Glance

  • Tasks: Design and develop data processing pipelines using Python and Spark for large-scale data.
  • Company: Join a dynamic team focused on innovative data solutions in a hybrid work environment.
  • Benefits: Enjoy a flexible hybrid work model with opportunities for growth and collaboration.
  • Why this job: Be part of exciting projects that impact data integration and cloud technologies.
  • Qualifications: Proficient in Python, Big Data, and basic DevOps; experience with Azure is a plus.
  • Other info: Contract position with potential for extension; ideal for tech-savvy problem solvers.

The predicted salary is between £48,000 and £72,000 per year.

Job Title: ETL Developer (Python / Big Data Engineer)

Location: London (hybrid)

Mode of Work: Hybrid, with 2–3 days per week onsite at the customer office.

Type: Contract (Initial duration 6 months)

Job Description:

We are looking for an experienced ETL Developer with a strong background in Python development and Big Data technologies to join our team. As an ETL Developer, you will be responsible for the design, development, and implementation of data processing pipelines using Python, Spark, and related technologies to handle large-scale data efficiently. You will also ensure the integration of data into cloud environments such as Azure, and handle basic DevOps tasks, drawing on solid RDBMS fundamentals.

Responsibilities:

  • Develop and maintain ETL pipelines using Python for data extraction, transformation, and loading.
  • Utilize Apache Spark for big data processing to handle large datasets and optimize performance.
  • Work with cloud technologies, particularly Azure, to deploy and integrate data solutions.
  • Implement key Python concepts and leverage libraries/packages like Pandas, NumPy, and others for data manipulation.
  • Perform data integration tasks involving various data sources and structures.
  • Collaborate with cross-functional teams to design and implement robust, scalable data solutions.
  • Apply basic DevOps practices to manage and automate workflows within the ETL process.
  • Ensure best practices in database management and integration with RDBMS systems.
  • Participate in troubleshooting, optimization, and performance tuning of data processing systems.
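As an illustrative sketch only (not part of the role description), the extract-transform-load flow covered by the responsibilities above can be shown in plain Python with the standard library's csv and sqlite3 modules; the input data, table name, and schema below are hypothetical:

```python
import csv
import io
import sqlite3

# Hypothetical raw input; a real pipeline would read from files, APIs, or queues.
RAW_CSV = """id,name,amount
1,alice, 100.5
2,bob,
3,carol,250.0
"""

def extract(source: str) -> list[dict]:
    """Extract: parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(source)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: strip whitespace, cast types, drop rows missing an amount."""
    cleaned = []
    for row in rows:
        amount = row["amount"].strip()
        if not amount:
            continue  # skip incomplete records
        cleaned.append((int(row["id"]), row["name"].strip(), float(amount)))
    return cleaned

def load(records: list[tuple], conn: sqlite3.Connection) -> None:
    """Load: insert the cleaned records into an RDBMS table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS payments (id INTEGER, name TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO payments VALUES (?, ?, ?)", records)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM payments").fetchone()
print(total)  # (2, 350.5) -- bob's row is dropped for its missing amount
```

At production scale the same three-stage structure carries over, with Spark replacing the in-process loop and Azure storage or databases replacing the in-memory SQLite target.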

Required Skills and Experience:

  • Proficient in Python with hands-on experience in key libraries (Pandas, NumPy, etc.) and a deep understanding of Python programming concepts.
  • Solid experience in Big Data Processing using Apache Spark for large-scale data handling.
  • Basic DevOps knowledge and familiarity with CI/CD pipelines for automating workflows.
  • Understanding of Azure Fundamentals and cloud data solutions.
  • Strong understanding of RDBMS database fundamentals (SQL, relational data modelling, etc.).
  • Previous experience in ETL development and data integration.
  • Senior/Lead-level experience with hands-on development in relevant technologies.
  • Excellent problem-solving skills and ability to optimize data workflows.
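The Pandas/NumPy skills listed above typically show up in the transform stage of a pipeline. A minimal, self-contained sketch (column names and figures are hypothetical, not from this posting):

```python
import numpy as np
import pandas as pd

# Hypothetical sales data; in practice this would be read from a file or database.
sales = pd.DataFrame({
    "region": ["north", "south", "north", "south"],
    "units": [10, 5, 8, 12],
    "unit_price": [2.0, 3.0, 2.0, 3.0],
})

# Vectorised, NumPy-backed arithmetic: derive a revenue column,
# then aggregate per region -- a typical ETL "transform" step.
sales["revenue"] = np.round(sales["units"] * sales["unit_price"], 2)
summary = sales.groupby("region", as_index=False)["revenue"].sum()
print(summary.to_dict("records"))
# [{'region': 'north', 'revenue': 36.0}, {'region': 'south', 'revenue': 51.0}]
```

Vectorised column operations like these avoid per-row Python loops, which is the usual first step when optimising data workflows before reaching for Spark.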

Additional Desirable Skills:

  • Familiarity with cloud-based data storage and processing technologies in Azure.
  • Experience working in Agile or other collaborative development environments.

#ETL #contract #work #London #developer #hybrid

ETL Developer employer: Ntrinsic Consulting

Join a forward-thinking company that values innovation and collaboration, offering a hybrid work environment that allows you to balance your professional and personal life effectively. With a strong focus on employee growth, we provide opportunities for skill enhancement in cutting-edge technologies like Python and Big Data, while fostering a supportive culture that encourages teamwork and creativity. Located in London, our office is a hub of activity where you can engage with cross-functional teams and contribute to impactful data solutions.

Contact Detail:

Ntrinsic Consulting Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the ETL Developer role

✨Tip Number 1

Make sure to showcase your hands-on experience with Python and libraries like Pandas and NumPy in your discussions. Highlight specific projects where you've implemented these technologies to solve real-world problems.

✨Tip Number 2

Familiarize yourself with Apache Spark and be ready to discuss how you've used it for big data processing. Prepare examples of how you've optimized performance in previous ETL pipelines.

✨Tip Number 3

Since the role involves working with Azure, brush up on your knowledge of cloud technologies and be prepared to explain how you've integrated data solutions in cloud environments in the past.

✨Tip Number 4

Demonstrate your understanding of basic DevOps practices and CI/CD pipelines. Be ready to discuss how you've automated workflows in your previous roles to improve efficiency.

We think you need these skills to ace the ETL Developer role

  • Proficient in Python
  • Experience with Pandas and NumPy
  • Big Data Processing using Apache Spark
  • Understanding of Azure Fundamentals
  • Basic DevOps knowledge
  • Familiarity with CI/CD pipelines
  • Strong understanding of RDBMS fundamentals
  • ETL development experience
  • Data integration skills
  • Problem-solving skills
  • Performance tuning of data processing systems
  • Collaboration in cross-functional teams
  • Agile development experience

Some tips for your application 🫡

Highlight Relevant Experience: Make sure to emphasize your experience with Python, Apache Spark, and ETL development in your application. Use specific examples from your past work that demonstrate your ability to handle large-scale data processing.

Showcase Cloud Knowledge: Since the role involves working with Azure, include any relevant experience you have with cloud technologies. Mention specific projects where you deployed or integrated data solutions in a cloud environment.

Detail Your DevOps Understanding: Include any knowledge or experience you have with basic DevOps practices and CI/CD pipelines. This will show that you can manage and automate workflows effectively within the ETL process.

Tailor Your Application: Customize your CV and cover letter to align with the job description. Use keywords from the job posting, such as 'data integration', 'RDBMS fundamentals', and 'performance tuning', to make your application stand out.

How to prepare for a job interview at Ntrinsic Consulting

✨Showcase Your Python Skills

Be prepared to discuss your experience with Python, especially focusing on libraries like Pandas and NumPy. You might be asked to solve a coding problem or explain how you've used these tools in past projects.

✨Demonstrate Big Data Knowledge

Highlight your experience with Apache Spark and how you've handled large datasets. Be ready to discuss specific challenges you faced and how you optimized performance in your ETL processes.

✨Familiarize Yourself with Azure

Since the role involves cloud technologies, brush up on your knowledge of Azure. Be prepared to talk about any projects where you've deployed data solutions in a cloud environment.

✨Understand DevOps Practices

Even if your focus is on ETL development, having a basic understanding of DevOps and CI/CD pipelines will be beneficial. Discuss any experience you have with automating workflows and managing ETL processes.
