At a Glance
- Tasks: Build and optimise data pipelines using Prophecy for large-scale data workloads.
- Company: Join a cutting-edge tech company focused on innovative data solutions.
- Benefits: Enjoy flexible work options, competitive pay, and opportunities for professional growth.
- Why this job: Be part of a dynamic team shaping the future of data engineering with impactful projects.
- Qualifications: 2+ years with Prophecy and 5+ years in data engineering required.
- Other info: Experience with modern data lakehouse concepts is a plus.
The predicted salary is between £48,000 and £72,000 per year.
Key Responsibilities:
- Build and optimise Prophecy data pipelines for large-scale batch and streaming data workloads using PySpark.
- Define end-to-end data architecture leveraging Prophecy integrated with Databricks, Spark, or other cloud-native compute engines.
- Establish coding standards, reusable components, and naming conventions using Prophecy's visual designer and metadata-driven approach.
- Implement scalable and efficient data models (e.g. star schema, SCD Type 2) for data marts and the analytics layer.
- Integrate Prophecy pipelines with orchestration tools like Airflow and data catalog tools for lineage.
- Implement version control, automated testing, and deployment pipelines using Git and CI/CD (e.g. GitHub and Jenkins).
- Monitor and tune the performance of Spark jobs; optimise data partitioning and caching strategies.
- Convert legacy ETL workloads from tools such as DataStage and Informatica into Prophecy pipelines using Prophecy's Transpiler component.
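The SCD Type 2 modelling mentioned above is normally implemented in a Prophecy/Databricks pipeline as a Delta Lake MERGE; as a rough orientation, here is a minimal pure-Python sketch of the underlying logic (field names like `key`, `attrs`, and `is_current` are illustrative, not from any specific schema):

```python
from datetime import date

# Minimal SCD Type 2 sketch: each business key keeps its full history,
# with exactly one row per key flagged is_current. In a real pipeline
# this logic would run as a Delta Lake MERGE, not row-by-row Python.
def scd2_apply(dim_rows, incoming, today=None):
    today = today or date.today().isoformat()
    out = list(dim_rows)
    for rec in incoming:
        current = next((r for r in out
                        if r["key"] == rec["key"] and r["is_current"]), None)
        if current is None:
            # New key: open a fresh current row.
            out.append({**rec, "valid_from": today, "valid_to": None,
                        "is_current": True})
        elif current["attrs"] != rec["attrs"]:
            # Changed attributes: close the old row, open a new version.
            current["valid_to"] = today
            current["is_current"] = False
            out.append({**rec, "valid_from": today, "valid_to": None,
                        "is_current": True})
        # Unchanged records are left untouched, preserving history.
    return out
```

Being able to walk through this close-and-reopen pattern, and how it maps onto a MERGE statement, is a common interview topic for roles like this one.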
Required skills & experience:
- 2+ years of hands-on experience with Prophecy (using PySpark).
- 5+ years of experience in data engineering with tools such as Spark, Databricks, Scala/PySpark, or SQL.
- Strong understanding of ETL/ELT pipelines, distributed data processing and data lake architecture. Exposure to ETL tools such as Informatica, DataStage or Talend is an added advantage.
- Experience with Unity Catalog, Delta Lake and modern data lakehouse concepts.
- Strong communication and stakeholder management skills.
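The Spark partition tuning listed among the responsibilities often starts from a simple sizing rule of thumb: target partitions of roughly 128 MiB (mirroring Spark's default `spark.sql.files.maxPartitionBytes`), and never fewer partitions than available cores. A small sketch of that heuristic (the helper name and the exact numbers are illustrative, not a Prophecy-specific rule):

```python
# Rough partition-count heuristic for Spark jobs: aim for partitions of
# about target_mb each, but keep at least one partition per core so all
# executors stay busy on small inputs.
def suggest_partitions(input_size_mb: int, total_cores: int,
                       target_mb: int = 128) -> int:
    by_size = -(-input_size_mb // target_mb)  # ceiling division
    return max(by_size, total_cores)
```

In a PySpark job this would feed something like `df.repartition(suggest_partitions(size_mb, cores))`; in practice you would refine it with the actual file format, compression ratio and skew of the data.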
ETL Developer (Prophecy) employer: PRACYVA
Contact Detail:
PRACYVA Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the ETL Developer (Prophecy) role
✨Tip Number 1
Familiarise yourself with Prophecy and its features, especially the visual designer and metadata-driven approach. Understanding how to build and optimise data pipelines using PySpark will give you a significant edge during interviews.
✨Tip Number 2
Brush up on your knowledge of ETL/ELT processes and distributed data processing. Being able to discuss your experience with tools like Spark and Databricks in detail will demonstrate your expertise and confidence in the field.
✨Tip Number 3
Showcase your understanding of modern data lakehouse concepts, such as Delta Lake and Unity Catalog. This knowledge is increasingly important and can set you apart from other candidates.
✨Tip Number 4
Prepare to discuss your experience with version control and CI/CD practices, particularly with Git and Jenkins. Highlighting your ability to implement automated testing and deployment pipelines will be crucial for this role.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with Prophecy, PySpark, and other relevant tools mentioned in the job description. Use specific examples to demonstrate your skills in building data pipelines and optimising performance.
Craft a Compelling Cover Letter: In your cover letter, explain why you are interested in the ETL Developer position and how your background aligns with the responsibilities outlined. Mention your experience with data architecture and any relevant projects you've worked on.
Showcase Relevant Projects: If you have worked on projects involving data engineering, especially using Prophecy or similar tools, be sure to include these in your application. Describe your role, the technologies used, and the outcomes achieved.
Highlight Soft Skills: Since strong communication and stakeholder management skills are required, provide examples of how you've successfully collaborated with teams or managed stakeholders in previous roles. This will help demonstrate your fit for the company culture.
How to prepare for a job interview at PRACYVA
✨Showcase Your Technical Skills
Be prepared to discuss your hands-on experience with Prophecy and PySpark. Highlight specific projects where you've built or optimised data pipelines, and be ready to explain the architecture and design choices you made.
✨Understand Data Architecture
Familiarise yourself with end-to-end data architecture concepts, especially in relation to cloud-native compute engines like Databricks. Be ready to discuss how you would define and implement scalable data models such as star schemas.
✨Demonstrate Problem-Solving Skills
Prepare to tackle hypothetical scenarios related to performance tuning of Spark jobs or integrating Prophecy pipelines with orchestration tools like Airflow. Show your thought process and how you approach problem-solving in data engineering.
✨Communicate Effectively
Strong communication skills are essential for this role. Practice articulating your thoughts clearly and concisely, especially when discussing technical concepts. Be ready to engage with stakeholders and explain complex ideas in an understandable way.