At a Glance
- Tasks: Design and build data pipelines while collaborating with a dynamic team.
- Company: Forward-thinking company with a focus on innovation and collaboration.
- Benefits: Hybrid work, flexible locations, and opportunities for continuous improvement.
- Other info: Mentorship opportunities and a supportive DevOps environment.
- Why this job: Make an impact in data-driven applications and enhance your technical skills.
- Qualifications: Strong Python experience, knowledge of SQL, and familiarity with AWS services.
The predicted salary is between £45,000 and £55,000 per year.
Requirements
- Strong experience in Python and data processing with Apache Spark (a short illustrative PySpark sketch follows this list)
- Knowledge of SQL and familiarity with PySpark
- Experience using Apache Airflow for task orchestration
- Understanding of Amazon EMR and the ability to review its output logs
- Proficiency in using Jupyter notebooks and/or Amazon Athena for data querying
- Skills in data analysis to identify root causes of issues
- Understanding of dimensional data models and historical data capture
- Familiarity with AWS console and services (CloudWatch, IAM, S3, Glue, EC2, etc.)
- Knowledge of Docker and containerising solutions
- Experience with Infrastructure as Code (IaC) using Terraform
- Understanding of both server-side and client-side encryption
- Code management experience using GitLab for CI/CD
- Active BPSS or SC clearance, or eligibility for clearance
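To give a flavour of the Spark work listed above, here is a minimal, hypothetical PySpark sketch of a batch cleansing step. The bucket, paths, and column names are illustrative assumptions, not details from this posting.

```python
# Minimal, hypothetical PySpark job: read raw CSV data, clean it,
# and publish it as partitioned Parquet. All paths and column names
# are illustrative assumptions, not details from this job posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-ingest").getOrCreate()

# Read a raw CSV extract from S3 (hypothetical bucket/prefix).
raw = spark.read.csv(
    "s3://example-bucket/raw/events/", header=True, inferSchema=True
)

# Basic cleansing: drop rows missing a key, normalise the timestamp column.
clean = (
    raw.dropna(subset=["event_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
)

# Publish as Parquet, partitioned by date, for downstream querying
# (e.g. via Amazon Athena or a Glue catalogue table).
clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/events/"
)

spark.stop()
```

Partitioning the published Parquet by date is one common way to keep downstream Athena and Glue queries efficient, which lines up with the querying tools mentioned above.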
Responsibilities
- Design, build, and operate data ingest and publishing pipelines
- Implement workflow orchestration and task scheduling using managed services (see the Airflow sketch after this list)
- Collaborate with Product Owners, Business Analysts, and users to shape technical solutions
- Provide production support and monitoring, and enhance system resilience, stability, and performance
- Conduct data analysis to identify root causes of defects and operational issues
- Work closely with DevOps to support automated deployments and infrastructure management
- Coach and mentor junior engineers and promote engineering best practices
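For the orchestration responsibilities above, a minimal Airflow sketch of an ingest-then-publish schedule might look like this. The DAG id, schedule, and placeholder callables are hypothetical, and the `schedule` argument assumes Airflow 2.4+.

```python
# Minimal, hypothetical Airflow DAG: schedule an ingest task followed
# by a publish task. DAG id, schedule, and callables are illustrative
# assumptions, not details from this job posting.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest():
    # Placeholder for the ingest step (e.g. triggering a Spark/EMR job).
    print("ingesting raw data")


def publish():
    # Placeholder for the publish step (e.g. writing curated outputs).
    print("publishing curated data")


with DAG(
    dag_id="example_ingest_publish",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    publish_task = PythonOperator(task_id="publish", python_callable=publish)

    # The >> operator declares the dependency: ingest runs before publish.
    ingest_task >> publish_task
```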
We are a forward-thinking company offering a hybrid work arrangement with flexible locations, including London, Leeds, Newcastle, and more. Our team focuses on innovative development in data-driven applications while maintaining a collaborative DevOps environment. We value continuous improvement, technical leadership, and the mentoring of junior colleagues. Come join us in providing robust technical solutions while contributing to the success of our projects.
Employer: Peregrine
Contact details: Peregrine Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Software Developer SC / Newcastle role
✨Tip Number 1
Network like a pro! Reach out to current employees on LinkedIn or at industry events. A friendly chat can give you insider info and maybe even a referral!
✨Tip Number 2
Show off your skills! Create a GitHub repository with projects that highlight your Python, Spark, and AWS expertise. This gives us a chance to see your work in action!
✨Tip Number 3
Prepare for the interview by brushing up on your technical knowledge. Be ready to discuss your experience with tools like Apache Airflow and Terraform. We love seeing candidates who can talk the talk!
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you’re genuinely interested in joining our team!
Some tips for your application 🫡
Show Off Your Skills: Make sure to highlight your strong experience in Python and data processing with Apache Spark. We want to see how you’ve used these skills in real-world scenarios, so don’t hold back on the details!
Tailor Your Application: Take a moment to customise your application for this role. Mention your familiarity with SQL, PySpark, and any experience with Apache Airflow. This shows us you’re genuinely interested and have done your homework.
Be Clear and Concise: When writing your application, keep it clear and to the point. We appreciate straightforward communication, so avoid jargon unless it’s relevant to the role. Make it easy for us to see why you’re a great fit!
Apply Through Our Website: Don’t forget to apply through our website! It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it gives you a chance to explore more about what we do!
How to prepare for a job interview at Peregrine
✨Know Your Tech Inside Out
Make sure you brush up on your Python and Apache Spark skills. Be ready to discuss specific projects where you've used these technologies, especially in data processing. The more examples you can provide, the better!
✨Familiarise Yourself with the Tools
Get comfortable with SQL, PySpark, and Apache Airflow before the interview. If you can, practise using Jupyter notebooks and Amazon Athena for data querying. Showing that you can navigate these tools will impress the interviewers.
✨Understand the Company’s Needs
Research the company’s focus on data-driven applications and their collaborative DevOps environment. Think about how your experience aligns with their goals, especially in terms of enhancing system resilience and performance.
✨Prepare for Scenario Questions
Expect questions about real-world scenarios, like troubleshooting data issues or implementing workflow orchestration. Prepare to explain your thought process and how you would approach these challenges, showcasing your problem-solving skills.