At a Glance
- Tasks: Design and build data processing pipelines using AWS services.
- Company: Be-IT is a dynamic tech company based in Edinburgh.
- Benefits: Competitive pay, flexible work options, and a chance to work with cutting-edge technology.
- Why this job: Join a forward-thinking team and enhance your skills in a rapidly evolving field.
- Qualifications: Experience with PySpark, Python, SQL, and AWS services is essential.
- Other info: Initial 6-month contract with potential for extension.
- Location: Edinburgh based, or with the ability to commute as required
- Contract: Initial 6-month contract
- Rate: £400–£450 per day, with some flexibility
- IR35 status: Outside IR35
- Start date: Mid to late January

Be-IT are looking for two Data Engineers to design and build a suite of data processing pipelines using native AWS services. You must have demonstrable experience of building complex systems to ingest and process large volumes of data using native AWS services. You should have significant experience with PySpark, Python, SQL and AWS services such as Glue, Step Functions and Redshift, along with CloudFormation/Terraform and CI/CD technologies. Exposure to logging, monitoring and data observability tooling would also be an advantage. Apply online via the Be-IT website for more information. …
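To give a concrete flavour of the kind of pipeline the role describes, here is a minimal PySpark sketch in the shape of an AWS Glue job. The bucket paths, column names and transformation steps are illustrative assumptions, not details from the posting:

```python
# Minimal sketch of a Glue/PySpark ingestion job of the kind described
# above. All paths, bucket names and columns are hypothetical.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Ingest raw events from S3 (hypothetical path).
raw = spark.read.json("s3://example-raw-bucket/events/")

# Basic cleansing with the PySpark DataFrame API.
cleaned = (
    raw.dropDuplicates(["event_id"])
       .filter(raw["event_type"].isNotNull())
)

# Write curated output back to S3 as partitioned Parquet; a downstream
# step (e.g. a Redshift COPY orchestrated by Step Functions) would load
# it from there.
cleaned.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-curated-bucket/events/"
)

job.commit()
```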
Data Engineer employer: Be-IT Resourcing
Contact Detail: Be-IT Resourcing Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role
✨Tip Number 1
Make sure to showcase your experience with AWS services prominently. Highlight specific projects where you've used Glue, Step Functions, and Redshift to demonstrate your hands-on skills.
✨Tip Number 2
Familiarize yourself with the latest trends in data engineering, especially around PySpark and Python. Being able to discuss recent advancements or challenges in these areas can set you apart during interviews.
✨Tip Number 3
Prepare to discuss your experience with CI/CD technologies. Be ready to explain how you've implemented these practices in past projects to improve deployment efficiency and reliability.
✨Tip Number 4
If you have experience with logging, monitoring, and data observability tools, be sure to mention it. This knowledge is increasingly important in data engineering roles and can give you an edge over other candidates.
Some tips for your application 🫡
Highlight Relevant Experience: Make sure to emphasize your experience with AWS services, particularly Glue, Step Functions, and Redshift. Provide specific examples of projects where you built data processing pipelines or worked with large data volumes.
Showcase Technical Skills: Clearly list your technical skills in PySpark, Python, SQL, and any CI/CD technologies you are familiar with. Use bullet points for clarity and ensure that your proficiency level is evident.
Tailor Your Application: Customize your CV and cover letter to align with the job description. Mention how your background and skills make you a perfect fit for the Data Engineer role at Be-IT.
Proofread Your Documents: Before submitting your application, carefully proofread your CV and cover letter for any grammatical errors or typos. A polished application reflects your attention to detail and professionalism.
How to prepare for a job interview at Be-IT Resourcing
✨Showcase Your AWS Expertise
Make sure to highlight your experience with AWS services like Glue, Step Functions, and Redshift. Be prepared to discuss specific projects where you utilized these tools to build data processing pipelines.
✨Demonstrate Your Coding Skills
Since PySpark, Python, and SQL are crucial for this role, be ready to solve coding challenges or discuss your previous work involving these languages. Practice explaining your thought process clearly.
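If you want something concrete to practise against, below is a sketch of the sort of small PySpark/SQL exercise interviews often use; the data and column names are invented:

```python
# Small, self-contained practice exercise: the same aggregation via the
# DataFrame API and via Spark SQL. All data and columns are invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("interview-practice").getOrCreate()

orders = spark.createDataFrame(
    [("c1", "2024-01-02", 120.0),
     ("c1", "2024-01-05", 80.0),
     ("c2", "2024-01-03", 200.0)],
    ["customer_id", "order_date", "amount"],
)

# DataFrame API: total spend per customer.
totals_df = orders.groupBy("customer_id").agg(
    F.sum("amount").alias("total_spend")
)

# Equivalent Spark SQL over a temporary view.
orders.createOrReplaceTempView("orders")
totals_sql = spark.sql(
    "SELECT customer_id, SUM(amount) AS total_spend "
    "FROM orders GROUP BY customer_id"
)

totals_df.show()
totals_sql.show()
```

Being able to walk through both forms, and explain when you would pick one over the other, is exactly the kind of reasoning interviewers tend to probe.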
✨Discuss Data Volume Handling
Prepare examples of how you've managed large volumes of data in past projects. Discuss the challenges you faced and how you overcame them using AWS services.
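One concrete talking point, if it matches your experience, is partition control in PySpark, a common lever when processing large data volumes. The sketch below uses hypothetical paths and keys:

```python
# Hedged sketch: controlling partitioning before a wide aggregation and
# partitioning the output for downstream pruning. Paths are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("volume-handling").getOrCreate()

events = spark.read.parquet("s3://example-bucket/large-dataset/")

# Repartition by a well-distributed key to reduce skew before the
# aggregation, then write date-partitioned output so downstream readers
# only scan the partitions they need.
(
    events.repartition(200, "customer_id")
          .groupBy("customer_id", "event_date")
          .count()
          .write.mode("overwrite")
          .partitionBy("event_date")
          .parquet("s3://example-bucket/aggregated/")
)
```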
✨Familiarize Yourself with CI/CD Practices
Understand the CI/CD technologies relevant to the role. Be ready to explain how you've implemented these practices in your previous work to ensure smooth deployment and integration.
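As an illustration of what one deployment step in such a pipeline can look like, here is a hedged Python sketch using boto3 to create a CloudFormation stack. The stack name and template file are hypothetical, and real pipelines typically use change sets or higher-level deploy tooling to handle updates idempotently:

```python
# Hedged sketch: a CI/CD step scripted in Python with boto3, deploying
# a CloudFormation stack that might define the Glue jobs and Step
# Functions. Stack and template names are hypothetical placeholders.
import boto3

cfn = boto3.client("cloudformation")

with open("pipeline-stack.yaml") as f:
    template_body = f.read()

# create_stack fails if the stack already exists; production pipelines
# usually go through change sets instead to support repeated deploys.
cfn.create_stack(
    StackName="example-data-pipeline",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Block until CloudFormation reports the stack as created.
waiter = cfn.get_waiter("stack_create_complete")
waiter.wait(StackName="example-data-pipeline")
```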