At a Glance
- Tasks: Design and build scalable data pipelines for exciting cloud migration projects.
- Company: Leading consultancy known for innovation and collaboration.
- Benefits: Competitive daily rate, flexible work environment, and opportunity to enhance your skills.
- Why this job: Join a dynamic team and make a real impact in the data engineering field.
- Qualifications: Strong SQL and Snowflake ELT experience; Python/PySpark skills are a plus.
- Other info: Initial 3-6 month contract with potential for growth and learning.
Location: London (2 days on-site per week)
Salary/Rate: £550 - £600 per day inside IR35
Start Date: March
Job Type: Initial 3-6 month contract
Company Introduction
We have an exciting opportunity with one of our sector-leading consultancy clients! They are currently looking for a skilled Snowflake Data Engineer to support their cloud migration project.
Job Responsibilities/Objectives
- You will be responsible for designing and building scalable data pipelines, Data Vault/dimensional models, and Snowflake/dbt workloads for cloud migration projects.
- Implement Data Vault 2.0 (Hubs, Links, Satellites) or dimensional models on Snowflake (an illustrative sketch follows this list).
- Build ELT pipelines using Snowflake, dbt, and Python/PySpark.
- Develop ingestion from APIs, databases, and streams.
- Optimise Snowflake warehouses, cost, and performance.
- Collaborate with architects, analysts, and DevOps.
- Maintain documentation, lineage, and governance standards.
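To make the modelling responsibilities concrete, here is a minimal, purely illustrative PySpark sketch of a Data Vault 2.0 hub and satellite load into Snowflake. Every name in it (the source path, column names, HUB_CUSTOMER table, and connection options) is a hypothetical placeholder, not anything from the client's stack:

```python
# Illustrative only: a minimal Data Vault 2.0 hub + satellite load in PySpark.
# Source path, table names, and connection options are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dv2-sketch").getOrCreate()

raw = spark.read.parquet("s3://example-bucket/raw/customers/")  # assumed source

# Hub: one row per business key, with a deterministic hash key plus the
# standard Data Vault 2.0 load timestamp and record source columns.
hub = (raw
       .select("customer_id")
       .dropDuplicates(["customer_id"])
       .withColumn("customer_hk", F.sha2(F.col("customer_id").cast("string"), 256))
       .withColumn("load_dts", F.current_timestamp())
       .withColumn("record_source", F.lit("crm_extract")))

# Satellite: descriptive attributes keyed by the hub hash key, with a hash
# diff so incremental loads can detect changed records.
sat = (raw
       .withColumn("customer_hk", F.sha2(F.col("customer_id").cast("string"), 256))
       .withColumn("hash_diff", F.sha2(F.concat_ws("||", "name", "email", "segment"), 256))
       .withColumn("load_dts", F.current_timestamp())
       .select("customer_hk", "hash_diff", "name", "email", "segment", "load_dts"))

# Writes to Snowflake would typically go through the Spark-Snowflake connector;
# auth options (sfUser, key-pair, etc.) are omitted here for brevity.
(hub.write.format("net.snowflake.spark.snowflake")
    .options(sfURL="account.snowflakecomputing.com", sfDatabase="DV",
             sfSchema="RAW_VAULT", sfWarehouse="LOAD_WH", dbtable="HUB_CUSTOMER")
    .mode("append")
    .save())
```

On a real engagement, loads like this are more often generated from dbt models or metadata-driven templates than hand-written per table, which is where the dbt and metadata-driven pipeline skills below come in.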
Required Skills/Experience
- Strong SQL, Snowflake ELT, and dbt experience.
- Python/PySpark and ETL/ELT design.
- Data Vault 2.0 or dimensional modeling.
- AWS services (S3, Glue, Lambda, Redshift) or GCP equivalents.
- Experience with CI/CD for data pipelines.
Good to Have Skills
Although not essential, the following skills are desired by the client:
- Kafka/Kinesis, Airflow, CodePipeline.
- BI tools (Power BI/Tableau).
- Docker/OpenShift; metadata-driven pipelines.
- 3–8+ years of data engineering experience.
- Cloud data engineering and hands-on Snowflake/dbt exposure.
If you are interested in this opportunity, please apply now with your updated CV in Microsoft Word/PDF format.
Disclaimer
Notwithstanding any guidelines given as to the level of experience sought, we will consider candidates from outside this range if they can demonstrate the necessary competencies.
Square One is acting as both an employment agency and an employment business, and is an equal opportunities recruitment business. Square One embraces diversity and will treat everyone equally. Please see our website for our full diversity statement.
Contact Details:
Square One Resources Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Snowflake Data Engineer role in the City of London
✨Network Like a Pro
Get out there and connect with folks in the industry! Attend meetups, webinars, or even just grab a coffee with someone who’s already in the game. You never know who might have the inside scoop on job openings or can put in a good word for you.
✨Show Off Your Skills
When you land that interview, don’t just talk about your experience—show it! Bring along a portfolio or examples of your work, especially any cool Snowflake projects you've tackled. This will help you stand out and prove you’ve got what it takes.
✨Tailor Your Approach
Make sure to tailor your pitch to each company you’re applying to. Research their projects and challenges, and be ready to discuss how your skills in Snowflake and data engineering can help them succeed. It shows you’re genuinely interested and not just sending out generic applications.
✨Apply Through Our Website
Don’t forget to apply through our website! We’ve got loads of opportunities waiting for talented Snowflake Data Engineers like you. Plus, applying directly can sometimes give you a better chance of getting noticed by hiring managers.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with Snowflake, dbt, and any relevant cloud migration projects. We want to see how your skills match the job description, so don’t be shy about showcasing your best bits!
Showcase Your Projects: If you've worked on any cool data pipelines or Data Vault models, let us know! Include specific examples in your application that demonstrate your hands-on experience with the technologies mentioned in the job description.
Keep It Clear and Concise: We appreciate a well-structured application. Use bullet points for your skills and experiences, and keep your language straightforward. This helps us quickly see why you’d be a great fit for the role!
Apply Through Our Website: Don’t forget to submit your application through our website! It’s the easiest way for us to track your application and get back to you. Plus, it shows you’re serious about joining our team!
How to prepare for a job interview at Square One Resources
✨Know Your Snowflake Inside Out
Make sure you brush up on your Snowflake knowledge before the interview. Be ready to discuss your experience with Snowflake ELT, dbt, and how you've implemented Data Vault 2.0 in past projects. The more specific examples you can provide, the better!
✨Show Off Your Coding Skills
Since Python/PySpark is a key requirement, be prepared to talk about your coding experience. You might even be asked to solve a coding problem on the spot, so practise writing clean, efficient code that demonstrates your understanding of ETL/ELT design.
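If you want something concrete to practise on, a sketch like the one below covers patterns interviewers commonly probe, such as deduplication with window functions and small, testable transformation steps. The column names and data are made up for the exercise:

```python
# A practice-sized PySpark transform: deduplicate events, keep the latest
# record per key, and derive a simple flag. All names and rows are made up.
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("etl-practice").getOrCreate()

events = spark.createDataFrame(
    [("e1", "u1", "2025-03-01 10:00:00", 12.5),
     ("e1", "u1", "2025-03-01 10:05:00", 12.5),   # late duplicate of e1
     ("e2", "u2", "2025-03-01 11:00:00", 99.0)],
    ["event_id", "user_id", "event_ts", "amount"],
).withColumn("event_ts", F.to_timestamp("event_ts"))

# Keep the most recent row per event_id (a common dedup pattern), then
# add a derived column; each step stays small and easy to unit test.
latest = Window.partitionBy("event_id").orderBy(F.col("event_ts").desc())
clean = (events
         .withColumn("rn", F.row_number().over(latest))
         .filter(F.col("rn") == 1)
         .drop("rn")
         .withColumn("is_large", F.col("amount") > 50))

clean.show()
```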
✨Collaborate Like a Pro
Collaboration is crucial in this role, so think of examples where you've worked with architects, analysts, or DevOps teams. Highlight how you maintained documentation and adhered to governance standards, as this shows you're not just a tech whiz but also a team player.
✨Stay Current with Cloud Technologies
Familiarise yourself with AWS services like S3, Glue, and Lambda, or their GCP equivalents. Being able to discuss how these tools integrate with Snowflake will set you apart from other candidates. If you have experience with CI/CD for data pipelines, make sure to mention it!
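As a talking point for that integration question, one common pattern is loading S3 data into Snowflake with COPY INTO from an external stage. The sketch below uses the Snowflake Python connector; the account, stage, and table names are placeholders rather than anything from this role:

```python
# Illustrative only: run a COPY INTO from an S3-backed external stage via the
# Snowflake Python connector. All identifiers and credentials are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # in practice, prefer key-pair auth or a
    user="etl_user",           # secrets manager over inline credentials
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    with conn.cursor() as cur:
        # '@raw_stage' would be an external stage pointing at the S3 bucket,
        # typically created with CREATE STAGE ... URL='s3://bucket/path'
        # and backed by a storage integration.
        cur.execute("""
            COPY INTO raw.orders
            FROM @raw_stage/orders/
            FILE_FORMAT = (TYPE = PARQUET)
            MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
        """)
finally:
    conn.close()
```

Being able to walk through where the storage integration, stage, and warehouse sit in a setup like this is exactly the kind of AWS-to-Snowflake discussion that sets candidates apart.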