At a Glance
- Tasks: Lead the design and development of robust data pipelines using Databricks.
- Company: Join a global tech transformation specialist with a focus on AI solutions.
- Benefits: Immediate start, competitive pay, and a chance to work with cutting-edge technology.
- Why this job: Make an impact by collaborating on innovative data projects in a diverse environment.
- Qualifications: Experience in Databricks, Python, SQL, and modern DataOps practices required.
- Other info: We celebrate diversity and encourage applications from all backgrounds.
The predicted salary is between £36,000 and £60,000 per year.
We are tech transformation specialists, uniting human expertise with AI to create scalable tech solutions. With over 8,000 CI&Ters around the world, we've built partnerships with more than 1,000 clients during our 30 years of history. Artificial Intelligence is our reality.
As a Senior Data Engineer, you will lead the design and development of robust data pipelines, integrating and transforming data from diverse sources such as APIs, relational databases, and files. Collaborating closely with business and analytics teams, you will ensure high-quality deliverables that meet the strategic needs of our organization. Your expertise will be pivotal in maintaining the quality, reliability, security, and governance of the ingested data, thereby driving our mission of Collaboration, Innovation, & Transformation.
Key Responsibilities:
- Develop and maintain data pipelines (a brief illustrative sketch follows this list).
- Integrate data from various sources (APIs, relational databases, files, etc.).
- Collaborate with business and analytics teams to understand data requirements.
- Ensure the quality, reliability, security, and governance of the ingested data.
- Follow modern DataOps practices such as code versioning, data tests, and CI/CD.
- Document processes and best practices in data engineering.
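To ground the responsibilities above, here is a brief, purely illustrative PySpark sketch of the kind of ingestion pipeline this role involves. It assumes a Databricks runtime with Delta Lake available; the paths, columns, and table names are hypothetical placeholders rather than anything from an actual CI&T codebase.

```python
# Illustrative sketch only: a minimal Databricks ingestion job in PySpark.
# All paths, column names, and table names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided out of the box on Databricks

# Ingest raw CSV files from a hypothetical landing zone.
raw = (
    spark.read
    .option("header", True)
    .csv("/mnt/landing/orders/")
)

# Basic typing and cleansing before the data reaches analytics teams.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
       .dropDuplicates(["order_id"])
)

# Persist as a Delta table so downstream consumers get ACID guarantees,
# schema enforcement, and time travel.
(orders.write
       .format("delta")
       .mode("append")
       .saveAsTable("bronze.orders"))
```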
Requirements:
- Proven experience building and managing large-scale data pipelines in Databricks (PySpark, Delta Lake, SQL).
- Strong programming skills in Python and SQL for data processing and transformation.
- Deep understanding of ETL/ELT frameworks, data warehousing, and distributed data processing.
- Hands-on experience with modern DataOps practices: version control (Git), CI/CD pipelines, automated testing, and infrastructure-as-code (a minimal test sketch follows this list).
- Familiarity with cloud platforms (AWS, Azure, or GCP) and related data services.
- Strong problem-solving skills with the ability to troubleshoot performance, scalability, and reliability issues.
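For the DataOps side, the sketch below shows what a small automated data-quality suite might look like in pytest style; in practice such checks would run in a CI/CD pipeline on every change. The table name and the expectations are invented for illustration.

```python
# Illustrative sketch only: pytest-style data-quality checks for a pipeline.
# The table name and expectations below are hypothetical.
import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def spark():
    # Local session for CI runs; on Databricks the cluster's session is used instead.
    return SparkSession.builder.master("local[2]").appName("data-tests").getOrCreate()

def test_order_ids_are_unique(spark):
    orders = spark.table("bronze.orders")  # hypothetical Delta table
    assert orders.count() == orders.select("order_id").distinct().count()

def test_amounts_are_non_negative(spark):
    orders = spark.table("bronze.orders")
    assert orders.filter("amount < 0").count() == 0
```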
Collaboration is our superpower, diversity unites us, and excellence is our standard. We value diverse identities and life experiences, fostering an inclusive and safe work environment, and we encourage applications from underrepresented groups for all of our positions.
Databricks Specialist - Short-term contract in London (Employer: Ciandt)
Contact Details:
Ciandt Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Databricks Specialist - Short-term contract in London
✨Tip Number 1
Network like a pro! Reach out to your connections in the industry, attend meetups, and engage with professionals on LinkedIn. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing the data pipelines and projects you've worked on. This is your chance to demonstrate your expertise in Databricks and Python, making you stand out from the crowd.
✨Tip Number 3
Prepare for interviews by brushing up on common data engineering questions and scenarios. Practice explaining your thought process when tackling data challenges, as collaboration and problem-solving are key in this role.
✨Tip Number 4
Don't forget to apply through our website! It’s the best way to ensure your application gets noticed. Plus, we love seeing candidates who take the initiative to connect directly with us.
We think you need these skills to ace the Databricks Specialist - Short-term contract in London
Some tips for your application 🫡
Tailor Your CV: Make sure your CV reflects the skills and experiences that match the job description. Highlight your experience with Databricks, Python, and SQL, as these are key for us.
Craft a Compelling Cover Letter: Use your cover letter to tell us why you're passionate about data engineering and how your background aligns with our mission of Collaboration, Innovation, & Transformation. Be genuine!
Showcase Your Projects: If you've worked on relevant projects, don’t hesitate to mention them! We love seeing real examples of your work, especially those involving data pipelines and DataOps practices.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for this exciting opportunity!
How to prepare for a job interview at Ciandt
✨Know Your Data Inside Out
Make sure you brush up on your knowledge of data pipelines, especially in Databricks. Be ready to discuss your experience with PySpark, Delta Lake, and SQL. Prepare examples of how you've integrated data from various sources and the challenges you faced.
✨Showcase Your Collaboration Skills
Since collaboration is key for this role, think of specific instances where you've worked closely with business or analytics teams. Highlight how you understood their data requirements and delivered high-quality results that met their needs.
✨Demonstrate Your Problem-Solving Prowess
Be prepared to tackle hypothetical scenarios during the interview. Think about common performance, scalability, and reliability issues you've encountered in the past and how you resolved them. This will show your analytical skills and hands-on experience.
✨Familiarise Yourself with Modern DataOps Practices
Brush up on your knowledge of version control, CI/CD pipelines, and automated testing. Be ready to discuss how you've implemented these practices in your previous roles, as they are crucial for maintaining quality and governance in data engineering.