At a Glance
- Tasks: Design and build scalable data pipelines using the latest Big Data technologies.
- Company: Leading revenue intelligence platform with a focus on data accuracy.
- Benefits: Remote work, competitive salary, and opportunities for professional growth.
- Other info: Collaborative environment with a focus on innovation and advanced data use cases.
- Why this job: Join a dynamic team and make an impact in the data engineering field.
- Qualifications: 10+ years in Data Engineering, strong skills in Apache Spark, Scala, and SQL.
The predicted salary is between £80,000 and £100,000 per year.
Our Client is a leading revenue intelligence platform, combining automation and human research to deliver 95% data accuracy across their published contact data. With a growing database of 5 million+ human-verified contacts and over 70 million machine-processed contacts, they offer one of the largest collections of direct dial contacts in the industry. Their dedicated research team re-verifies contacts every 90 days, ensuring exceptional data accuracy and quality.
Location: Remote (Pan India)
Shift Timings: 2:00 PM – 11:00 PM IST
Reporting To: CEO or a Lead assigned by Management.
Responsibilities:
- Design and build scalable data pipelines for extraction, transformation, and loading (ETL) using the latest Big Data technologies (see the Spark sketch after this list).
- Identify and implement internal process improvements, such as automating manual tasks and optimising data flows for better performance and scalability.
- Partner with Product, Data, and Engineering teams to address data-related technical issues and infrastructure needs.
- Collaborate with machine learning and analytics experts to support advanced data use cases.
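Purely as an illustration of the first responsibility, here is a minimal Scala/Spark ETL sketch. The input path, the column names (email, direct_dial, country), and the output location are hypothetical examples, not details from the posting:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ContactEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("contact-etl")
      .getOrCreate()

    // Extract: raw contact exports land as CSV (path is hypothetical).
    val raw = spark.read
      .option("header", "true")
      .csv("/data/raw/contacts.csv")

    // Transform: normalise emails and drop rows without a direct dial.
    val cleaned = raw
      .withColumn("email", lower(trim(col("email"))))
      .filter(col("direct_dial").isNotNull)
      .dropDuplicates("email")

    // Load: write columnar Parquet, partitioned for downstream readers.
    cleaned.write
      .mode("overwrite")
      .partitionBy("country")
      .parquet("/data/curated/contacts")

    spark.stop()
  }
}
```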
Key Requirements:
- Bachelor’s degree in Engineering, Computer Science, or a relevant technical field.
- 10+ years of recent experience in Data Engineering roles.
- Minimum 5 years of hands-on experience with Apache Spark, with a strong understanding of Spark internals.
- Deep knowledge of Big Data concepts and distributed systems.
- Proficiency in coding with Scala, Python, or Java, with flexibility to switch languages when required.
- Expertise in SQL and hands-on experience with PostgreSQL, MySQL, or similar relational databases.
- Strong cloud experience with Databricks, including Delta Lake (see the Delta sketch after this list).
- Experience working with data formats such as Delta tables, Parquet, CSV, and JSON.
- Comfortable working in Linux environments and with shell scripting.
- Comfortable working in an Agile environment.
- Machine Learning knowledge is a plus.
- Must be capable of working independently and delivering stable, efficient, and reliable software.
- Experience supporting and working with cross-functional teams in a dynamic environment.
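To make the Databricks and Delta Lake requirements concrete, here is a minimal sketch; the session configs shown are only needed on open-source Spark, since Databricks preconfigures Delta. The table name, the verified_at column, and the paths are hypothetical, while the 90-day window simply mirrors the client's stated re-verification cycle:

```scala
import org.apache.spark.sql.SparkSession

object DeltaSketch {
  def main(args: Array[String]): Unit = {
    // On Databricks the session is preconfigured; these Delta extensions
    // are only required when running open-source Spark locally.
    val spark = SparkSession.builder()
      .appName("delta-sketch")
      .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
      .config("spark.sql.catalog.spark_catalog",
        "org.apache.spark.sql.delta.catalog.DeltaCatalog")
      .getOrCreate()

    // Convert a Parquet snapshot into a Delta table (paths are hypothetical).
    spark.read.parquet("/data/curated/contacts")
      .write.format("delta").mode("overwrite").save("/delta/contacts")

    // Register the table and query it with plain SQL; verified_at is a
    // hypothetical column tracking the last human re-verification date.
    spark.sql("CREATE TABLE IF NOT EXISTS contacts USING DELTA LOCATION '/delta/contacts'")
    spark.sql(
      """SELECT country, COUNT(*) AS recently_verified
        |FROM contacts
        |WHERE verified_at >= date_sub(current_date(), 90)
        |GROUP BY country""".stripMargin).show()

    spark.stop()
  }
}
```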
Senior Data Engineer IND (Remote) in England. Employer: RemoteStar
Contact Details:
RemoteStar Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land Senior Data Engineer IND (Remote) in England
✨Tip Number 1
Network like a pro! Reach out to your connections in the industry, attend virtual meetups, and engage with professionals on platforms like LinkedIn. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio or GitHub repository showcasing your data engineering projects. This gives potential employers a tangible look at what you can do, especially with tools like Apache Spark and cloud technologies.
✨Tip Number 3
Prepare for those interviews! Brush up on your technical knowledge and be ready to discuss your experience with ETL processes, SQL, and Big Data concepts. Practise common interview questions and consider mock interviews to boost your confidence.
✨Tip Number 4
Don’t forget to apply through our website! We’ve got loads of opportunities waiting for talented folks like you. Plus, it’s a great way to ensure your application gets seen by the right people.
We think you need these skills to ace Senior Data Engineer IND (Remote) in England
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Senior Data Engineer role. Highlight your experience with Apache Spark, Big Data concepts, and any relevant projects that showcase your skills in data engineering. We want to see how you fit into our world!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about data engineering and how your background aligns with our client's needs. Don’t forget to mention your experience with cloud technologies and collaboration with cross-functional teams.
Showcase Your Technical Skills: Be sure to list your technical skills clearly, especially your proficiency in Scala, Python, or Java. Mention any hands-on experience with SQL databases and cloud platforms like Databricks. We love seeing those skills front and centre!
Apply Through Our Website: We encourage you to apply through our website for a smoother application process. It helps us keep track of your application and ensures you don’t miss out on any updates. Plus, it’s super easy!
How to prepare for a job interview at RemoteStar
✨Know Your Tech Inside Out
Make sure you brush up on your knowledge of Apache Spark and Big Data concepts. Be ready to discuss your hands-on experience with these technologies, as well as any challenges you've faced and how you overcame them.
✨Showcase Your Problem-Solving Skills
Prepare examples of how you've identified and implemented process improvements in your previous roles. Highlight specific instances where you automated tasks or optimised data flows, as this will demonstrate your ability to enhance performance and scalability.
✨Collaborate Like a Pro
Since the role involves partnering with various teams, think of examples where you've successfully collaborated with product, data, or engineering teams. Be ready to discuss how you addressed technical issues and supported advanced data use cases.
✨Be Agile and Adaptable
Familiarise yourself with Agile methodologies and be prepared to discuss your experience working in such environments. Emphasise your flexibility in coding languages and your comfort in switching between Scala, Python, or Java as needed.