At a Glance
- Tasks: Design and build scalable data pipelines using cutting-edge Big Data technologies.
- Company: Join a leading revenue intelligence platform with a massive database of human-verified contacts.
- Benefits: Enjoy remote work flexibility and the chance to collaborate with top-notch teams.
- Why this job: Be part of a dynamic environment that values innovation and data accuracy.
- Qualifications: Requires a Bachelor's degree and 10+ years in Data Engineering, with strong skills in Apache Spark.
- Other info: Work hours are from 2:00 PM to 11:00 PM IST, reporting directly to the CEO.
The predicted salary is between £48,000 and £84,000 per year.
Our Client: A leading revenue intelligence platform, combining automation and human research to deliver 95% data accuracy across their published contact data. With a growing database of 5 million+ human-verified contacts and over 70 million machine-processed contacts, they offer one of the largest collections of direct-dial contacts in the industry. Their dedicated research team re-verifies contacts every 90 days, ensuring exceptional data accuracy and quality.
Location: Remote (Pan India)
Shift Timings: 2:00 PM – 11:00 PM IST
Reporting To: CEO or a Lead assigned by Management.
Responsibilities:
- Design and build scalable data pipelines for extraction, transformation, and loading (ETL) using the latest Big Data technologies.
- Identify and implement internal process improvements like automating manual tasks and optimizing data flows for better performance and scalability.
- Partner with Product, Data, and Engineering teams to address data-related technical issues and infrastructure needs.
- Collaborate with machine learning and analytics experts to support advanced data use cases.
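The pipeline work described above follows the classic extract–transform–load pattern. As a rough illustration (not from the job description, and using only the Python standard library rather than the Apache Spark stack the role actually requires), the three stages might be sketched like this, with `extract`, `transform`, and `load` as purely hypothetical helper names:

```python
import csv
import io
import json

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: normalise emails and drop rows without one."""
    return [
        {"name": r["name"].strip(), "email": r["email"].strip().lower()}
        for r in rows
        if r.get("email", "").strip()
    ]

def load(rows: list[dict]) -> str:
    """Load: serialise to newline-delimited JSON (a stand-in for a real sink)."""
    return "\n".join(json.dumps(r) for r in rows)

raw = "name,email\nAda, ADA@EXAMPLE.COM \nBob,\n"
output = load(transform(extract(raw)))
# Ada's email is normalised; Bob's row is dropped for lacking an email.
```

In a production Spark pipeline each stage would operate on distributed DataFrames and write to a sink such as Delta Lake, but the shape of the logic is the same.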
Key Requirements:
- Bachelor's degree in Engineering, Computer Science, or a relevant technical field.
- 10+ years of recent experience in Data Engineering roles.
- Minimum 5 years of hands-on experience with Apache Spark, with strong understanding of Spark internals.
- Deep knowledge of Big Data concepts and distributed systems.
- Proficiency in coding with Scala, Python, or Java, with flexibility to switch languages when required.
- Expertise in SQL, and hands-on experience with PostgreSQL, MySQL, or similar relational databases.
- Strong cloud experience with Databricks, including Delta Lake.
- Experience working with data formats like Delta Tables, Parquet, CSV, JSON.
- Comfortable working in Linux environments and scripting.
- Comfortable working in an Agile environment.
- Machine Learning knowledge is a plus.
- Capable of working independently and delivering stable, efficient, and reliable software.
- Experience supporting and working with cross-functional teams in a dynamic environment.
Senior Data Engineer IND (Remote) employer: RemoteStar
Contact Detail:
RemoteStar Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land Senior Data Engineer IND (Remote)
✨Tip Number 1
Familiarise yourself with the latest Big Data technologies and tools, especially Apache Spark. Since the role requires a strong understanding of Spark internals, consider brushing up on your knowledge through online courses or tutorials to demonstrate your expertise.
✨Tip Number 2
Network with professionals in the data engineering field, particularly those who have experience with revenue intelligence platforms. Engaging in relevant online communities or attending webinars can help you gain insights and potentially get referrals.
✨Tip Number 3
Showcase your experience with cloud technologies, particularly Databricks and Delta Lake. If you have worked on projects involving these tools, be prepared to discuss them in detail during interviews to highlight your practical knowledge.
✨Tip Number 4
Prepare to discuss your experience in optimising data flows and automating processes. Think of specific examples where you've improved performance or scalability in previous roles, as this will align well with the responsibilities of the position.
We think you need these skills to ace Senior Data Engineer IND (Remote)
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience in Data Engineering, especially your hands-on work with Apache Spark and cloud technologies like Databricks. Use specific examples to demonstrate your skills in building scalable data pipelines and optimising data flows.
Craft a Compelling Cover Letter: In your cover letter, express your enthusiasm for the role and the company. Mention how your 10+ years of experience aligns with their needs, and provide insights into your problem-solving abilities and collaboration with cross-functional teams.
Showcase Relevant Projects: If you have worked on significant projects involving Big Data technologies or machine learning, summarise these in your application. Highlight your contributions and the impact of your work on data accuracy and performance.
Proofread Your Application: Before submitting, carefully proofread your application for any spelling or grammatical errors. A polished application reflects your attention to detail, which is crucial in a data engineering role.
How to prepare for a job interview at RemoteStar
✨Showcase Your Technical Skills
Be prepared to discuss your experience with Apache Spark and other Big Data technologies in detail. Highlight specific projects where you've designed and built data pipelines, and be ready to explain the challenges you faced and how you overcame them.
✨Demonstrate Problem-Solving Abilities
Expect questions that assess your ability to identify and implement process improvements. Think of examples where you've automated tasks or optimised data flows, and be ready to share the impact of these changes on performance and scalability.
✨Collaborate Effectively
Since the role involves partnering with various teams, prepare to discuss your experience working cross-functionally. Share examples of how you've collaborated with product, data, and engineering teams to resolve technical issues or support advanced data use cases.
✨Familiarise Yourself with the Company
Research the company’s revenue intelligence platform and understand their approach to data accuracy. Being knowledgeable about their services and how they leverage data will show your genuine interest in the role and help you align your skills with their needs.