At a Glance
- Tasks: Design and build scalable data pipelines using cutting-edge Big Data technologies.
- Company: Join a leading revenue intelligence platform with a massive database of verified contacts.
- Benefits: Enjoy remote work flexibility and the chance to collaborate with top-notch teams.
- Why this job: Be part of a dynamic environment that values innovation and data accuracy.
- Qualifications: 10+ years in Data Engineering, with strong skills in Apache Spark and cloud technologies.
- Other info: Work hours are from 2:00 PM to 11:00 PM IST, reporting directly to the CEO.
The predicted salary is between £48,000 and £72,000 per year.
Location: Remote (Pan India)
Shift Timings: 2:00 PM – 11:00 PM IST
Reporting To: CEO or a Lead assigned by Management
Responsibilities:
- Design and build scalable data pipelines for extraction, transformation, and loading (ETL) using the latest Big Data technologies.
- Identify and implement internal process improvements like automating manual tasks and optimizing data flows for better performance and scalability.
- Partner with Product, Data, and Engineering teams to address data-related technical issues and infrastructure needs.
- Collaborate with machine learning and analytics experts to support advanced data use cases.
Requirements:
- Bachelor’s degree in Engineering, Computer Science, or a relevant technical field.
- 10+ years of recent experience in Data Engineering roles.
- Minimum 5 years of hands-on experience with Apache Spark, with strong understanding of Spark internals.
- Deep knowledge of Big Data concepts and distributed systems.
- Proficiency in coding with Scala, Python, or Java, with flexibility to switch languages when required.
- Expertise in SQL and hands-on experience with PostgreSQL, MySQL, or similar relational databases.
- Strong cloud experience with Databricks, including Delta Lake.
- Experience working with data formats like Delta Tables, Parquet, CSV, JSON.
- Comfortable working in Linux environments and scripting.
- Comfortable working in an Agile environment.
- Machine Learning knowledge is a plus.
- Must be capable of working independently and delivering stable, efficient, and reliable software.
- Experience supporting and working with cross-functional teams in a dynamic environment.
Senior Data Engineer IND (Remote) employer: RemoteStar
Contact Details:
RemoteStar Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land Senior Data Engineer IND (Remote)
✨Tip Number 1
Network with professionals in the data engineering field, especially those who have experience with Apache Spark and Big Data technologies. Attend relevant webinars or meetups to connect with potential colleagues and learn more about the industry.
✨Tip Number 2
Showcase your hands-on experience with cloud platforms like Databricks in conversations or during interviews. Be prepared to discuss specific projects where you implemented scalable data pipelines or optimised data flows.
✨Tip Number 3
Familiarise yourself with the company's products and services. Understanding their revenue intelligence platform and how they ensure data accuracy will help you tailor your discussions and demonstrate your genuine interest in the role.
✨Tip Number 4
Prepare to discuss your experience working in Agile environments and collaborating with cross-functional teams. Highlight specific examples of how you've contributed to team success and addressed technical challenges in previous roles.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your relevant experience in Data Engineering, especially your hands-on work with Apache Spark and cloud technologies like Databricks. Use specific examples to demonstrate your skills in building scalable data pipelines and optimising data flows.
Craft a Compelling Cover Letter: In your cover letter, express your enthusiasm for the role and the company. Mention how your 10+ years of experience align with their needs, and explain how you can contribute to their mission of delivering high data accuracy.
Showcase Technical Skills: Clearly list your technical skills, particularly your proficiency in Scala, Python, SQL, and your experience with relational databases. Highlight any projects where you've successfully implemented Big Data concepts or collaborated with cross-functional teams.
Proofread and Edit: Before submitting your application, take the time to proofread your documents. Check for any grammatical errors or typos, and ensure that all information is accurate and well-presented. A polished application reflects your attention to detail.
How to prepare for a job interview at RemoteStar
✨Showcase Your Technical Skills
Be prepared to discuss your hands-on experience with Apache Spark and other Big Data technologies. Highlight specific projects where you've designed and built scalable data pipelines, as this will demonstrate your capability to meet the job's technical requirements.
✨Understand the Company’s Data Needs
Research the company’s revenue intelligence platform and its approach to data accuracy. Understanding their processes and challenges will allow you to tailor your responses and show how your skills can directly benefit their operations.
✨Prepare for Scenario-Based Questions
Expect questions that assess your problem-solving abilities in real-world scenarios. Be ready to explain how you would identify and implement process improvements or address data-related technical issues, showcasing your analytical thinking.
✨Emphasise Collaboration Skills
Since the role involves partnering with various teams, highlight your experience working in cross-functional environments. Share examples of how you've successfully collaborated with product, data, and engineering teams to achieve common goals.