At a Glance
- Tasks: Develop and maintain large-scale data processing pipelines using Spark and Scala.
- Company: Global recruitment specialist with a focus on innovative tech solutions.
- Benefits: Competitive daily rate, hybrid working model, and opportunities for professional growth.
- Why this job: Join a dynamic team and work on cutting-edge big data technologies.
- Qualifications: 8+ years of experience in Scala and Spark, with strong coding skills.
- Other info: Fast-tracked application process with potential shortlist within 48 hours.
The predicted salary is between £57,600 and £84,000 per year.
We are a global recruitment specialist that provides support to clients across EMEA, APAC, the US and Canada. We have an excellent job opportunity for you.
Location: London
Working Mode: Hybrid (Weekly 3 days onsite)
Contract Type: Inside IR35
Duration: 12 months
Rate: £440 per day Inside IR35
Must have skills: Spark & Scala
Nice to have skills: Spark Streaming, Hadoop, Hive, SQL, Sqoop, Impala
Detailed Job Description:
- 8+ years of experience and strong knowledge of the Scala programming language.
- Able to write clean, maintainable and efficient Scala code following best practices.
- Good knowledge of fundamental data structures and their usage.
- 8+ years of experience designing and developing large-scale, distributed data processing pipelines using Apache Spark and related technologies.
- Expertise in Spark Core, Spark SQL and Spark Streaming.
- Experience with Hadoop, HDFS, Hive and other Big Data technologies.
- Familiarity with data warehousing and ETL concepts and techniques.
- Expertise in database concepts and SQL/NoSQL operations.
- UNIX shell scripting for scheduling and running application jobs is an added advantage.
- 8+ years of experience in project development life-cycle activities and maintenance/support projects.
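To give a flavour of the kind of work described above, here is a minimal sketch of a Spark batch pipeline in Scala. It is illustrative only: it assumes a Spark 3.x dependency on the classpath, and the paths, table and column names are hypothetical, not taken from the role itself.

```scala
// Illustrative sketch only: assumes Spark 3.x is available;
// all paths and column names below are hypothetical examples.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailySalesPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-sales-pipeline")
      .getOrCreate()

    import spark.implicits._

    // Read raw CSV events from a (hypothetical) HDFS landing zone.
    val events = spark.read
      .option("header", "true")
      .csv("hdfs:///data/raw/sales")

    // Clean and aggregate: drop null amounts, cast, then total per day.
    val dailyTotals = events
      .filter($"amount".isNotNull)
      .withColumn("amount", $"amount".cast("double"))
      .groupBy($"sale_date")
      .agg(sum($"amount").as("total"))

    // Write curated output as partitioned Parquet, ready for Hive/Impala.
    dailyTotals.write
      .mode("overwrite")
      .partitionBy("sale_date")
      .parquet("hdfs:///data/curated/daily_sales")

    spark.stop()
  }
}
```

In an interview for a role like this, being able to walk through each stage of a pipeline such as this (ingest, cleanse, aggregate, write) and discuss partitioning and file-format choices is exactly the kind of expertise the job description asks for.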
If you are interested in this position and would like to learn more, please send through your CV and we will get in touch with you as soon as possible. Please note, candidates are often shortlisted within 48 hours.
Spark-Scala Developer employer: eTeam Workforce Limited
Contact Detail:
eTeam Workforce Limited Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Spark-Scala Developer role
✨Tip Number 1
Network like a pro! Reach out to your connections in the tech industry, especially those who work with Spark and Scala. A friendly chat can lead to insider info about job openings that aren't even advertised yet.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your best Spark and Scala projects. This gives potential employers a taste of what you can do and sets you apart from the crowd.
✨Tip Number 3
Prepare for interviews by brushing up on common technical questions related to Spark and Scala. Practise coding challenges and be ready to explain your thought process – it’s all about demonstrating your expertise!
✨Tip Number 4
Don’t forget to apply through our website! We make it super easy for you to find roles that match your skills. Plus, we’re always here to help you along the way, so don’t hesitate to reach out!
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with Spark and Scala. We want to see how your skills match the job description, so don’t be shy about showcasing your relevant projects!
Showcase Your Projects: Include specific examples of large-scale data processing pipelines you've developed. We love seeing real-world applications of your skills, especially with Apache Spark and related technologies.
Keep It Clean and Concise: Your application should be easy to read and straight to the point. We appreciate clarity, so avoid jargon and focus on what makes you a great fit for the role.
Apply Through Our Website: We encourage you to submit your application through our website. It’s the best way for us to receive your details and get in touch quickly, often within 48 hours!
How to prepare for a job interview at eTeam Workforce Limited
✨Know Your Spark & Scala Inside Out
Make sure you brush up on your Spark and Scala skills before the interview. Be prepared to discuss your experience with writing clean, maintainable code and how you've implemented best practices in your previous projects.
✨Showcase Your Data Pipeline Experience
Be ready to talk about your experience designing and developing large-scale data processing pipelines. Highlight specific projects where you used Apache Spark and related technologies, and be prepared to explain the challenges you faced and how you overcame them.
✨Familiarise Yourself with Big Data Technologies
Since the role mentions Hadoop, Hive, and other Big Data technologies, make sure you can discuss your familiarity with these tools. Even if they’re not your primary focus, showing that you understand their relevance will impress the interviewers.
✨Prepare for Technical Questions
Expect technical questions related to data structures, SQL/NoSQL operations, and UNIX Shell Scripting. Practise explaining concepts clearly and concisely, as this will demonstrate your depth of knowledge and communication skills.