At a Glance
- Tasks: Build and manage data pipelines using Python and PySpark in a remote setting.
- Company: Join a forward-thinking tech company focused on data innovation.
- Benefits: Competitive pay, flexible remote work, and opportunities for professional growth.
- Why this job: Make an impact by working with complex data and cutting-edge technologies.
- Qualifications: Strong data manipulation skills and experience with JSON, Python, and SQL.
- Other info: Dynamic Agile environment with great potential for career advancement.
The predicted salary is between £36,000 and £60,000 per year.
Remote position for a Data Engineer with the following required skills:
- Strong understanding of data concepts - data types, data structures, schemas (both JSON and Spark), schema management etc.
- Strong understanding of complex JSON manipulation.
- Experience working with Data Pipelines using custom Python/PySpark frameworks.
- Strong understanding of the 4 core Data categories (Reference, Master, Transactional, Freeform) and the implications of each, particularly managing/handling Reference Data.
- Strong understanding of Data Security principles - data owners, access controls - row and column level, GDPR etc., including experience of handling sensitive datasets.
- Strong problem-solving and analytical skills, with the ability to demonstrate them intuitively.
- Experience working in a support role would be beneficial, particularly demonstrable incident triage and handling skills/knowledge.
- Fundamental Linux system administration knowledge - SSH keys and config etc., Bash CLI and scripting, environment variables.
- Experience using browser-based IDEs (Jupyter Notebooks, RStudio etc.).
- Experience working in a dynamic Agile environment (SAFe, Scrum, sprints, JIRA etc.).
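As a rough illustration of the "complex JSON manipulation" the skills list calls for, the sketch below flattens an arbitrarily nested JSON record into dotted-path keys using only the Python standard library. In a real pipeline this would more likely be done with PySpark schemas; the `flatten` helper and the sample record are invented for illustration.

```python
import json

def flatten(obj, prefix=""):
    """Recursively flatten nested dicts/lists into dotted-key pairs.
    Lists are indexed by position; scalars become leaf values."""
    items = {}
    if isinstance(obj, dict):
        for k, v in obj.items():
            items.update(flatten(v, f"{prefix}{k}."))
    elif isinstance(obj, list):
        for i, v in enumerate(obj):
            items.update(flatten(v, f"{prefix}{i}."))
    else:
        # Strip the trailing dot left by the prefix builder.
        items[prefix[:-1]] = obj
    return items

record = json.loads('{"id": 1, "meta": {"tags": ["a", "b"], "owner": {"name": "x"}}}')
print(flatten(record))
# → {'id': 1, 'meta.tags.0': 'a', 'meta.tags.1': 'b', 'meta.owner.name': 'x'}
```

The same idea (exploding nested structs and arrays into flat columns) is what Spark's `explode` and struct field access do at scale.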
Languages / Frameworks:
- JSON
- YAML
- Python (as a programming language, not just able to write basic scripts; Pydantic experience would be a bonus).
- SQL
- PySpark
- Delta Lake
- Bash (both CLI usage and scripting).
- Git
- Markdown
- Scala (bonus, not compulsory).
- Azure SQL Server as a Hive Metastore (bonus).
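The list above flags Pydantic experience as a bonus on top of core Python. As a hedged sketch of the kind of schema validation Pydantic automates, the snippet below uses only stdlib dataclasses; the `Customer` model and its fields are hypothetical, and Pydantic itself would additionally handle parsing, coercion, and JSON (de)serialisation.

```python
from dataclasses import dataclass

@dataclass
class Customer:
    """Illustrative record schema; Pydantic would generate this
    validation (plus type coercion) from the annotations alone."""
    id: int
    email: str

    def __post_init__(self):
        # Hand-rolled checks standing in for Pydantic validators.
        if not isinstance(self.id, int):
            raise TypeError("id must be an int")
        if "@" not in self.email:
            raise ValueError("email must contain '@'")

c = Customer(id=42, email="a@example.com")
print(c)
```

Validating records at the pipeline boundary like this is what keeps bad rows out of downstream Delta tables.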
Technologies:
- Azure Databricks
- Apache Spark
- Delta Tables
- Data processing with Python
- PowerBI (Integration / Data Ingestion)
- JIRA
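Delta Tables appear in the technology list above, and a core Delta Lake operation is `MERGE INTO` (upsert). The sketch below mimics that semantics on plain Python dicts so it runs anywhere; `merge_upsert` and the sample rows are illustrative stand-ins, not the Delta Lake API.

```python
def merge_upsert(target, updates, key="id"):
    """Upsert semantics similar to Delta Lake's MERGE INTO:
    rows whose key matches are updated, the rest are inserted."""
    by_key = {row[key]: row for row in target}
    for row in updates:
        # Merge update fields over any existing row with the same key.
        by_key[row[key]] = {**by_key.get(row[key], {}), **row}
    return sorted(by_key.values(), key=lambda r: r[key])

target = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Bob"}]
updates = [{"id": 2, "name": "Bobby"}, {"id": 3, "name": "Cal"}]
print(merge_upsert(target, updates))
# → [{'id': 1, 'name': 'Ada'}, {'id': 2, 'name': 'Bobby'}, {'id': 3, 'name': 'Cal'}]
```

In Databricks the equivalent would be a `deltaTable.merge(...).whenMatchedUpdateAll().whenNotMatchedInsertAll()` chain operating on DataFrames rather than dicts.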
If this is the role for you, please submit your CV at your earliest convenience. If you have not received a response within 2 weeks, please assume that you have not been successful on this occasion.
Data Engineer in the City of London. Employer: Experis - ManpowerGroup
Contact Detail:
Experis - ManpowerGroup Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role in the City of London
✨Tip Number 1
Network like a pro! Reach out to your connections in the data engineering field. Attend meetups or webinars, and don’t be shy about asking for introductions. We all know that sometimes it’s not just what you know, but who you know!
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those involving complex JSON manipulation or data pipelines. We want to see your problem-solving skills in action, so make sure to highlight your best work!
✨Tip Number 3
Prepare for interviews by brushing up on your technical knowledge. Be ready to discuss data security principles and your experience with tools like PySpark and Azure Databricks. We love candidates who can demonstrate their understanding of data concepts clearly.
✨Tip Number 4
Apply through our website! It’s the best way to ensure your application gets seen. Plus, we’re always on the lookout for passionate data engineers who are eager to join our team. Don’t miss out on the chance to land your dream job!
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with data concepts, JSON manipulation, and Python/PySpark frameworks. We want to see how your skills match the job description, so don’t be shy about showcasing relevant projects!
Showcase Problem-Solving Skills: In your application, give examples of how you've tackled complex problems in the past. We love seeing analytical skills in action, so share specific instances where you’ve had to think on your feet!
Highlight Your Agile Experience: If you've worked in Agile environments, make sure to mention it! We value familiarity with Scrum, sprints, and tools like JIRA, so let us know how you've thrived in dynamic settings.
Apply Through Our Website: We encourage you to submit your application through our website for a smoother process. It’s the best way for us to keep track of your application and get back to you quickly!
How to prepare for a job interview at Experis - ManpowerGroup
✨Know Your Data Inside Out
Make sure you have a solid grasp of data concepts, especially around data types, structures, and schemas like JSON and Spark. Brush up on the four core data categories and be ready to discuss how you manage reference data. This knowledge will show that you're not just familiar with the terms but can apply them practically.
✨Show Off Your Problem-Solving Skills
Prepare to demonstrate your analytical skills during the interview. Think of examples where you've tackled complex problems using your knowledge of Python or PySpark. Be ready to explain your thought process and how you arrived at solutions, rather than just reciting instructions.
✨Familiarise Yourself with Agile Practices
Since this role involves working in a dynamic Agile environment, make sure you understand the principles of Scrum and sprints. Be prepared to discuss your experience with tools like JIRA and how you've contributed to team projects in an Agile setting.
✨Highlight Your Technical Skills
Be ready to talk about your experience with Linux system administration, particularly SSH keys and Bash scripting. If you've worked with browser-based IDEs like Jupyter Notebooks, mention that too! The more specific you can be about your technical skills, the better you'll stand out.