At a Glance
- Tasks: Enhance and maintain a cloud-based data warehouse while building data pipelines.
- Company: Join a dynamic media company with a focus on innovation.
- Benefits: Fully remote work, competitive daily rate, and flexible hours.
- Other info: Exciting opportunity to work with cutting-edge technologies in a collaborative environment.
- Why this job: Make an impact by scaling data infrastructure and improving pipeline reliability.
- Qualifications: 5+ years of data engineering experience and strong GCP skills.
The predicted salary is between €60,000 and €75,000 per year.
YunoJuno has partnered with a media company looking to hire an experienced freelance Data Engineer for an upcoming 6-month contract. You'll work directly on enhancing and maintaining the cloud-based data warehouse, helping to scale the data infrastructure, improve pipeline reliability, and expand data collection capabilities.
Responsibilities
- Enhance and maintain the existing GCP-based data warehouse, including schema design, performance tuning, and cost optimization
- Build and manage data pipelines using Apache Airflow and Python
- Integrate new data sources, including social media APIs and web crawling pipelines
- Collaborate with engineering and product teams to support data needs across the organization
- Ensure data quality, reliability, and documentation across all pipelines
- Support event-level data collection and tracking infrastructure
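As a purely illustrative sketch (not part of the role description), the pipeline work above follows a classic extract–transform–load pattern. All function names, field names, and sample records below are hypothetical, and an in-memory dict stands in for a real warehouse load job:

```python
import json
from datetime import datetime, timezone

def extract(raw_records):
    """Parse raw JSON event records, as if pulled from a source API (hypothetical data)."""
    return [json.loads(r) for r in raw_records]

def transform(records):
    """Normalize field names and stamp each record with a load time."""
    loaded_at = datetime.now(timezone.utc).isoformat()
    return [
        {"user_id": r["userId"], "event": r["event"].lower(), "loaded_at": loaded_at}
        for r in records
        if "userId" in r and "event" in r  # basic data-quality filter: drop incomplete rows
    ]

def load(rows, warehouse):
    """Append rows to an in-memory 'table' standing in for a warehouse load job."""
    warehouse.setdefault("events", []).extend(rows)
    return len(rows)

# A malformed record (missing userId) is filtered out by the transform step.
raw = ['{"userId": 1, "event": "CLICK"}', '{"userId": 2, "event": "VIEW"}', '{"event": "VIEW"}']
warehouse = {}
n = load(transform(extract(raw)), warehouse)
```

In production this shape typically maps onto Airflow tasks (one per stage) with BigQuery as the load target, which is where the orchestration and query-optimization skills listed below come in.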
Requirements
- 5+ years of data engineering experience
- Strong hands-on experience with Google Cloud Platform (GCP), specifically BigQuery
- Proficiency with Apache Airflow for pipeline orchestration
- Strong Python development skills for ETL/ELT pipeline development
- Experience with data modeling, warehousing best practices, and query optimization
Nice to Have
- Experience with web crawling and scraping at scale
- Experience integrating social media APIs (e.g., Twitter/X, LinkedIn, Meta) for data pipeline creation
- Familiarity with event-level data collection platforms such as Rudderstack or Segment
- JavaScript, Node.js/Express, and React development experience
Start date: ASAP
Duration: 6-month freelance contract
Rate: £250 per day
Location: Fully remote
Data Engineer in Reading employer: YunoJuno
YunoJuno is an exceptional employer for freelance Data Engineers, offering the flexibility of a fully remote role that allows you to work from anywhere. With a strong focus on collaboration and innovation, you'll have the opportunity to enhance your skills while contributing to exciting projects in a dynamic media environment. The company values professional growth and provides a supportive culture that encourages creativity and technical excellence.
StudySmarter Expert Advice🤫
We think this is how you could land the Data Engineer role in Reading
✨Tip Number 1
Network like a pro! Reach out to your connections in the data engineering field and let them know you're on the lookout for opportunities. You never know who might have a lead or can refer you to someone looking for a Data Engineer.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those involving GCP, Apache Airflow, and Python. This will give potential employers a taste of what you can do and set you apart from the crowd.
✨Tip Number 3
Stay active on platforms like LinkedIn and GitHub. Share your insights on data engineering trends, contribute to discussions, and even post about your latest projects. This visibility can attract recruiters to you!
✨Tip Number 4
Don't forget to apply through our website! We’ve got loads of freelance opportunities that could be perfect for you. Plus, applying directly helps us get your application in front of the right people faster.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with GCP, Apache Airflow, and Python. We want to see how your skills match the job description, so don't be shy about showcasing relevant projects!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're the perfect fit for this Data Engineer role. Share specific examples of how you've enhanced data infrastructures or improved pipeline reliability in the past.
Showcase Your Projects: If you've worked on any cool data projects, make sure to mention them! Whether it's integrating social media APIs or building data pipelines, we love seeing real-world applications of your skills.
Apply Through Our Website: We encourage you to apply directly through our website. It's the best way for us to receive your application and ensures you don't miss out on any important updates from our team!
How to prepare for a job interview at YunoJuno
✨Know Your Tech Stack
Make sure you’re well-versed in the technologies mentioned in the job description, especially GCP and Apache Airflow. Brush up on your Python skills too, as you'll likely be asked to demonstrate your proficiency during the interview.
✨Showcase Your Projects
Prepare to discuss specific projects where you've enhanced data warehouses or built data pipelines. Be ready to explain your role, the challenges you faced, and how you overcame them. This will show your practical experience and problem-solving skills.
✨Understand the Company’s Data Needs
Research the media company and understand their data requirements. Think about how your skills can help them scale their data infrastructure and improve pipeline reliability. This knowledge will help you tailor your answers and show that you're genuinely interested.
✨Ask Insightful Questions
Prepare a few thoughtful questions about the team dynamics, the current data challenges they face, or their future data strategy. This not only shows your interest but also helps you gauge if the company is the right fit for you.