At a Glance
- Tasks: Design and build data processing pipelines using cutting-edge tech to handle terabytes of data.
- Company: Vortexa, a fast-growing tech company revolutionising the energy industry with AI.
- Benefits: Equity options, flexible working, and a vibrant, diverse team culture.
- Other info: Engage with top minds in a supportive environment that encourages growth and learning.
- Why this job: Join a dynamic startup and make a real impact in the energy sector with innovative solutions.
- Qualifications: Experience in AWS, K8s, Python, and Java; passion for coaching and collaboration.
The predicted salary is between £36,000 and £60,000 per year.
About Us: Vortexa is a fast-growing international technology business founded to close the immense information gap in the energy industry. Using massive amounts of new satellite data and pioneering work in artificial intelligence, Vortexa creates an unprecedented real-time view of global seaborne energy flows, bringing transparency and efficiency to the energy markets and society as a whole.
The Role: Vortexa processes thousands of rich data points per second from many vastly different external sources, moving terabytes of data while processing it in real time. It runs complex prediction and forecasting AI models, couples their output into a hybrid human-machine data refinement process, and presents the result through a nimble, low-latency SaaS solution used by customers around the globe. This is no small feat of science and engineering: the models must survive the scrutiny of industry experts, data analysts and traders, with the performance, stability, latency and agility that a fast-moving startup influencing multi-$m transactions requires.
The Data Production Team is responsible for all of Vortexa's data. Its work ranges from mixing raw satellite data from 600,000 vessels with rich but incomplete text data to generating high-value forecasts such as vessel destination, cargo onboard, ship-to-ship transfer detection, dark vessels, congestion, and future prices. The team has built a variety of procedural, statistical and machine learning models that enable us to provide the most accurate and comprehensive view of energy flows. We take pride in applying cutting-edge research to real-world problems in a robust, long-lasting and maintainable way. The quality of our data is continuously benchmarked and assessed by experienced in-house market and data analysts to ensure the accuracy of our predictions.
You will be instrumental in designing and building infrastructure and applications to propel the design, deployment, and benchmarking of existing and new pipelines and ML models. Working with software and data engineers, data scientists and market analysts, you will help bridge the gap between scientific experiments and commercial products by ensuring 100% uptime and bulletproof fault‑tolerance of every component of the team's data pipelines.
Requirements
- Experienced in building and deploying distributed scalable backend data processing pipelines that can go through terabytes of data daily using AWS, K8s, and Airflow.
- Solid software engineering fundamentals; fluent in both Java and Python (Rust is a nice-to-have).
- Knowledgeable about data lake systems such as Athena and big data storage formats such as Parquet, HDF5 and ORC, with a focus on data ingestion.
- Driven by working in an intellectually engaging environment with the top minds in the industry, where constructive and friendly challenges and debates are encouraged, not avoided.
- Excited about working in a start‑up environment: not afraid of challenges, excited to bring new ideas to production, and a positive can‑do will‑do person, not afraid to push the boundaries of your job role.
- Passionate about coaching developers, helping them improve their skills and grow their careers.
- Deeply experienced across the full software development life cycle (SDLC): technical design, coding standards, code review, source control, build, test, deploy, and operations.
Awesome If You:
- Have experience with Apache Kafka and streaming frameworks, e.g., Flink.
- Are familiar with observability principles such as logging, monitoring, and tracing.
- Have experience with web scraping technologies and information extraction.
Benefits
- A vibrant, diverse company pushing ourselves and the technology to deliver beyond the cutting edge.
- A team of motivated characters and top minds striving to be the best at what we do at all times.
- Constantly learning and exploring new tools and technologies.
- Acting as company owners (all Vortexa staff have equity options) in a business-savvy and responsible way.
- Motivated by being collaborative, working and achieving together.
- A flexible working policy accommodating both remote and office-based working.
Data Engineer (Multiple Roles) - AI SaaS. Employer: Vortexa
Contact Detail:
Vortexa Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer (Multiple Roles) - AI SaaS role
✨Tip Number 1
Network like a pro! Reach out to people in the industry, attend meetups, and connect with Vortexa employees on LinkedIn. A personal touch can make all the difference when it comes to landing that interview.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those related to data processing and AI. This gives us a tangible way to see what you can bring to the table.
✨Tip Number 3
Prepare for the technical interview! Brush up on your coding skills in Java and Python, and be ready to discuss your experience with AWS and data pipelines. We love candidates who can demonstrate their problem-solving abilities.
✨Tip Number 4
Apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you're genuinely interested in joining our team at Vortexa.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Data Engineer role. Highlight your experience with AWS, K8s, and Airflow, and don’t forget to mention your coding skills in Java and Python. We want to see how your background aligns with what we’re looking for!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re excited about working at Vortexa and how you can contribute to our mission. Be genuine and let your personality come through – we love seeing the real you!
Showcase Your Projects: If you've worked on any relevant projects, make sure to include them! Whether it's building data pipelines or using machine learning models, we want to see your hands-on experience. This is your opportunity to demonstrate your skills in action.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way to ensure your application gets into the right hands. Plus, it shows us you’re serious about joining our team at Vortexa!
How to prepare for a job interview at Vortexa
✨Know Your Tech Stack
Make sure you’re well-versed in the technologies mentioned in the job description, like AWS, K8s, and Airflow. Brush up on your Java and Python skills, and be ready to discuss how you've used these tools in past projects.
✨Showcase Your Problem-Solving Skills
Prepare to discuss specific challenges you've faced in building data processing pipelines. Use the STAR method (Situation, Task, Action, Result) to structure your answers and highlight your ability to tackle complex problems.
✨Demonstrate Your Passion for Learning
Vortexa values a culture of continuous learning. Be ready to share examples of how you’ve kept up with industry trends or learned new technologies. This shows you're not just looking for a job, but are genuinely excited about growing in your field.
✨Engage in Technical Discussions
Expect to have technical discussions with your interviewers. Don’t shy away from asking questions about their current projects or challenges. This not only shows your interest but also helps you gauge if the company is the right fit for you.