At a Glance
- Tasks: Design and build infrastructure for data pipelines and ML models in a fast-paced environment.
- Company: Vortexa, a pioneering tech company transforming the energy industry with AI and satellite data.
- Benefits: Flexible working policies, health insurance, and opportunities for professional growth.
- Other info: Collaborate with top industry minds in an innovative and diverse workplace.
- Why this job: Join a dynamic team tackling real-world challenges with cutting-edge technology.
- Qualifications: Fluent in Python, experienced in scalable data processing and backend systems.
The predicted salary is between £36,000 and £60,000 per year.
About Vortexa
Vortexa is a fast-growing international technology business founded to close the immense information gap in the energy industry. By combining massive amounts of new satellite data with pioneering work in artificial intelligence, Vortexa creates an unprecedented real‑time view of global seaborne energy flows, bringing transparency and efficiency to the energy markets and society as a whole.
Processing thousands of rich data points per second from many vastly different external sources is no small feat of science and engineering. It means moving terabytes of data while processing it in real‑time, running complex prediction and forecasting AI models, coupling their output into a hybrid human‑machine data refinement process, and presenting the result through a nimble low‑latency SaaS solution used by customers around the globe. These models must survive the scrutiny of industry experts, data analysts and traders, with the performance, stability, latency and agility required of a fast‑moving startup influencing multi-million-dollar transactions.
The Data Platform Team is responsible for all of Vortexa’s data. The team’s ownership ranges from raw satellite AIS/imaging data to unstructured textual and graphical maritime data like fixtures, port lineups, and customs filings. The team is also responsible for highly structured datasets such as price and supply‑demand forecasts as well as modeling the global energy flows and tanker fleet distributions.
The team has built a variety of procedural, statistical and machine learning models that enable us to provide the most accurate and comprehensive view of energy flows. We take pride in applying cutting‑edge research to real‑world problems in a robust, long‑lasting and maintainable way. The quality of our data is continuously benchmarked by experienced in‑house market and data analysts to ensure the accuracy of our predictions.
About the Role
You’ll be instrumental in designing and building the infrastructure and applications that accelerate the design, deployment, and benchmarking of existing and new pipelines and ML models. Working with software and data engineers, data scientists and market analysts, you’ll help bridge the gap between scientific experiments and commercial products by ensuring 100% uptime and bulletproof fault‑tolerance of every component of the team’s data pipelines.
Key Responsibilities
- Design, build, operate and benchmark infrastructure and applications for data pipelines and ML models.
- Collaborate with software and data engineers, data scientists and market analysts to deliver production‑ready solutions.
- Ensure 100% uptime and fault‑tolerance for all data pipeline components.
Requirements
- Fluent in Python and software engineering fundamentals, comfortable with highly scalable data processing libraries.
- Strong expertise in distributed systems, micro‑service architectures and scalable data processing pipelines.
- Experience building distributed heavy‑load backend systems that process terabytes of data daily.
- Deep experience of the full software development life cycle (SDLC), including technical design, coding standards, code review, source control, build, test, deploy, and operations.
- Passionate about coaching developers, helping them improve their skills and grow their careers.
Nice to Have
- Experience in Rust / Java / Kotlin.
- Experience with AWS, Apache Kafka, Kafka Streams, Apache Beam / Flink / Spark, deployment, monitoring & debugging.
- Productisation of Machine Learning research projects.
- Familiarity with Airflow or other workflow orchestration tools, Kubernetes.
- Knowledge of data lake systems and query engines such as AWS Athena, and file formats like Parquet and ORC.
- Relevant AWS or Kafka certifications.
- Team‑oriented mindset, motivated to collaborate and achieve together.
Benefits
- Flexibility for remote and home working; regular staff events.
- Private health insurance via Vitality.
- Global Volunteering Policy.
- Equity options, so employees can act as company owners in a business‑savvy manner.
Job Details
- Seniority level: Entry level
- Employment type: Full‑time
- Job function: Engineering and Information Technology
- Industries: IT Services and IT Consulting
Software Engineer - Python | Employer: Vortexa
Contact: Vortexa Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Software Engineer - Python role
✨Tip Number 1
Network like a pro! Reach out to people in the industry, attend meetups, and connect with Vortexa employees on LinkedIn. A personal connection can make all the difference when it comes to landing that interview.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your Python projects, especially those involving data processing or machine learning. This will give you an edge and demonstrate your hands-on experience to potential employers.
✨Tip Number 3
Prepare for technical interviews by brushing up on distributed systems and scalable data pipelines. Practice coding challenges and be ready to discuss your past projects in detail. We want to see how you think and solve problems!
✨Tip Number 4
Apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you're genuinely interested in joining the Vortexa team and contributing to their mission.
Some tips for your application 🫡
Show Your Python Skills: Make sure to highlight your Python expertise in your application. We want to see how you've used it in real-world projects, especially in scalable data processing or backend systems.
Tailor Your Experience: Don’t just send a generic CV! Tailor your experience to match the job description. We love seeing how your background aligns with our needs, especially in distributed systems and microservices.
Be Clear and Concise: Keep your application clear and to the point. We appreciate straightforward communication, so make sure your skills and experiences shine without unnecessary fluff.
Apply Through Our Website: We encourage you to apply through our website for the best chance of getting noticed. It’s the easiest way for us to keep track of your application and ensure it reaches the right team!
How to prepare for a job interview at Vortexa
✨Know Your Python Inside Out
Make sure you brush up on your Python skills before the interview. Vortexa is looking for someone fluent in Python, so be prepared to discuss your experience with scalable data processing libraries and how you've used Python in past projects.
✨Understand Distributed Systems
Familiarise yourself with distributed systems and microservices. Be ready to explain how you've built scalable data pipelines and the challenges you've faced. This will show that you can handle the engineering challenges Vortexa presents.
✨Showcase Your Problem-Solving Skills
Vortexa values innovative thinking, so come prepared with examples of how you've tackled complex problems in previous roles. Discuss any experience you have with machine learning models and how you've applied them to real-world scenarios.
✨Be Ready to Collaborate
Since you'll be working closely with software engineers, data scientists, and market analysts, demonstrate your ability to collaborate effectively. Share experiences where teamwork led to successful outcomes, and express your enthusiasm for learning from others in a diverse environment.