At a Glance
- Tasks: Build and maintain data pipelines for AI and analytics in a remote role.
- Company: Join a leading consulting firm in the Financial Services industry.
- Benefits: Earn $25-30/hour with flexible remote work and potential contract extension.
- Why this job: Make an impact by ensuring high-quality data drives decision-making.
- Qualifications: Experience in data pipelines, Salesforce projects, and strong SQL/Python skills.
- Other info: Collaborate with teams to streamline operations and enhance data governance.
The predicted salary is between £20 and £24 per hour.
We are hiring 2 Salesforce Data Engineers for one of our Consulting clients in the Financial Services industry. This is a remote contract and can be based from any nearshore/LATAM location - the customer is based in the US and working hours will be EST. The contract is 40 hours/week starting from the beginning of March for at least 2 months, with potential for extension - paid at an hourly rate of $25-30/h. Professional English proficiency is required.
This person will build and maintain the systems that power AI, analytics, and data-driven decision-making. This role focuses on creating and orchestrating efficient data pipelines, organizing data for scale, and ensuring data is clean, secure, and ready for use across the business. The work supports BI, Operations, System Integrations, and AI practices by ensuring high-quality data is consistently available.
Primary Responsibilities
- Design, build, and maintain data pipelines that support efficient collection, ingestion, storage, and processing.
- Implement modern data architectures such as data lakes, data warehouses, lakehouses, and data mesh platforms.
- Develop streaming data flows for near-real-time and low-latency use cases.
- Clean and prepare data to support analytics, reporting, and AI model readiness.
- Improve performance and reliability across data systems.
- Apply data governance and security best practices to safeguard customer information.
- Partner with technical and business teams to understand requirements and deliver effective solutions.
- Identify opportunities to streamline operations and reduce cost through smarter data design.
- Monitor and resolve issues to maintain dependable, resilient data operations.
Required Skills and Experience
- Experience building and maintaining data pipelines and scalable ETL/ELT frameworks.
- Experience in Salesforce projects handling data migrations and integrations within the platform.
- Strong foundation in relational and non-relational data systems.
- Strong data modeling skills.
- Working knowledge of data lake, data warehouse, and lakehouse patterns.
- Hands-on experience with both batch and streaming data pipelines.
- Proficiency in SQL, Python, and modern data engineering tools and libraries, such as Pandas.
- Ability to design structured, scalable solutions for analytics and AI preparation.
- Familiarity with cloud platforms and distributed processing frameworks.
- Clear, concise communication skills.
- Experience with Databricks, Snowflake, Microsoft Synapse, Fabric, AWS Glue, DMS, or similar data platforms and technologies.
- Experience with Open Data platforms and tools, such as Apache Spark, Airflow, Delta Lake, or Iceberg.
- Background supporting data migrations, API integrations, and machine learning or AI data requirements.
- Understanding of data governance, lineage, and secure data practices.
- Exposure to a data product mindset and domain-oriented or data mesh approaches.
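To give a flavour of the day-to-day work the qualifications above describe, here is a minimal, illustrative Pandas sketch of a batch cleaning step in an ETL pipeline: deduplicating records, normalising text fields, and coercing bad values so data is analytics-ready. The column names and sample data are hypothetical, not taken from this posting.

```python
import pandas as pd

# Hypothetical Salesforce contact export; columns are illustrative only.
raw = pd.DataFrame({
    "Id": ["003A", "003B", "003B", "003C"],
    "Email": ["a@x.com", "b@x.com ", "b@x.com", None],
    "AnnualRevenue": ["1000", "2500", "2500", "oops"],
})

clean = (
    raw
    # Normalise free-text fields before comparing or joining on them.
    .assign(Email=raw["Email"].str.strip().str.lower())
    # Drop duplicate records, keeping the first occurrence per Id.
    .drop_duplicates(subset="Id", keep="first")
    # Coerce non-numeric revenue values to NaN rather than failing the load.
    .assign(AnnualRevenue=lambda d: pd.to_numeric(d["AnnualRevenue"],
                                                  errors="coerce"))
    # Require an email address for downstream reporting.
    .dropna(subset=["Email"])
)

print(clean)
```

In a real engagement this kind of logic would typically live inside an orchestrated pipeline (e.g. Airflow or AWS Glue) rather than a standalone script.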
Reach out to learn more!
Salesforce Data Engineer LATAM in London employer: Stott and May
Contact Detail:
Stott and May Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land Salesforce Data Engineer LATAM in London
✨Tip Number 1
Network like a pro! Connect with folks in the industry on LinkedIn, join relevant groups, and don’t be shy to reach out for informational chats. You never know who might have the inside scoop on job openings!
✨Tip Number 2
Show off your skills! Create a portfolio or GitHub repository showcasing your data pipelines and projects. This gives potential employers a taste of what you can do and sets you apart from the crowd.
✨Tip Number 3
Prepare for interviews by brushing up on common data engineering questions and scenarios. Practice explaining your past projects and how you tackled challenges – it’s all about demonstrating your problem-solving skills!
✨Tip Number 4
Don’t forget to apply through our website! We’ve got loads of opportunities that might just be the perfect fit for you. Plus, it’s a great way to get noticed by hiring managers directly.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Salesforce Data Engineer role. Highlight your experience with data pipelines, ETL/ELT frameworks, and any relevant projects you've worked on. We want to see how your skills match what we're looking for!
Showcase Your Skills: In your application, don't just list your skills—show us how you've used them! Whether it's SQL, Python, or working with data lakes, give us examples of how you've applied these in real-world scenarios. This helps us understand your hands-on experience.
Keep It Clear and Concise: When writing your application, clarity is key! Use straightforward language and avoid jargon unless it's necessary. We appreciate a well-structured application that gets straight to the point, making it easier for us to see your potential.
Apply Through Our Website: We encourage you to apply through our website for the best chance of getting noticed. It streamlines the process for us and ensures your application lands in the right hands. Plus, it’s super easy to do!
How to prepare for a job interview at Stott and May
✨Know Your Data Pipelines
Make sure you can talk confidently about your experience with building and maintaining data pipelines. Be ready to discuss specific projects where you've implemented ETL/ELT frameworks, as well as any challenges you faced and how you overcame them.
✨Brush Up on Your Tools
Familiarise yourself with the tools mentioned in the job description, like Databricks, Snowflake, and Apache Spark. If you’ve used them before, prepare examples of how you applied these technologies to solve real-world problems.
✨Showcase Your Communication Skills
Since clear communication is key in this role, practice explaining complex data concepts in simple terms. Think of scenarios where you had to collaborate with technical and non-technical teams and how you ensured everyone was on the same page.
✨Prepare for Scenario Questions
Expect questions that ask you to solve hypothetical problems related to data governance or pipeline performance. Prepare by thinking through your approach to ensuring data quality and security, and be ready to share your thought process during the interview.