At a Glance
- Tasks: Design and optimise AI experiences using cutting-edge Large Language Models on Microsoft Azure.
- Company: Join a forward-thinking tech company at the forefront of AI innovation.
- Benefits: Enjoy a competitive salary, health benefits, and flexible hybrid working options.
- Why this job: Be part of a revolutionary field and shape the future of AI technology.
- Qualifications: Experience with prompt engineering and a solid understanding of LLM behaviour is essential.
- Other info: Collaborative environment with opportunities for continuous learning and career advancement.
The predicted salary is between £36,000 and £60,000 per year.
Work Location: London, Tunbridge Wells, Ipswich, Bolton
Role type: Permanent/Fixed Term/Contracting
Mode of working: Hybrid / office-based (3 days in office)
Number of positions: 1
Duration of assignment: 6 months
The Role:
As a Prompt Engineer, you will design, implement, and optimise conversational and generative AI experiences powered by Large Language Models (LLMs) on Microsoft Azure. You will craft robust prompt strategies (system prompts, few-/zero-shot prompts, tool-use instructions), implement prompt chaining for multi-step reasoning, and integrate model outputs into enterprise applications via secure APIs. You will collaborate closely with product owners, solution architects, data engineers, and application developers to translate business objectives into high-quality AI outcomes. A working understanding of Retrieval-Augmented Generation (RAG) is essential to ground model responses in authoritative enterprise content and to reduce hallucinations. This role blends hands-on engineering with rigorous experimentation, evaluation, and continuous improvement.
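The prompt strategies mentioned above are easiest to picture in the role-based message format used by Azure OpenAI's chat completions API. A minimal sketch, assuming that standard schema; the helper function, example pairs, and wording are illustrative, not part of this role:

```python
# Sketch: assembling a system prompt plus few-shot examples in the
# chat-completions message format. All content strings are illustrative
# assumptions; in production the list would be sent via the Azure OpenAI SDK.

def build_messages(system_prompt, few_shot_examples, user_input):
    """Return a messages list: system role, then few-shot pairs, then the user turn."""
    messages = [{"role": "system", "content": system_prompt}]
    for question, answer in few_shot_examples:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": user_input})
    return messages

messages = build_messages(
    system_prompt="You are a concise support assistant. Answer only from the provided context.",
    few_shot_examples=[("How do I reset my password?", "Use 'Forgot password' on the sign-in page.")],
    user_input="How do I change my email address?",
)
```

Prompt chaining then amounts to feeding one call's output into the next call's messages, with validation between steps.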
Your responsibilities:
- Author, test, and refine system, developer, and user prompts to achieve reliable, safe, and consistent outputs.
- Implement prompt chaining and multi-turn orchestration patterns for complex workflows (reasoning, planning, tool use, and validation).
- Build LLM-powered features on Azure (e.g., Azure OpenAI, Azure Functions).
- Use and manage RESTful APIs/SDKs to integrate model calls within web services, back-end jobs, and enterprise applications.
- Design and implement RAG pipelines (chunking, embeddings, indexing, ranking/citation policies) to ground responses in approved content stores.
- Establish offline/online evaluation frameworks (accuracy, safety, faithfulness, latency, cost), create test datasets, and run A/B or canary experiments.
- Monitor production behaviour, analyse conversations, and iterate on prompts and retrieval strategies to improve outcomes.
- Enforce content safety, PII handling, data privacy, and role-based access; follow Responsible AI practices and organizational guardrails.
- Partner with architects and engineers to define LLM interfaces, token/cost budgets, and observability.
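To make the RAG responsibility above concrete, here is a toy sketch of two pipeline stages: overlapping chunking, and retrieval with a naive word-overlap scorer standing in for an embedding index (in practice this would be Azure AI Search or a vector database). Chunk sizes, helper names, and the scoring rule are illustrative assumptions:

```python
# Toy RAG pipeline stages: chunk documents with overlap, then retrieve the
# top-k chunks for a query. The word-overlap score is a stand-in for
# embedding similarity; sizes and names are illustrative assumptions.

def chunk(text, size=200, overlap=50):
    """Split text into overlapping character chunks ready for indexing."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def retrieve(query, chunks, k=2):
    """Rank chunks by shared-word count with the query (embedding stand-in)."""
    query_words = set(query.lower().split())
    scored = sorted(chunks, key=lambda c: len(query_words & set(c.lower().split())), reverse=True)
    return scored[:k]
```

The retrieved chunks are then placed into the prompt as grounding context, with a citation policy so answers can point back to source passages.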
Your Profile:
Essential skills/knowledge/experience:
- Hands-on experience crafting prompts (system role design, few-/zero-shot, tool-use instructions) and prompt chaining for multi-step tasks.
- Strong understanding of LLM behaviour (context windows, tokens, temperature/top-p, function/tool calling, safety filters).
- Understanding of prompt injection and other security aspects of LLM applications.
- Practical experience deploying LLM solutions on Azure (e.g., Azure OpenAI, Azure Functions, App Service, Key Vault).
- Proficiency with REST APIs and JSON; integrating LLM calls into applications/services using Python or C# (Node.js also acceptable).
- Working knowledge of embeddings, document chunking strategies, indexing, semantic search, and citation/grounding techniques.
- Experience with vector databases (e.g., Azure Cosmos DB vector search, Redis Enterprise, Pinecone) and reranking strategies.
- Experience with Git and CI/CD (Azure DevOps or GitHub), unit/integration testing for LLM pipelines, and environment/config management.
- Ability to measure and optimize latency, throughput, and cost (token budgeting, caching, retries, and fallbacks).
- Exposure to conversation design, guardrail UX, human-in-the-loop review workflows, and prompt libraries/pattern catalogs.
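The cost and latency controls listed above (caching, retries, fallbacks) can be sketched as follows. `call_model` is a stub standing in for a real Azure OpenAI request, and the deployment names are illustrative assumptions:

```python
# Sketch of cost/latency controls: serve repeated prompts from a cache,
# retry transient failures, and fall back to a cheaper deployment.
# `call_model` is a stub; deployment names are illustrative assumptions.

import hashlib

_cache = {}

def call_model(prompt, deployment):
    """Stub standing in for a real Azure OpenAI chat-completions request."""
    return f"[{deployment}] answer to: {prompt}"

def cached_completion(prompt, deployments=("gpt-4o", "gpt-4o-mini"), retries=2):
    """Serve repeats from cache; retry each deployment, then fall back to the next."""
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key in _cache:
        return _cache[key]
    for deployment in deployments:
        for _ in range(retries):
            try:
                result = call_model(prompt, deployment)
                _cache[key] = result
                return result
            except Exception:
                continue
    raise RuntimeError("All deployments failed")
```

Hashing the prompt keeps cache keys compact; in production the cache would also respect a TTL and exclude prompts containing user-specific data.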
Desirable skills/knowledge/experience:
- Document processing/ETL skills to prepare high-quality corpora for retrieval grounding.
- Familiarity with LLM evaluation frameworks, prompt-quality metrics, red-teaming, and hallucination/safety monitoring.
- Knowledge of MLOps patterns, experiment tracking, feature stores, and observability (logging, tracing, metrics) for LLM apps.
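In the spirit of the evaluation and hallucination-monitoring points above, a crude offline groundedness check can flag answers whose content words never appear in the retrieved context. This is only a proxy; real pipelines typically use LLM-as-judge or NLI-based faithfulness metrics. The threshold and word-length cutoff are illustrative assumptions:

```python
# Crude offline groundedness check: what fraction of an answer's content
# words appear in the retrieved context? A stand-in for proper faithfulness
# metrics; threshold and cutoff values are illustrative assumptions.

def groundedness(answer, context):
    """Share of answer words (longer than 3 chars) that also appear in the context."""
    answer_words = [w for w in answer.lower().split() if len(w) > 3]
    context_words = set(context.lower().split())
    if not answer_words:
        return 1.0
    return sum(w in context_words for w in answer_words) / len(answer_words)

def evaluate(cases, threshold=0.6):
    """Return the pass rate of (answer, context) cases against the threshold."""
    passed = [groundedness(answer, context) >= threshold for answer, context in cases]
    return sum(passed) / len(passed)
```

Run over a fixed test dataset, a metric like this gives a regression signal each time prompts or retrieval settings change.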
Prompt Engineer employer: Infoplus Technologies UK Ltd
Contact Detail:
Infoplus Technologies UK Ltd Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Prompt Engineer role
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, attend meetups, and connect with potential colleagues on LinkedIn. You never know who might have the inside scoop on job openings or can put in a good word for you.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your prompt engineering projects, especially those involving LLMs and Azure. This will give you an edge and demonstrate your hands-on experience to potential employers.
✨Tip Number 3
Prepare for interviews by brushing up on common questions related to prompt engineering and LLMs. Practice explaining your thought process when crafting prompts and integrating APIs, as this will help you stand out during technical discussions.
✨Tip Number 4
Don't forget to apply through our website! We often have exclusive listings and opportunities that you won't find elsewhere. Plus, it shows you're genuinely interested in joining our team at StudySmarter.
Some tips for your application 🫡
Tailor Your Application: Make sure to customise your CV and cover letter for the Prompt Engineer role. Highlight your experience with LLMs, prompt crafting, and any relevant projects you've worked on. We want to see how your skills align with what we're looking for!
Showcase Your Technical Skills: Don’t hold back on showcasing your technical prowess! Mention your hands-on experience with Azure, REST APIs, and any programming languages like Python or C#. We love seeing practical examples of your work, so feel free to include links to projects or GitHub repos.
Be Clear and Concise: When writing your application, clarity is key. Use straightforward language and avoid jargon unless it’s relevant to the role. We appreciate a well-structured application that gets straight to the point while still showing off your personality!
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way to ensure your application gets into the right hands. Plus, you’ll find all the details about the role and our company culture there!
How to prepare for a job interview at Infoplus Technologies UK Ltd
✨Know Your Prompts
Before the interview, brush up on your experience with crafting prompts and prompt chaining. Be ready to discuss specific examples of how you've designed prompts for multi-step tasks and the outcomes you achieved. This will show your hands-on expertise and understanding of LLM behaviour.
✨Showcase Your Technical Skills
Make sure you can talk confidently about your experience with Azure and REST APIs. Prepare to explain how you've integrated LLM calls into applications using Python or C#. Highlight any projects where you’ve implemented RAG pipelines or worked with vector databases, as this will demonstrate your technical prowess.
✨Understand the Business Context
Familiarise yourself with the company's objectives and how your role as a Prompt Engineer fits into their vision. Be prepared to discuss how you would translate business needs into high-quality AI outcomes, showcasing your ability to collaborate with product owners and solution architects.
✨Prepare for Scenario Questions
Expect scenario-based questions that test your problem-solving skills. Think about how you would handle issues like prompt injection or data privacy concerns. Practising these scenarios will help you articulate your thought process and demonstrate your commitment to responsible AI practices.