At a Glance
- Tasks: Design and build AI workflows that power intelligent automation across our platform.
- Company: Join Command|Link, a revolutionary SaaS company transforming the IT industry.
- Benefits: Enjoy flexible time off, competitive salary, and fun events at cool locations.
- Other info: Be part of a high-growth company that values innovation and offers excellent career growth.
- Why this job: Make a real impact in AI while shaping the future of business communication.
- Qualifications: 2+ years in LLM applications, strong coding skills, and experience with complex datasets.
The predicted salary is between 60000 - 80000 £ per year.
About Command|Link
Command|Link is a global SaaS Platform providing network, voice services, and IT security solutions, helping corporations consolidate their core infrastructure into a single vendor and layering on a proprietary single pane of glass platform. Command|Link has revolutionized the IT industry by tackling the problems our competitors create. In recognition for our unprecedented innovation and dedication, Command|Link was recognised as the SD-WAN Product of the Year, ITSM Visionary Spotlight, UCaaS Product of the Year, NaaS Product of the Year, Supplier of the Year, and the AT&T Strategic Growth Partner. Command|Link has built the only IT platform for scale that solves ISP vendor sprawl and IT headaches. We make it easy for our customers to get more done, maximise uptime and improve the bottom line.
This is a 100% remote position.
About Your New Role
As an AI Engineer focused on LLM Systems, your primary mandate is to design, build, and operate the AI layer that powers intelligent automation across the CommandLink platform. You will be working at the engineering layer of agentic AI: building durable, production-grade LLM workflows on top of Temporal, implementing security and policy controls around LLM execution, and solving hard problems around prompt injection, output trust, and runtime governance in domain-specific contexts. You will work closely with Engineering and Product leads to build agentic workflows to execute deterministic workflows for context aware insights, triage, investigations and remediation into reliable, observable, and policy-compliant AI workflows. That means designing for failure, latency, and adversarial inputs from day one, not retrofitting safety controls after the fact. The space is moving fast, the problems are genuinely unsolved, and we are looking for someone who has strong opinions about how to build AI systems that are trustworthy in production.
Key Responsibilities
- Agentic workflow engineering: design and build multi-step LLM workflows using Temporal as the durable orchestration backbone; handling retries, state, parallelism, human-in-the-loop steps, and long-running agent execution.
- Domain-specific automation: work with subject matter experts to identify, scope, and implement AI-driven automation for specific business and operational domains; own the full delivery from prototype to production.
- LLM security and policy enforcement: implement runtime policy controls around LLM execution, including prompt injection mitigation, output validation, privilege separation (dual-LLM / quarantined execution patterns), and integration with policy engines.
- Parallel and live evaluation: build evaluation frameworks to assess LLM output quality in parallel with production traffic; implement continuous evals, regression detection, and automated quality gates.
- Prompt injection defense: apply and adapt state-of-the-art design patterns including the Dual LLM, Plan-Then-Execute, and Code-Then-Execute patterns to harden agent pipelines against adversarial inputs.
- Policy engine integration: integrate tools such as Sequrity.ai to define, enforce, and audit natural-language security policies over LLM tool use and execution paths.
- Observability and auditability: instrument AI workflows with full event history, structured logging of prompts and completions, cost tracking, and latency profiling making the behaviour of AI systems traceable and debuggable.
- LLM steering and control: implement output steering strategies, structured generation, constrained decoding, and fallback routing to ensure models behave within defined operational envelopes.
- Collaborate on architecture: work across the engineering team to define standards for how AI capabilities are integrated into the product setting patterns others will follow.
Essential What You’ll Need for Success
- Experience with complex and large datasets.
- 2+ years building production LLM-powered applications beyond RAG prototypes; real systems handling real failure modes.
- Hands-on experience with Temporal (or equivalent durable execution platforms such as Cadence or Conductor) for orchestrating multi-step, long-running AI workflows.
- Deep understanding of prompt injection attack vectors, mitigation strategies, and the trade-offs between defense patterns (Dual LLM, CaMeL / Code-Then-Execute, Action-Selector, context minimisation).
- Experience implementing policy controls and guardrails around LLM execution RBAC/PBAC for agents, output filtering, semantic validation, and tool-use restrictions.
- Practical experience building parallel evaluation pipelines for LLM outputs live evals, shadow scoring, regression suites, and automated quality gates.
- Strong software engineering fundamentals. You write maintainable, testable code; experience in Python and/or Go preferred.
- Familiarity with LLM APIs and inference providers (OpenAI, Anthropic, Mistral, or open-weight models via vLLM / Ollama).
- Understanding of agentic architecture patterns: tool use, multi-agent delegation, structured outputs, memory and context management.
- Experience integrating LLM systems with external tools and APIs in a secure, auditable way.
- Experience with langchain or other agentic frameworks.
Nice To Have
- Experience with dedicated policy engines for LLM security such as Sequrity.ai, LLM Guard, or equivalent TOML/rules-based policy frameworks.
- Familiarity with OWASP LLM Top 10 and NIST AI RMF compliance requirements.
- Experience with structured generation frameworks (Outlines, Instructor, Guidance) for constrained LLM outputs.
- Knowledge of chaos and adversarial testing for AI systems; red-teaming, jailbreak evaluation, and automated adversarial prompt suites.
- Experience with open-weight model deployment (vLLM, TGI, Ollama) and inference optimisation.
- Familiarity with MCP (Model Context Protocol) and other protocols for standardised agent tool integration.
- Background in security engineering, particularly application-layer threat modelling and or networking and device management.
- Takes on additional responsibilities and projects as needed to support the success of the team and organisation.
Why you’ll love life at Command|Link
- Join us at CommandLink, where you’ll have the opportunity to shape the future of business communication.
- We value the innovative spirit and seek individuals ready to bring their unique vision and expertise to a team that values bold ideas and strategic thinking.
- Are you ready to make an impact? Apply now and be the architect of your career as well as our clients’ success.
- Room to grow at a high-growth company.
- An environment that celebrates ideas and innovation.
- Your work will have a tangible impact.
- Flexible time off.
- Fun events at cool locations.
- Employee referral bonuses to encourage the addition of great new people to the team.
At CommandLink, we’re committed to creating a fair, consistent, and efficient hiring experience. As part of our process, we use AI-assisted tools to help review and analyse applications. These tools support our recruiting team by identifying qualifications and experience that align with the requirements of each role. AI tools are used only to assist in the evaluation process — they do not make final hiring decisions. Every application is reviewed by a member of our recruiting or hiring team before any decisions are made.
AI Engineer, LLM Systems & Agentic Workflows employer: CommandLink
Contact Detail:
CommandLink Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land AI Engineer, LLM Systems & Agentic Workflows
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, attend meetups, and connect with Command|Link employees on LinkedIn. A friendly chat can sometimes lead to job opportunities that aren't even advertised!
✨Tip Number 2
Show off your skills! Create a portfolio or GitHub repository showcasing your projects related to LLM systems and AI workflows. This gives potential employers a taste of what you can do and sets you apart from the crowd.
✨Tip Number 3
Prepare for interviews by diving deep into Command|Link's products and services. Understand their challenges and think about how your skills can help solve them. Tailoring your answers to their needs shows you're genuinely interested.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you’re serious about joining the Command|Link team!
We think you need these skills to ace AI Engineer, LLM Systems & Agentic Workflows
Some tips for your application 🫡
Tailor Your Application: Make sure to customise your CV and cover letter for the AI Engineer role. Highlight your experience with LLM systems and any relevant projects you've worked on. We want to see how your skills align with what we're looking for!
Showcase Your Problem-Solving Skills: In your application, share examples of how you've tackled complex problems in AI or software engineering. We love seeing candidates who can think critically and creatively about challenges, especially in the context of LLM workflows.
Be Clear and Concise: When writing your application, keep it straightforward. Use clear language and avoid jargon unless it's relevant to the role. We appreciate a well-structured application that gets straight to the point!
Apply Through Our Website: Don't forget to submit your application through our website! It’s the best way for us to receive your details and ensures you’re considered for the role. Plus, it makes the whole process smoother for everyone involved.
How to prepare for a job interview at CommandLink
✨Know Your LLMs Inside Out
Make sure you’re well-versed in the latest developments in large language models (LLMs). Brush up on your understanding of prompt injection attack vectors and mitigation strategies. Being able to discuss these topics confidently will show that you’re not just familiar with the technology, but that you can also think critically about its challenges.
✨Demonstrate Your Engineering Skills
Prepare to showcase your software engineering fundamentals. Be ready to discuss your experience with Python or Go, and how you've built maintainable, testable code in production environments. You might even want to bring examples of your work or be prepared for a coding challenge to demonstrate your skills live.
✨Familiarise Yourself with Temporal
Since this role involves orchestrating multi-step workflows using Temporal, it’s crucial to understand how it works. If you have hands-on experience with Temporal or similar platforms, be ready to share specific examples of how you’ve used them to solve complex problems in your previous projects.
✨Prepare Questions About the Role
Interviews are a two-way street! Prepare insightful questions about the company’s approach to AI security and policy enforcement. This not only shows your genuine interest in the role but also gives you a chance to assess if Command|Link is the right fit for you.