Adding Guardrails to Your Swift AI App: What They Are and How to Use Them
Learn how to implement input and output guardrails in your Swift project to validate user prompts and LLM responses, ensuring your AI-powered app stays safe, fast, and reliable.
There are two major challenges when building AI-powered apps, especially ones where users send freeform text prompts to a Large Language Model (LLM):
1. Natural Language Input
We need a way to monitor what users are typing to ensure it aligns with the app’s expectations. Sometimes that just means checking that the input isn’t toxic or inappropriate (especially important in social or professional apps). But it can also mean enforcing stricter rules. Take an LLM-powered HR tool, for example. You might want to prevent users from accidentally sending personally identifiable information (PII) to a third-party model hosted by OpenAI or Google; once that data goes to an external API, it’s out of your hands. A simple PII check is sketched just after this list.
2. Model Output
LLMs are non-deterministic: ask them the same question twice and you might get two different answers. So how do we ensure the output is what we expect? Say your app asks the model to respond in haiku form. You’ll want to validate that the output is actually a haiku, not a paragraph, not a limerick, just a haiku; a basic format check is sketched below as well.
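To make the first challenge concrete, here is a minimal sketch of an input guardrail that rejects a prompt containing obvious PII, in this case an email address, before it ever reaches the model. The `GuardrailError` type and `checkInput` function are illustrative names, not part of any SDK.

```swift
import Foundation

/// Thrown when a prompt fails validation before it is sent to the model.
enum GuardrailError: Error {
    case containsPII(String)
}

/// A minimal input guardrail: reject prompts that contain an email address.
/// A real app would check for more PII patterns (phone numbers, IDs, and so on).
func checkInput(_ prompt: String) throws {
    let emailPattern = #"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"#
    if prompt.range(of: emailPattern, options: .regularExpression) != nil {
        throw GuardrailError.containsPII("The prompt appears to contain an email address.")
    }
}

// Usage: validate before calling the LLM.
do {
    try checkInput("Summarize the performance review for jane.doe@example.com")
} catch GuardrailError.containsPII(let reason) {
    print("Blocked: \(reason)")
} catch {
    print("Unexpected error: \(error)")
}
```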
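And for the second challenge, here is an equally rough sketch of an output guardrail that checks whether the model’s response at least has the three-line shape of a haiku. Counting syllables reliably is harder, so this only validates the line count; again, the names are illustrative.

```swift
import Foundation

/// Thrown when the model's response does not match the expected format.
enum OutputGuardrailError: Error {
    case notAHaiku(lineCount: Int)
}

/// A minimal output guardrail: a haiku should have exactly three non-empty lines.
/// (Validating the 5-7-5 syllable pattern would require a syllable counter.)
func checkHaiku(_ response: String) throws {
    let lines = response
        .split(separator: "\n")
        .map { $0.trimmingCharacters(in: .whitespaces) }
        .filter { !$0.isEmpty }
    guard lines.count == 3 else {
        throw OutputGuardrailError.notAHaiku(lineCount: lines.count)
    }
}

// Usage: validate the response before showing it to the user,
// and retry the request if validation fails.
let response = "An old silent pond\nA frog jumps into the pond\nSplash! Silence again."
do {
    try checkHaiku(response)
    print("Looks like a haiku.")
} catch OutputGuardrailError.notAHaiku(let count) {
    print("Expected 3 lines, got \(count); retrying.")
} catch {
    print("Unexpected error: \(error)")
}
```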
Enter Guardrails
That’s where guardrails come in. As described in OpenAI’s Agents SDK documentation, guardrails run alongside your model calls, checking and validating both the user’s input and the model’s output.
So… how do we add Guardrails in Swift?
Let’s walk through just how simple it is to implement these guardrails in a Swift-based AI workflow.
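Before diving into the details, here is one possible shape for how the two kinds of checks can wrap an LLM call. The `runWithGuardrails` function and its parameters are placeholders for illustration, with `callModel` standing in for whatever LLM client your app actually uses.

```swift
import Foundation

/// Run a prompt through input guardrails, call the model, then validate the response.
/// `callModel` is a stand-in for the real LLM client; each check throws if its text is invalid.
func runWithGuardrails(
    prompt: String,
    inputChecks: [(String) throws -> Void],
    outputChecks: [(String) throws -> Void],
    callModel: (String) async throws -> String
) async throws -> String {
    // 1. Validate the user's prompt before it leaves the device.
    for check in inputChecks {
        try check(prompt)
    }

    // 2. Call the LLM.
    let response = try await callModel(prompt)

    // 3. Validate the model's output before showing it to the user.
    for check in outputChecks {
        try check(response)
    }

    return response
}
```

Input checks could also run concurrently with the model call to avoid adding latency; running them up front keeps the sketch simple.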