[6] Fellowship of the AI Rings:  What are the parts of an AI system?

AI tip of the week

Give your AI product an image or diagram and ask, “What insights can I derive from this?”

So far in our newsletter series, we’ve learned some of the things AI is doing well and some of the things it’s not. We’ve learned at a high level how the overall system works.

But how do all of these things fit together?  Simple:  it’s the Fellowship of the AI Rings.

No, we’re not going to talk about hobbits and dragons and elves. 

What we will identify are five interconnected “rings” of an AI system, and talk about how they work together:

1) Model Creation/Training/Selection
2) User Interaction
3) Guardrails
4) Contextual Enrichment (e.g. RAG)
5) Agentification

1) Model Creation/Training/Selection

The first ring is the model itself.   We talked about this one two weeks ago (in “What is AI, anyway?”). 

All AI systems have a model at their core – a set of algorithms trained on some sort of data.  It’s sort of like the brains of the whole thing.

There are many types of models – each designed for things like understanding language, recognizing patterns, or making predictions.

What we’re trying to build helps guide the choices we make during model training:

  • What data are we using?
  • Is it diverse and unbiased?
  • How well does it represent “the world”?
  • What kind of model architecture should we use?

There are other choices we need to make at this stage – again, depending on what we’re trying to do. If we want to build a medical diagnostic system, we’re going to want to prioritize accuracy. If we want to generate marketing copy, or spark ideas for new artwork, creativity would be a much higher priority for us.
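
One concrete knob here: most text-generation APIs expose a sampling “temperature,” where lower values make output more focused and repeatable, and higher values make it more varied. A minimal sketch – the `generate` function below is a hypothetical stand-in, not any particular provider’s API:

```python
# A toy sketch of the accuracy-vs-creativity dial. `generate` is a
# hypothetical stand-in for a real model API; most providers expose a
# similar "temperature" parameter (low = focused, high = varied).

def generate(prompt: str, temperature: float) -> str:
    # Swap in a real client call to your model provider here.
    return f"[model response to {prompt!r} at temperature={temperature}]"

# Medical-style question: keep output conservative and repeatable.
diagnosis = generate("List standard screening tests for type 2 diabetes.",
                     temperature=0.0)

# Marketing copy: allow more varied, surprising word choices.
tagline = generate("Write five playful taglines for a cat-sized unicorn saddle.",
                   temperature=1.0)
```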

Especially with how quickly the AI space is moving, we can’t just make this decision once and then forget about it. As we learn more about our system’s users – and the sorts of problems they’re seeking to solve with our system – we’re going to want to update and improve this ring. We might need to fine-tune our model with updated data, or even retrain it.

The other thing is that most of us (including the two of us) don’t often – and shouldn’t often – touch model creation. Oftentimes, our role is more about selecting the model best suited to our users’ needs – considering things like cost, accuracy, performance, and scalability. Someone has to hook the model up to the rest of the system, so we’d do well to look at how easy the model will be to integrate, too.

2) User Interaction

Next up is the second ring – how our users are going to interact with the model. 

We usually do this with APIs or user interfaces:  our system takes a user’s input—a question, a command, or data—and provides a response.

Does the system respond in a way that the user can understand? Does our system say what it can and can’t do? A responsible AI system ensures that users not only get answers but also know how those answers were derived.

(Side note:  there are LOTS of roles for great UX designers in this stage – how can we make the UX intuitive, accessible, and transparent so that users feel empowered rather than overwhelmed?)

Transparency is key here. Users should know what the system can do, what it can’t, and what assumptions it makes – e.g. saying when our system is uncertain about an answer. Error messages, confidence scores, and source citations can all help.
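
Here’s a tiny sketch of what that can look like in code – the `Answer` shape below is invented for illustration, and isn’t any particular product’s API:

```python
from dataclasses import dataclass, field

# A hypothetical response shape that bakes transparency in:
# the answer travels with its confidence and its sources.
@dataclass
class Answer:
    text: str
    confidence: float                  # 0.0-1.0, from the model or a verifier
    sources: list[str] = field(default_factory=list)

def render(answer: Answer) -> str:
    lines = [answer.text]
    if answer.confidence < 0.5:
        lines.append("(Heads up: I'm not confident in this answer.)")
    if answer.sources:
        lines.append("Sources: " + ", ".join(answer.sources))
    return "\n".join(lines)

print(render(Answer("Paris is the capital of France.", 0.98,
                    ["https://en.wikipedia.org/wiki/Paris"])))
```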

3) Guardrails

It’s almost never a good idea, however, to just take a user prompt and hand it directly to a model. Nor do we REALLY want to take the raw output from our model and hand it directly back to the user.

We’re going to want to set up a way to reject harmful prompts, steer clear of confusing outputs, and ensure that even non-PhDs like us can use the system. 

We want a healthcare assistant to limit its responses to provide only factual information – and avoid speculative advice. An “art assistant” might do the opposite.  (Yes, cats CAN ride on unicorns.)

We put guardrails on our system to help ensure the AI behaves responsibly, even under challenging circumstances.

For inputs, this might mean filtering prompts for inappropriate, biased, or harmful content before they reach the model. A moderation filter would stop our model from responding to prompts promoting hate speech or violence.

For outputs,  guardrails include things like bias detection, safety filters, and explainability tools. A good system doesn’t just generate responses—it evaluates them for quality, fairness, and appropriateness.
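
To make that in-and-out pattern concrete, here’s a deliberately tiny sketch. Real guardrails use trained moderation models rather than keyword lists – the check below is only there to show where the two screening points sit:

```python
# A toy guardrail wrapper: screen the prompt on the way in,
# screen the response on the way out.

BLOCKED_TOPICS = {"hate speech", "violence"}   # illustrative only

def is_allowed(text: str) -> bool:
    lowered = text.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def guarded_generate(prompt: str, model) -> str:
    if not is_allowed(prompt):           # input guardrail
        return "Sorry, I can't help with that request."
    response = model(prompt)
    if not is_allowed(response):         # output guardrail
        return "I generated something I shouldn't share. Try rephrasing?"
    return response

# Works with any callable model, e.g.:
# guarded_generate("Can cats ride unicorns?", my_model)
```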

Setting up feedback loops is important here.  We’re going to want to monitor our system, and prioritize customer feedback.

4) Contextual Enrichment (e.g. RAG)

As we’ve already discussed, there are many things that AI, and especially Gen AI, does NOT do well.

We know that its responses often may not be 100% factual.

It might be out of date (since it doesn’t “know” anything about events and facts that came after its training data).

Most models aren’t designed to be precise with numerical calculations.

Their output may seem so “truthy” on thorny or nuanced issues that you’d really need to be an expert to catch the errors.

This is where we can “phone a friend” and bring in (or prioritize) additional, more relevant data or tools to make the system’s responses more accurate and relevant.

Much of the time, we’ll use something called “Retrieval-Augmented Generation” (or RAG), so that our system can pull in real-time or domain-specific information on top of the model.

If we do this right, we can evolve the responses from our static model into responses that stay relevant over time. We can do this by integrating APIs, knowledge graphs, or user profiles.

An example of this could be having an AI “legal assistant” retrieve and prioritize case precedents from an internal database.
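
Here’s a bare-bones sketch of that retrieve-then-generate loop. The `search` and `generate` functions are hypothetical stand-ins – in practice, a vector database and your model provider’s API:

```python
# A minimal RAG loop: retrieve relevant documents, then hand them
# to the model alongside the user's question.

def search(query: str, top_k: int = 3) -> list[str]:
    # Stand-in for a vector-database lookup over, say, case precedents.
    return ["Precedent A ...", "Precedent B ...", "Precedent C ..."][:top_k]

def generate(prompt: str) -> str:
    return f"[model response to {prompt!r}]"   # stand-in for a model call

def answer_with_rag(question: str) -> str:
    context = "\n".join(search(question))
    prompt = (
        "Answer using ONLY the context below. "
        "If the context doesn't cover it, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)

print(answer_with_rag("Which precedents apply to trademark parody?"))
```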

There are other ways to do contextual enrichment, beyond RAG. We can also integrate our system with APIs for real-time information (e.g. stock prices or weather), or leverage knowledge graphs (which map the relationships between entities).

We can also use info from user profiles to add context, and tailor responses to the person asking. This could be something like a fitness app that recommends workouts according to a user’s exercise history and goals.
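
As a sketch – the profile fields and the `generate` stand-in below are made up for illustration:

```python
# Fold what we know about the user into the prompt before it
# reaches the model.

profile = {
    "name": "Sam",
    "history": "three 5k runs last week",
    "goal": "train for a 10k",
}

def generate(prompt: str) -> str:
    return f"[model response to {prompt!r}]"   # stand-in for a model call

prompt = (
    f"User: {profile['name']}. Recent activity: {profile['history']}. "
    f"Goal: {profile['goal']}.\n"
    "Recommend this week's workouts."
)
print(generate(prompt))
```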

5) Agentification

We mentioned that there are five rings.  The fifth ring is agentification.  We’re so excited about this one, we’re going to hold off and write a post just about it.  So – wait for it!

Thanks for reading, friends!  Catch you next week!

✨Dona & Jeremiah ✨

For previous words, see our archive here: https://3rdrodeoai.com/words/