Microcontroller Tips

Microcontroller engineering resources, new microcontroller products and electronics engineering news

When should you use RAG, TAG, and RAFT AI?

August 20, 2025 By Jeff Shepard

Retrieval-augmented generation (RAG) and table-augmented generation (TAG) are both techniques to improve the ability of artificial intelligence (AI) to generate accurate and relevant information by leveraging external data. Other choices include retrieval-augmented fine-tuning (RAFT) and retrieval-centric generation (RCG).

Understanding when to use RAG, TAG, RAFT, and RCG can be crucial to successful and efficient AI implementations. All are focused on improving the performance of large language models (LLMs). LLMs generate responses based on training data that may be outdated or incomplete. RAG, TAG, RAFT, and RCG are ways to address those limitations.

RAG is focused on retrieving and incorporating information from unstructured data sources like documents and web pages. TAG is focused on querying and leveraging structured data from databases.

RAG begins by adding new information beyond the original training dataset, usually gathered from external sources. When a query is submitted, it is converted into a vector representation, just as in a standard LLM embedding pipeline. That query vector is then matched against the stored vectors in the knowledge database.

LLMs are inherently non-deterministic and may not produce the same output twice for a given query, so prompt engineering is needed to produce consistent responses. In RAG, prompt engineering incorporates the relevant external data into the prompt, enhancing the model’s contextual understanding with the goal of producing more detailed and (hopefully) insightful responses.

A key to optimal RAG operation is keeping the external database as current as possible with periodic updates. That enables the system to deliver the most relevant responses, even in the absence of time-consuming and expensive training updates (Figure 1).

Figure 1. Flow chart of the operation of RAG. (Image: GeeksforGeeks)
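The retrieval and prompt-augmentation flow above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: the toy bag-of-words `embed` function stands in for a real embedding model, and the two sample documents are invented for the example.

```python
import math

# Minimal RAG retrieval sketch. The bag-of-words "embedding" below is a toy
# stand-in for the trained embedding model a real RAG system would use.
def tokenize(text):
    return [w.strip(".,?!").lower() for w in text.split()]

def embed(text, vocab):
    words = tokenize(text)
    return [words.count(term) for term in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# External knowledge base: documents stored alongside their vectors.
docs = [
    "RAG retrieves unstructured documents to ground LLM answers.",
    "TAG queries structured tables with SQL before prompting the LLM.",
]
vocab = sorted({w for d in docs for w in tokenize(d)})
index = [(d, embed(d, vocab)) for d in docs]

# Retrieval step: the query vector is matched against the stored vectors.
query = "How does RAG use documents?"
q_vec = embed(query, vocab)
best_doc, _ = max(index, key=lambda item: cosine(q_vec, item[1]))

# Prompt-engineering step: fold the retrieved context into an augmented prompt.
augmented_prompt = f"Context: {best_doc}\n\nQuestion: {query}"
print(best_doc)
```

The same structure holds at scale; real systems swap in a learned embedding model and an approximate-nearest-neighbor index in place of the linear scan shown here.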

How does TAG work?

While RAG is especially useful for accessing information not present in the original training dataset, TAG can be used in applications like enhancing search engine capabilities, especially in scenarios involving structured data and complex queries. TAG is implemented in a series of steps (Figure 2).

  • The user submits a query, for example, to a search engine.
  • The system identifies and retrieves relevant data, possibly using SQL queries to find specific information in a table or database.
  • Prompt engineering is used to incorporate the retrieved data into the user query, creating a more detailed ‘augmented prompt.’
  • The augmented prompt is used by the LLM to generate a response that is more precise and focused than possible using only the original query.
Figure 2. Example of a TAG implementation and data flows. (Image: K2view)
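The four TAG steps above can be sketched against an in-memory SQLite table. The parts table, column names, and prompt wording are invented for illustration; a real TAG system would translate the user query into SQL against production data.

```python
import sqlite3

# Hypothetical parts-inventory table standing in for a production database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (name TEXT, stock INTEGER, voltage REAL)")
conn.executemany(
    "INSERT INTO parts VALUES (?, ?, ?)",
    [("MCU-A", 120, 3.3), ("MCU-B", 0, 5.0), ("MCU-C", 45, 3.3)],
)

# Step 1: the user submits a query.
user_query = "Which 3.3 V parts are in stock?"

# Step 2: the system retrieves the relevant rows with an SQL query.
rows = conn.execute(
    "SELECT name, stock FROM parts WHERE voltage = 3.3 AND stock > 0"
).fetchall()

# Step 3: prompt engineering folds the retrieved rows into an augmented prompt.
table_context = "\n".join(f"{name}: {stock} in stock" for name, stock in rows)
augmented_prompt = f"Using only this data:\n{table_context}\n\nAnswer: {user_query}"

# Step 4: the augmented prompt would now be passed to the LLM.
print(augmented_prompt)
```

Because the heavy filtering happens in the database engine rather than in the model's context window, this pattern stays efficient as the table grows.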

TAG is more appropriate than RAG for applications like querying databases and filtering data based on multiple criteria. TAG is less computationally intensive than RAG and can be more efficient when dealing with large datasets and complex queries.

Improving RAGs

RAFT and RAG both leverage external knowledge to improve LLM performance, but in different ways: RAG dynamically integrates external data sources into the LLM’s response generation process at inference time, while RAFT changes the model itself.

Fine-tuning is a type of continuous improvement of the LLM itself. RAFT involves additional training of an LLM to improve its performance on a particular task or in a specific domain. The process modifies the LLM’s internal parameters to better align with the nuances of a specific task.

RAFT can be especially useful in situations involving dynamic information environments and applications that require nuanced responses. However, it requires high-quality data and is computationally demanding. If not properly implemented, it can lead to the loss of previously learned general knowledge, called catastrophic forgetting.
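One commonly described RAFT recipe is to build fine-tuning examples that pair each question with its relevant "oracle" document plus irrelevant distractor documents, training the model to answer from the right context and ignore the rest. The sketch below only constructs such training records; the documents, field names, and `build_raft_example` helper are all invented for illustration, and the actual fine-tuning step (which needs a training framework) is out of scope here.

```python
import random

# Sketch of RAFT training-data construction: each record mixes the oracle
# document with distractors so the fine-tuned model learns to ignore
# irrelevant retrieved context. All document text here is made up.
def build_raft_example(question, oracle_doc, corpus, num_distractors=2, rng=random):
    distractors = rng.sample([d for d in corpus if d != oracle_doc], num_distractors)
    context_docs = distractors + [oracle_doc]
    rng.shuffle(context_docs)  # oracle position should not be predictable
    context = "\n---\n".join(context_docs)
    return {"prompt": f"{context}\n\nQuestion: {question}", "oracle": oracle_doc}

corpus = [
    "Doc A: the watchdog timer resets the MCU after a timeout.",
    "Doc B: SPI uses separate MOSI and MISO lines.",
    "Doc C: brown-out detection guards against low supply voltage.",
]
example = build_raft_example("What does the watchdog timer do?", corpus[0], corpus)
print(example["oracle"])
```

Records like these, each labeled with the answer grounded in the oracle document, would then feed a standard supervised fine-tuning run; mixing in general-purpose data during that run is one common hedge against catastrophic forgetting.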

Retrieval-centric generation

RCG is another approach to improving the performance of LLMs, used particularly for interpreting complex indexed or curated data. Both RAG and RCG pull information from curated sources during inference, but they weight it differently: in RAG, the model itself remains the major source of information, aided by the incremental external data, while in RCG most of the information comes from outside the model.

Instead of augmenting LLM performance, RCG focuses on prioritizing data to constrain the response (Figure 3).

  • RAG is designed for tasks that need to combine general knowledge with specific information from external data sources and answer complex questions.
  • RCG is optimized for maintaining context, style, and accuracy of the original information, such as summarization, paraphrasing, or creating consistent content.
Figure 3. Comparison of RAG and RCG LLM implementations. (Image: Intel Labs)
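At the prompting level, the contrast can be illustrated as follows. In the RAG-style prompt, the model may blend its own general knowledge with the retrieved passage; the RCG-style prompt constrains the response to the retrieved data alone. The template wording and the sample datapoint are invented for illustration.

```python
# Illustrative prompt templates contrasting RAG- and RCG-style usage of
# retrieved data. The retrieved passage below is a made-up example.
retrieved = "The MCU's deep-sleep current is 1.2 uA at 3.0 V."
question = "What is the deep-sleep current?"

# RAG style: retrieved data supplements the model's general knowledge.
rag_prompt = (
    "Use your general knowledge, supplemented by this context:\n"
    f"{retrieved}\n\nQuestion: {question}"
)

# RCG style: the response is constrained to the retrieved data only.
rcg_prompt = (
    "Answer using ONLY the data below; if the answer is not present, say so.\n\n"
    f"Data:\n{retrieved}\n\nQuestion: {question}"
)

print(rcg_prompt)
```

This constraint is what makes RCG well suited to summarization, paraphrasing, and other tasks where fidelity to the source material matters more than breadth.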

Summary

RAG is designed for using information from unstructured data sources like documents and web pages. TAG is focused on querying and leveraging structured data from sources like tables or databases. Extensions of RAG include RAFT, which provides additional training of an LLM to improve its performance on a particular task or in a specific domain, and RCG, which maintains the context, style, and accuracy of the original information.

References

From RAG to TAG — Document-Centric RAG to Table Augmented Generation, Gigaspaces
GenAI Architecture Shifting from RAG Toward Interpretive Retrieval-Centric Generation (RCG) Models, Intel Labs
RAG vs. Fine-Tuning: How to Choose, Oracle
RAG vs TAG: A Deep Dive, 2090 OK
Retrieval Augmented Fine-Tuning: Adapting LLM for Domain-Specific RAG Excellence, Galileo
What is Agentic RAG, Weaviate
What is RAG, IBM
What is Retrieval-Augmented Generation (RAG)?, GeeksforGeeks
What is Table Augmented Generation (TAG)?, K2view



Copyright © 2025 · WTWH Media LLC and its licensors. All rights reserved.