Inference Scaling for Long-Context RAG

Oct 20, 2024 · 12m 17s
Description

🗓 Inference Scaling for Long-Context Retrieval Augmented Generation

This research paper explores the effectiveness of inference scaling for retrieval augmented generation (RAG), a technique that enhances large language models (LLMs) by incorporating external knowledge. The authors introduce two strategies for scaling inference computation: demonstration-based RAG (DRAG) and iterative demonstration-based RAG (IterDRAG). They show that increasing inference computation, when optimally allocated, yields nearly linear gains in RAG performance. They also develop a computation allocation model that predicts the optimal test-time compute allocation for a given task and budget, and its predictions align well with their experimental results.
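To make the IterDRAG idea concrete, here is a minimal illustrative sketch of an interleaved retrieve-and-generate loop that stops once an answer is produced or a compute budget (measured here in context tokens) is exhausted. All function names, the stub retriever and generator, and the `budget_tokens` parameter are assumptions for illustration, not the paper's actual API.

```python
def retrieve(query, k=2):
    # Stand-in retriever: returns k placeholder documents for the query.
    # A real system would query a dense or sparse index here.
    return [f"doc about {query} #{i}" for i in range(k)]

def generate(context, query):
    # Stand-in LLM call: emits either a decomposed sub-query or a final
    # answer. A real implementation would prompt an LLM with in-context
    # demonstrations of query decomposition, as DRAG/IterDRAG describe.
    if "sub:" not in query:
        return f"sub: refine {query}"
    return f"answer to {query}"

def iter_drag(query, budget_tokens=1000):
    """Iteratively retrieve and generate until an answer or the budget is hit."""
    context = []
    current = query
    while True:
        context.extend(retrieve(current))
        used = sum(len(d.split()) for d in context)  # crude token count
        output = generate(context, current)
        if output.startswith("answer") or used >= budget_tokens:
            return output, used
        current = output  # treat the model's output as the next sub-query

answer, tokens_used = iter_drag("who directed the film adapted from novel X?")
```

The key property the paper studies is how final answer quality varies as `budget_tokens` (the effective context length, covering retrieved documents and demonstrations) is scaled up, and how to split that budget across retrieval depth and in-context examples.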

📎 Link to paper
Information
Author Shahriar Shariati
Organization Shahriar Shariati
Website -