Web Search · ★★★★ (4/5)
Tavily MCP
Tavily's MCP server goes beyond basic web search with four specialized tools: search, extract, map, and crawl. The search tool returns structured results optimized for LLM consumption. The extract tool pulls clean content from specific URLs. The map tool discovers site architecture. The crawl tool systematically processes entire sites.
The structured output format is designed specifically for RAG (Retrieval-Augmented Generation) pipelines and feeds directly into AI workflows without additional parsing. The server is available both as a remote hosted endpoint and as a local npx install.
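To make the "no additional parsing" point concrete, here is a rough sketch of what the underlying search call and its structured output look like. The endpoint, parameter names, and result fields reflect our reading of Tavily's public REST API (which the MCP server wraps), so treat them as assumptions and check the current docs before relying on them.

```typescript
// Minimal sketch: call Tavily's search API directly and log the structured results.
// Endpoint, parameters, and response fields are assumptions based on Tavily's
// public REST docs; the MCP search tool returns essentially the same shape.
interface TavilyResult {
  title: string;
  url: string;
  content: string; // LLM-ready snippet, not raw HTML
  score: number;   // relevance score
}

async function tavilySearch(query: string): Promise<TavilyResult[]> {
  const res = await fetch("https://api.tavily.com/search", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.TAVILY_API_KEY}`,
    },
    body: JSON.stringify({
      query,
      search_depth: "basic", // "advanced" costs more credits
      max_results: 5,
    }),
  });
  if (!res.ok) throw new Error(`Tavily search failed: ${res.status}`);
  const data = (await res.json()) as { results: TavilyResult[] };
  return data.results;
}

// Each result is already clean text, so it can drop straight into a RAG prompt.
const results = await tavilySearch("Model Context Protocol server examples");
console.log(results.map((r) => `${r.title}\n${r.url}\n${r.content}`).join("\n\n"));
```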
The free tier includes 1,000 monthly credits. The crawl and map tools are what set it apart from search-only servers.
Pros
- 4 specialized tools: search, extract, map, crawl
- Remote server available; no local install needed
- 1,000 free monthly credits
- Structured output designed for RAG pipelines
- Website mapping for site architecture discovery
Cons
- Credit-based pricing; heavy crawling burns through the quota quickly
- Less general than Brave (focused on research and extraction)
- Smaller community than Context7 or Playwright
How We Use It
Tavily fills a different niche than Brave in our pipeline. Where Brave is for broad search queries, Tavily is for targeted extraction. The AI News Digest workflow already uses Jina Reader for article extraction, but Tavily's extract tool handles the edge cases that Jina struggles with — JavaScript-rendered content, paywalled previews, and sites that block known scraper user agents.
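Here is a minimal sketch of that targeted extraction done against the API directly rather than through the MCP tool. The /extract endpoint and the urls, raw_content, and failed_results field names are assumptions based on Tavily's public REST docs; the MCP extract tool exposes the same capability.

```typescript
// Sketch: pull clean article text from URLs that lighter-weight readers choke on.
// The /extract endpoint and its request/response fields are assumptions drawn
// from Tavily's public REST API; verify against the current documentation.
async function tavilyExtract(urls: string[]): Promise<Record<string, string>> {
  const res = await fetch("https://api.tavily.com/extract", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.TAVILY_API_KEY}`,
    },
    body: JSON.stringify({ urls }),
  });
  if (!res.ok) throw new Error(`Tavily extract failed: ${res.status}`);
  const data = (await res.json()) as {
    results: { url: string; raw_content: string }[];
    failed_results: { url: string; error: string }[];
  };
  // Map each URL to its extracted text; URLs that failed simply won't appear.
  return Object.fromEntries(data.results.map((r) => [r.url, r.raw_content]));
}
```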
The map and crawl tools have been useful for documentation research. When evaluating a new tool or library, Claude can map out the entire docs site structure first, then selectively extract the pages that matter. That is more efficient than feeding it one URL at a time and hoping it finds the right pages.
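Sketched end to end, that docs-research pattern looks roughly like this: map the site, filter the discovered URLs, extract only the pages that matter. The /map endpoint shape is an assumption (it is newer and less documented than search), the filter keywords and docs URL are placeholders, and tavilyExtract() is the helper sketched above.

```typescript
// Sketch of the map-then-extract workflow for documentation research.
// The /map endpoint shape is an assumption; tavilyExtract() is the helper
// from the previous sketch. The root URL and filter keywords are placeholders.
async function mapDocsSite(rootUrl: string): Promise<string[]> {
  const res = await fetch("https://api.tavily.com/map", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.TAVILY_API_KEY}`,
    },
    body: JSON.stringify({ url: rootUrl }),
  });
  if (!res.ok) throw new Error(`Tavily map failed: ${res.status}`);
  const data = (await res.json()) as { results: string[] }; // discovered URLs
  return data.results;
}

// Map the whole docs site, keep only pages that look relevant, then extract those.
const allPages = await mapDocsSite("https://docs.example.com");
const interesting = allPages.filter((u) => /quickstart|config|api-reference/.test(u));
const pages = await tavilyExtract(interesting);
for (const [url, text] of Object.entries(pages)) {
  console.log(`${url}: ${text.slice(0, 200)}`);
}
```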
Tags: search, research, crawling, extraction, RAG