When you should use this server

  • Scrape and extract content from web pages or entire domains.
  • Perform batch scraping jobs with built-in rate limiting.
  • Search the web and optionally extract structured content from results.
  • Conduct deep research using a combination of crawling, search, and LLM-powered summarization.
  • Generate standardized llms.txt and llms-full.txt files to guide LLM interactions with your site.
  • Integrate content discovery, crawling, and extraction into AI-driven workflows.

Key features

  • Web scraping with custom wait times and content filters
  • Batch scraping with parallel execution
  • Deep web crawling with configurable depth and limits
  • Structured data extraction from web pages
  • LLM-powered research and summarization
  • Generation of standardized LLM guidance files

Authentication

  • Method: API Key (FIRECRAWL_API_KEY)
  • Notes: Store securely in environment variables; available from the Firecrawl dashboard

Endpoints

  • Remote hosted URL: https://mcp.firecrawl.dev/{FIRECRAWL_API_KEY}/v2/sse
  • Local via npx: env FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp
  • Manual installation: npm install -g firecrawl-mcp
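
For local use, most MCP clients accept a JSON entry that runs the npx command with the API key in its environment. A minimal sketch (the exact config file and its location depend on your client):

    {
      "mcpServers": {
        "firecrawl-mcp": {
          "command": "npx",
          "args": ["-y", "firecrawl-mcp"],
          "env": {
            "FIRECRAWL_API_KEY": "fc-YOUR_API_KEY"
          }
        }
      }
    }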

Setup & usage

Environment variables

  • FIRECRAWL_API_KEY=fc-YOUR_API_KEY # Required
  • FIRECRAWL_API_URL=https://custom-endpoint.example.com # Optional, for self-hosted deployments
  • FIRECRAWL_RETRY_MAX_ATTEMPTS=5 # Optional, maximum number of retry attempts
  • FIRECRAWL_RETRY_INITIAL_DELAY=1000 # Optional, in milliseconds
  • FIRECRAWL_RETRY_MAX_DELAY=30000 # Optional, in milliseconds
  • FIRECRAWL_RETRY_BACKOFF_FACTOR=2 # Optional, exponential backoff multiplier
  • FIRECRAWL_CREDIT_WARNING_THRESHOLD=10 # Optional, percentage threshold for credit warnings
  • FIRECRAWL_CREDIT_CRITICAL_THRESHOLD=5 # Optional, percentage threshold for critical credit alerts
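
With the values above, and assuming standard exponential backoff, the delay before retry n is min(INITIAL_DELAY * BACKOFF_FACTOR^(n-1), MAX_DELAY): 1000 ms, 2000 ms, 4000 ms, then 8000 ms across the five attempts, never exceeding the 30000 ms cap.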

Tools provided

firecrawl_scrape

Scrape content from a single URL with options for format, wait time, and content filtering.
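
An illustrative call shape; argument names follow the Firecrawl scrape API, so treat this as a sketch and verify against your server version:

    {
      "name": "firecrawl_scrape",
      "arguments": {
        "url": "https://example.com",
        "formats": ["markdown"],
        "onlyMainContent": true,
        "waitFor": 1000
      }
    }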

firecrawl_batch_scrape

Scrape multiple URLs in parallel; returns a batch operation ID for status tracking.
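
A sketch of a batch call; the per-URL options are assumed to mirror firecrawl_scrape, and the response carries the operation ID used with firecrawl_check_batch_status:

    {
      "name": "firecrawl_batch_scrape",
      "arguments": {
        "urls": ["https://example.com/page1", "https://example.com/page2"],
        "options": {
          "formats": ["markdown"],
          "onlyMainContent": true
        }
      }
    }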

firecrawl_check_batch_status

Check the progress or results of a batch scrape operation by its ID.

firecrawl_search

Run a web search; optionally scrape and extract structured content from search results.
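
An illustrative search call; scrapeOptions is an assumption here, mirroring the scrape parameters so each result can be fetched as markdown:

    {
      "name": "firecrawl_search",
      "arguments": {
        "query": "web scraping best practices",
        "limit": 5,
        "scrapeOptions": {
          "formats": ["markdown"],
          "onlyMainContent": true
        }
      }
    }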

firecrawl_crawl

Launch an asynchronous crawl with configurable depth, limits, and external link rules.
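
A sketch of an asynchronous crawl request; argument names are assumed to follow the Firecrawl crawl API, and the call returns an operation ID rather than page content:

    {
      "name": "firecrawl_crawl",
      "arguments": {
        "url": "https://example.com/blog",
        "maxDepth": 2,
        "limit": 100,
        "allowExternalLinks": false
      }
    }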

firecrawl_extract

Extract structured information from pages using an LLM and a defined schema (supports both cloud and self-hosted LLMs).
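
A sketch of a schema-driven extraction; the schema is standard JSON Schema, and the product URL and fields are hypothetical placeholders:

    {
      "name": "firecrawl_extract",
      "arguments": {
        "urls": ["https://example.com/products/widget"],
        "prompt": "Extract the product name and price",
        "schema": {
          "type": "object",
          "properties": {
            "name": { "type": "string" },
            "price": { "type": "number" }
          },
          "required": ["name", "price"]
        }
      }
    }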

firecrawl_deep_research

Perform deep research on a query using crawling, search, and LLM analysis with constraints (depth, time, max URLs).
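
An illustrative research call; timeLimit is assumed to be in seconds:

    {
      "name": "firecrawl_deep_research",
      "arguments": {
        "query": "How do solid-state batteries differ from lithium-ion?",
        "maxDepth": 3,
        "timeLimit": 120,
        "maxUrls": 50
      }
    }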

firecrawl_generate_llmstxt

Generate standardized llms.txt and llms-full.txt files to define how LLMs should interact with a site.
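
A sketch of a generation call; showFullText is assumed to toggle inclusion of the llms-full.txt content alongside llms.txt:

    {
      "name": "firecrawl_generate_llmstxt",
      "arguments": {
        "url": "https://example.com",
        "maxUrls": 10,
        "showFullText": true
      }
    }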

Notes

  • Supports both cloud-hosted and self-hosted deployments.
  • Includes robust logging for operation status, rate limits, and credit usage.
  • Automatic retries with exponential backoff are built in for transient errors and rate limits.
  • Operations respect configured API quotas; credit usage is monitored against the warning and critical thresholds.