Robots.txt Generator — Create Your Robots.txt File

Build a properly formatted robots.txt file for your website. Add custom rules for different crawlers, set sitemaps, and configure crawl delays — all generated instantly in your browser.

Frequently Asked Questions

What is robots.txt?

Robots.txt is a plain text file placed at the root of your website (e.g., example.com/robots.txt) that tells search engine crawlers which pages or sections they are allowed or not allowed to access. It follows the Robots Exclusion Protocol, a standard that all major search engines respect.
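A minimal robots.txt illustrating the format might look like this (the bot name, paths, and sitemap URL are placeholders):

```
# Allow all crawlers, but keep them out of a private section
User-agent: *
Disallow: /admin/
Allow: /

# Block one specific bot entirely
User-agent: ExampleBot
Disallow: /

Sitemap: https://example.com/sitemap.xml
```

Rules are grouped by User-agent: each group applies to the named crawler, and `*` matches any crawler that has no more specific group.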

Where does robots.txt go?

The robots.txt file must be placed in the root directory of your domain. For https://example.com, it must be accessible at https://example.com/robots.txt. Subdomains need their own robots.txt file — blog.example.com cannot use the robots.txt from example.com.
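For example, each of these hosts needs its own file at its own root (all URLs hypothetical):

```
https://example.com/robots.txt       → applies to example.com only
https://blog.example.com/robots.txt  → applies to blog.example.com only
```

Crawlers also treat protocols and ports as separate hosts, so an HTTP and HTTPS version of the same site are each expected to serve their own robots.txt.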

Can robots.txt block Google?

Robots.txt can prevent Googlebot from crawling specific pages, but it does not guarantee those pages won't appear in search results. If other pages link to a disallowed URL, Google may still index that URL (without crawling its content). To reliably keep a page out of the index, use the noindex meta tag or the X-Robots-Tag HTTP header instead.
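A noindex signal can be added either in the page's HTML or as a response header; a minimal sketch:

```
<!-- In the page's <head> -->
<meta name="robots" content="noindex">
```

The equivalent HTTP response header is `X-Robots-Tag: noindex`, which is useful for non-HTML resources such as PDFs. Note that the page must remain crawlable (not blocked by robots.txt) for search engines to see the noindex directive at all.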

What is the crawl-delay directive?

The Crawl-delay directive tells crawlers to wait a specified number of seconds between requests, which can reduce server load from aggressive bots. Note that Google does not honor Crawl-delay — use Google Search Console's crawl rate setting instead. Bing and some other crawlers do honor it.
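In a generated file, Crawl-delay sits inside a specific user-agent group; a sketch (the bot name and delay value are examples):

```
User-agent: Bingbot
Crawl-delay: 10
```

This asks Bingbot to wait roughly 10 seconds between requests; crawlers that don't recognize the directive simply ignore the line.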

Related tools: Meta Tag Generator · URL Slug Generator