Generate Your robots.txt File
Configure the rules below to generate a `robots.txt` file. This file tells search engine bots which pages or files they can or cannot request from your site.
- Default Rules (for all bots: `User-agent: *`)
- Specific Bot Rules (Optional)
- Sitemap Location (Optional)
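As an illustration of how these sections map onto the generated file, here is a minimal sketch; the paths, bot name, and sitemap URL are placeholders, not recommendations.

```
# Default rules (applied to every crawler)
User-agent: *
Disallow: /admin/

# Specific bot rule (optional)
User-agent: Googlebot
Allow: /

# Sitemap location (optional)
Sitemap: https://www.example.com/sitemap.xml
```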
How to Use
- Choose the default access rule for all crawlers (Allow All or Disallow All); the example after this list shows the directives each option produces.
- Optionally, click "Add Specific Bot Rule" to define rules for specific user-agents (e.g., `Googlebot`, `Bingbot`).
- For each specific bot rule, enter the User-agent name and add `Allow:` or `Disallow:` paths (one per line).
- Optionally, enter the full URL to your XML sitemap.
- The content for your `robots.txt` file will be generated automatically in the output area.
- Click "Copy Content" or "Download robots.txt" (save the downloaded file as `robots.txt` in your website's root directory).
Important: `robots.txt` is advisory, not an access control. Crawlers are not required to obey it, and malicious bots will likely ignore it. Use it to guide well-behaved crawlers such as Googlebot, not to hide private content.
Why Use a robots.txt File?
- Control Crawling: Guide search engine bots away from private areas, scripts, or duplicate content pages.
- Manage Server Load: Prevent bots from overwhelming your server by crawling unimportant sections too frequently.
- SEO: Ensure bots focus their crawl budget on your important, indexable content.
- Sitemap Discovery: Point bots directly to your XML sitemap location.
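To make these points concrete, the sketch below combines them. The paths are hypothetical; wildcard patterns such as `*` are supported by major crawlers (Googlebot, Bingbot) but not guaranteed everywhere, and the non-standard `Crawl-delay` directive is honored by some bots (e.g., Bingbot) while Googlebot ignores it.

```
# Keep all crawlers out of private areas, scripts, and duplicate print views
User-agent: *
Disallow: /private/
Disallow: /scripts/
Disallow: /*?print=

# A bot that matches a specific group ignores the * group, so repeat the rules here;
# Crawl-delay reduces request rate for bots that honor it (Googlebot does not)
User-agent: Bingbot
Disallow: /private/
Disallow: /scripts/
Disallow: /*?print=
Crawl-delay: 10

# Point bots to the XML sitemap
Sitemap: https://www.example.com/sitemap.xml
```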