Robots.txt Generator
Generate a robots.txt file with allow and disallow rules for web crawlers
# Generated by freetool24.com
User-agent: *
Disallow: /admin/
Disallow: /private/
Allow: /
Sitemap: https://example.com/sitemap.xml
User-agent: *
Applies the rule to all bots and web crawlers globally.
Disallow: /
Prevents the targeted bot from crawling any page on the site.
Free Robots.txt Generator
A robots.txt file is a plain-text file at the root of your website that instructs crawlers — Googlebot, Bingbot, and others — which pages to crawl or skip. This robots.txt generator lets you build valid Allow, Disallow, User-agent, and Sitemap directives visually, then copy the file instantly.
How to create a robots.txt file
1. Set your Sitemap URL
Enter your sitemap address, such as https://yoursite.com/sitemap.xml. Search engines use it to discover your most important URLs.
2. Choose a User-agent
Use * to target all crawlers, or name a specific bot such as Googlebot, Bingbot, Googlebot-Image, or GPTBot.
3. Add Disallow rules
List the paths you want crawlers to skip, such as /admin/, /private/, /cart/, or internal search results.
4. Add Allow rules
Use Allow to reopen a specific sub-path inside a blocked folder, such as allowing /admin/help/ while blocking /admin/.
5. Copy and deploy
Copy the generated robots.txt file and upload it to the root of your domain: https://yourdomain.com/robots.txt.
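The steps above can be sketched as a small script. This is a minimal illustration, not the generator's actual code; the `RobotsTxtBuilder` class and its method names are hypothetical.

```python
# Minimal sketch of assembling a robots.txt from rule groups plus sitemap URLs.
# The class and method names are hypothetical, for illustration only.

class RobotsTxtBuilder:
    def __init__(self):
        self.groups = []    # list of (user_agent, disallow_paths, allow_paths)
        self.sitemaps = []  # global Sitemap directives

    def add_group(self, user_agent, disallow=(), allow=()):
        self.groups.append((user_agent, list(disallow), list(allow)))
        return self

    def add_sitemap(self, url):
        self.sitemaps.append(url)
        return self

    def build(self):
        lines = []
        for agent, disallows, allows in self.groups:
            lines.append(f"User-agent: {agent}")
            for path in disallows:
                lines.append(f"Disallow: {path}")
            for path in allows:
                lines.append(f"Allow: {path}")
            lines.append("")  # blank line separates rule groups
        for url in self.sitemaps:
            lines.append(f"Sitemap: {url}")
        return "\n".join(lines).strip() + "\n"

robots = (
    RobotsTxtBuilder()
    .add_group("*", disallow=["/admin/", "/private/"], allow=["/admin/help/"])
    .add_sitemap("https://example.com/sitemap.xml")
    .build()
)
print(robots)
```

The builder emits groups in insertion order and keeps Sitemap directives at the end, where they apply file-wide rather than to any one user-agent group.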
Common robots.txt templates
Allow all crawlers
User-agent: *
Disallow:
Gives search engines full crawl access.
Block a private folder
User-agent: *
Disallow: /admin/
Keeps admin or staging sections out of crawler paths.
Add sitemap discovery
Sitemap: https://example.com/sitemap.xml
Points crawlers to your canonical sitemap.
Target one bot
User-agent: Googlebot
Disallow: /tmp/
Applies the rule only to Googlebot.
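You can sanity-check any of the templates above with Python's standard-library `urllib.robotparser`, which applies the same longest-prefix matching most crawlers use. Here it parses the "target one bot" template:

```python
import urllib.robotparser

# The "target one bot" template: the rule applies only to Googlebot.
template = """\
User-agent: Googlebot
Disallow: /tmp/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(template.splitlines())

print(rp.can_fetch("Googlebot", "/tmp/cache.html"))  # False: blocked for Googlebot
print(rp.can_fetch("Bingbot", "/tmp/cache.html"))    # True: no group matches Bingbot
```

Because the file has no `User-agent: *` group, any crawler other than Googlebot falls through to the default allow.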
Common mistakes to avoid
Blocking the whole site
Disallow: / blocks the selected bot from crawling every page on the site. Use it only for private or staging sites.
Using robots.txt for secrets
Robots.txt is publicly readable, so every path you list in it is visible to anyone. Never rely on it to hide private paths or sensitive endpoints.
Forgetting the root location
The file must live at /robots.txt on the host it controls. A robots file in a subfolder is ignored.
Confusing crawl and index
Robots.txt controls crawling. Use noindex on accessible pages when you need to prevent indexing.
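Because robots.txt only gates crawling, pages that must stay out of the index need a noindex signal on the page itself. A small sketch using Python's standard-library `html.parser` to detect a robots meta tag (the sample page markup is illustrative):

```python
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    """Collects the content of <meta name="robots" ...> tags."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives.append(attrs.get("content", "").lower())

# Illustrative page: crawlable, but asks engines not to index it.
page = '<html><head><meta name="robots" content="noindex, follow"></head></html>'

finder = RobotsMetaFinder()
finder.feed(page)
print("noindex" in ",".join(finder.directives))  # True: page opts out of indexing
```

Note that crawlers can only see this tag if the page is crawlable, which is exactly why blocking a page in robots.txt can prevent a noindex directive from ever being read.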
FAQ
Does robots.txt prevent indexing?
Robots.txt controls crawling, not indexing. A URL can still appear in search results if Google discovers it from links. Use a noindex meta tag to prevent indexing.
Where does robots.txt go?
At the root of your domain — https://yourdomain.com/robots.txt. Subdirectory placement does not work.
Should I include my sitemap in robots.txt?
Yes. A Sitemap directive helps search engines discover your canonical sitemap URL quickly.
Is robots.txt case-sensitive?
Yes. Disallow: /Admin/ and Disallow: /admin/ are treated as different paths by most crawlers.
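Python's standard-library `urllib.robotparser` exhibits the same case sensitivity, so you can test doubtful paths before deploying (the bot name `mybot` is arbitrary; it matches the `*` group):

```python
import urllib.robotparser

rules = """\
User-agent: *
Disallow: /admin/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Path matching is case-sensitive: /admin/ is blocked, /Admin/ is not.
print(rp.can_fetch("mybot", "/admin/settings"))  # False
print(rp.can_fetch("mybot", "/Admin/settings"))  # True
```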