Website Copy
Website Copy is a special project page that turns your live marketing site into readable project context for Q. It maps the domain saved in Project Identity, lets you choose which URLs to track, scrapes those pages with Firecrawl, and stores the result as read-only Markdown files inside your project.
Use it when Q needs to understand what your website actually says before suggesting copy changes, positioning updates, or missions for your development workflow.
Setup
- Add your project domain in Project Settings > Identity. The domain can be entered with or without https://.
- Open Page Manager, choose Special, and add Website Copy.
- Click Start URL Identification. Q uses Firecrawl to map the live domain and show candidate URLs.
- Select the URLs you want to track.
- Click Scrape URLs. Each selected page becomes a Markdown file under /website-copy/.
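The docs do not spell out how domains are normalized or how tracked URLs are named on disk. As a minimal sketch, assuming a hypothetical naming scheme where each URL path becomes a slugged filename under /website-copy/:

```python
from urllib.parse import urlparse

def normalize_domain(domain: str) -> str:
    """Accept a domain entered with or without https:// and return the bare host."""
    parsed = urlparse(domain if "://" in domain else f"https://{domain}")
    return parsed.netloc.lower()

def website_copy_path(url: str) -> str:
    """Map a tracked URL to a Markdown file under /website-copy/.

    The exact naming scheme is an assumption for illustration, not
    the documented behavior.
    """
    path = urlparse(url).path.strip("/")
    slug = path.replace("/", "-") if path else "index"
    return f"/website-copy/{slug}.md"
```

For example, `website_copy_path("https://example.com/pricing/plans")` would yield `/website-copy/pricing-plans.md` under this scheme.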
URL identification requires the website to be live and reachable by Firecrawl. Private, blocked, or local-only pages cannot be scraped.
Costs And Credits
Website Copy scraping uses workspace credits.
| Action | Cost |
|---|---|
| URL identification | No charge, limited to 5 runs per hour |
| Scraping or refreshing a page | $0.01 per successfully scraped page |
Refreshing an existing tracked page also counts as a new scrape and costs $0.01.
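The pricing above reduces to a simple estimate. The $0.01 rate comes from the table; the helper itself is illustrative, not part of the product:

```python
COST_PER_PAGE = 0.01  # dollars per successfully scraped page (from the table above)

def estimate_scrape_cost(new_urls: int, refreshed_urls: int = 0) -> float:
    """Estimate the credit cost of a scrape run.

    Refreshing a tracked page counts as a new scrape, so both kinds
    of page are billed at the same per-page rate.
    """
    return round((new_urls + refreshed_urls) * COST_PER_PAGE, 2)
```

For example, scraping 12 new pages and refreshing 3 tracked ones would cost an estimated $0.15.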
What Q Can Read
Tracked pages are saved as Markdown files under /website-copy/. The Sidebar AI can read these files like other project context, so you can ask questions such as:
- "Does our homepage copy match the current roadmap?"
- "What messaging gaps do you see across these landing pages?"
- "Create missions to update stale pricing copy."
Website Copy itself is treated as the source of truth for what the live website currently says.
Suggesting Changes
Because changing a scraped file would not change your real website, Website Copy focuses on turning insights into work:
- See Content opens the scraped Markdown page.
- Refresh re-scrapes a tracked URL.
- Open URL opens the live page in a new tab.
- Copy suggestions can be turned into dev missions so your team can update the actual website code.
Troubleshooting
URL identification finds no pages
Check that the project domain is filled in, public, and reachable. If the domain redirects, use the canonical domain in Project Identity.
A page cannot be scraped
Some websites block crawlers, require JavaScript in a way Firecrawl cannot extract, or require authentication. Open the URL in a browser and confirm it is publicly available.
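When you open the URL yourself, the HTTP status often hints at why the scrape failed. As a rough sketch (the groupings below are common HTTP conventions, not an exhaustive list of Firecrawl failure modes):

```python
def explain_scrape_failure(status_code: int) -> str:
    """Rough mapping from an HTTP status to a likely scrape-failure cause.

    Illustrative only: real crawlers distinguish many more cases.
    """
    if status_code in (401, 407):
        return "authentication required"
    if status_code == 403:
        return "crawler blocked"
    if status_code == 404:
        return "page not found"
    if status_code >= 500:
        return "site error; retry later"
    return "page responded; check for JavaScript-only content"
```

A 200 response that still fails to scrape usually points to content rendered entirely by JavaScript that the crawler cannot extract.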
Q says there are not enough credits
Reduce the number of selected URLs or top up workspace credits from billing settings. The modal shows the estimated cost before you start scraping.
Q's answer uses stale website copy
Click Refresh for the affected page, or rerun URL identification if the site structure changed.