A CLI tool that keeps your AI custom instructions in sync with living knowledge sources.
You maintain instruction files for your AI tools. Some of those instructions encode knowledge that changes over time: writing style rules, API conventions, best practices. This tool monitors the external sources where that knowledge lives, tracks when they change, and tells you what's new, what's stale, and what you're missing.
- Tracks URLs as "sources" — web pages, GitHub files, wikis, anything with a URL
- Extracts structured content — pulls out sections by heading, CSS selector, or grabs the full page
- Caches snapshots over time — so you can see what changed between syncs
- Diffs changes — shows you what's new, modified, or removed in each source
- Compares sources against your instruction files — keyword and phrase matching to find gaps in your coverage
- Works with any AI tool — your instruction files are just files; the tool doesn't care whether they're for Claude, Copilot, ChatGPT, or something that doesn't exist yet
```shell
pip install living-instructions
```
Or install from source:
```shell
git clone https://github.com/dvelton/living-instructions.git
cd living-instructions
pip install -e .
```
```shell
# Create config directory
living-instructions init

# Add a source to track
living-instructions source add \
  --name wiki-ai-writing \
  --url "https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing" \
  --extract sections \
  --sections "Content" \
  --sections "Language and grammar" \
  --sections "Style" \
  --schedule weekly

# Add your instruction file as a target
living-instructions target add \
  --name my-instructions \
  --path ~/.copilot/copilot-instructions.md \
  --format markdown

# Fetch the source
living-instructions sync

# See what's in it
living-instructions status

# Compare source content against your instructions
living-instructions report
```

`living-instructions init` creates the config directory (`~/.living-instructions/`) and an empty config file.
`living-instructions source add` adds a URL to track. Options:
- `--name` — short identifier (e.g., "wiki-ai-writing")
- `--url` — the URL to fetch
- `--extract` — how to extract content: `full` (default), `sections` (by heading text), or `css` (by CSS selector)
- `--sections` — heading names to extract (repeatable, used with `--extract sections`)
- `--selector` — CSS selector (used with `--extract css`)
- `--schedule` — how often you intend to sync: `daily`, `weekly` (default), `monthly`, or `manual`
The schedule is metadata for your reference. The tool doesn't run on a timer by itself; you run sync when you want to (or set up a cron job).
Shows all configured sources with their sync status.
Removes a source.
`living-instructions target add` adds an instruction file to analyze. Options:
- `--name` — short identifier
- `--path` — path to the instruction file (supports `~`)
- `--format` — `markdown` (default) or `plaintext`
Shows all configured targets and whether they exist on disk.
Removes a target.
`living-instructions sync` fetches all sources (or a specific one) and caches the content. Each sync creates a timestamped snapshot, so you build up a history of how the source has changed over time.
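One way to picture the snapshot cache (a sketch under assumed conventions, not the tool's actual storage format or directory layout): each sync writes the extracted sections to a timestamped file under a per-source cache directory, so "the two most recent snapshots" is just a sorted listing.

```python
import json
import time
from pathlib import Path

def cache_snapshot(cache_dir: Path, source_name: str, sections: dict) -> Path:
    """Write extracted sections to a timestamped JSON snapshot and return its path."""
    source_dir = cache_dir / source_name
    source_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%dT%H%M%S")
    path = source_dir / f"{stamp}.json"
    path.write_text(json.dumps(sections, indent=2))
    return path

def latest_snapshots(cache_dir: Path, source_name: str, n: int = 2) -> list[Path]:
    """Return the n most recent snapshots; timestamped names sort chronologically."""
    return sorted((cache_dir / source_name).glob("*.json"))[-n:]
```

Because the filenames embed an ISO-style timestamp, lexicographic order equals chronological order and no index file is needed.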
`living-instructions diff` compares the two most recent snapshots for each source and shows what changed: new sections, removed sections, and a character-level diff of modified sections.
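The same idea in miniature: given two snapshots as section-name-to-text mappings, the added/removed/modified classification is a set comparison, and Python's standard `difflib` can render the diff. This sketches the approach, not the tool's exact implementation or output format.

```python
import difflib

def diff_snapshots(old: dict, new: dict) -> dict:
    """Classify sections as added, removed, or modified between two snapshots."""
    added = sorted(new.keys() - old.keys())
    removed = sorted(old.keys() - new.keys())
    modified = {}
    for name in old.keys() & new.keys():
        if old[name] != new[name]:
            # A readable line-level diff; SequenceMatcher would give char-level detail.
            modified[name] = "\n".join(
                difflib.unified_diff(
                    old[name].splitlines(), new[name].splitlines(),
                    fromfile="previous", tofile="latest", lineterm="",
                )
            )
    return {"added": added, "removed": removed, "modified": modified}
```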
`living-instructions status` shows a table of all sources and targets with their current state: last sync time, number of sections extracted, number of snapshots cached.
`living-instructions report` is the main event. It compares all synced source content against your instruction files and produces a gap analysis:
- Covered — source sections that have keyword or phrase overlap with your instructions, with a coverage percentage
- Gaps — source sections with no meaningful overlap (things you might want to add)
- Your instructions only — sections in your instruction file that don't correspond to any source (your original content that the tool won't touch)
The tool extracts keywords and phrases from both your sources and your instruction files, then computes overlap. It's not semantic search — it's keyword matching, which means:
- High coverage scores indicate strong topical overlap
- Low scores might mean you cover the topic differently (using different terminology) or don't cover it at all
- Phrase matching catches specific terms (like "em dash" or "smart quotes") that keyword matching alone might miss
The analysis is a starting point for human review, not an automated rewrite engine.
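A minimal sketch of keyword-overlap scoring of the kind described above (illustrative only; the tool's actual tokenizer, stopword list, phrase weighting, and thresholds will differ):

```python
import re

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are", "for", "with"}

def keywords(text: str) -> set[str]:
    """Lowercased word tokens, minus trivial stopwords."""
    return {w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS}

def coverage(source_section: str, instructions: str,
             phrases: tuple[str, ...] = ()) -> float:
    """Fraction of a source section's keywords that also appear in the
    instructions, with exact-phrase hits counted on top of word overlap."""
    src, inst = keywords(source_section), keywords(instructions)
    if not src:
        return 0.0
    score = len(src & inst) / len(src)
    # Phrase matches ("em dash", "smart quotes") catch multi-word terms
    # that single-word overlap would dilute.
    hits = sum(1 for p in phrases if p.lower() in instructions.lower())
    return min(1.0, score + 0.1 * hits)
```

This is why low scores are ambiguous: a section phrased with different terminology produces little token overlap even when the topic is covered.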
Everything lives in `~/.living-instructions/config.yaml`. You can edit it directly if you prefer:
```yaml
sources:
  - name: wiki-ai-writing
    url: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
    extract:
      method: sections
      sections:
        - Content
        - Language and grammar
        - Style
    schedule: weekly

targets:
  - name: my-instructions
    path: ~/.copilot/copilot-instructions.md
    format: markdown
```

See the examples/ directory for sample configs:
- `ai-writing-patterns.yaml` — tracks Wikipedia's AI writing guide and the Humanizer project, compares against Claude and Copilot instruction files
- `company-docs.yaml` — tracks a company style guide and API changelog
The tool doesn't include a scheduler. Use your system's scheduler:
macOS (launchd):

```shell
# Add to crontab
crontab -e

# Sync weekly on Monday at 9am
0 9 * * 1 /path/to/living-instructions sync >> ~/.living-instructions/sync.log 2>&1
```

Linux (cron):

```shell
0 9 * * 1 living-instructions sync >> ~/.living-instructions/sync.log 2>&1
```

After syncing, run `living-instructions diff` to see what changed, or `living-instructions report` to see how your instructions compare.
The section extraction is designed to work with common HTML and Markdown patterns:
- Wikipedia wraps headings in `<div class="mw-heading">` containers. The extractor detects this and walks siblings of the container div.
- GitHub READMEs served as raw markdown are parsed by heading level.
- Generic HTML pages are parsed by `<h1>` through `<h6>` tags.
- CSS selector mode lets you target any element structure.
If a page has an unusual structure, use `full` extraction to grab everything, or `css` with a selector tailored to the page.
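As a sketch of the simplest pattern above, splitting raw markdown into sections by heading level is a single line scan. This is illustrative of the general shape, not the tool's exact parser, and the default heading level is an assumption.

```python
def markdown_sections(text: str, level: int = 2) -> dict[str, str]:
    """Split markdown into {heading: body} at the given heading level (## by default)."""
    marker = "#" * level + " "
    sections: dict[str, str] = {}
    current = None
    body: list[str] = []
    for line in text.splitlines():
        if line.startswith(marker):
            # Close out the previous section before starting the next.
            if current is not None:
                sections[current] = "\n".join(body).strip()
            current = line[len(marker):].strip()
            body = []
        elif current is not None:
            body.append(line)
    if current is not None:
        sections[current] = "\n".join(body).strip()
    return sections
```

The trailing space in `marker` is what keeps `###` subheadings from being mistaken for `##` section boundaries.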
MIT