# Contributing to Spec Kit

Hi there! We're thrilled that you'd like to contribute to Spec Kit. Contributions to this project are [released](https://help.github.com/articles/github-terms-of-service/#6-contributions-under-repository-license) to the public under the [project's open source license](LICENSE).

Please note that this project is released with a [Contributor Code of Conduct](CODE_OF_CONDUCT.md). By participating in this project you agree to abide by its terms.

## Prerequisites for running and testing code

These are one-time installations required to test your changes locally as part of the pull request (PR) submission process.

1. Install [Python 3.11+](https://www.python.org/downloads/)
1. Install [uv](https://docs.astral.sh/uv/) for package management
1. Install [Git](https://git-scm.com/downloads)
1. Have an [AI coding agent available](README.md#-supported-ai-agents)
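To confirm the tools are in place, you can run a quick check from your terminal (a minimal sketch, assuming a POSIX shell and that each tool is on your `PATH`):

```bash
# Each command should print a version without errors.
python3 --version   # expect Python 3.11 or newer
uv --version
git --version
```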
💡 **Hint** if you are using VSCode or GitHub Codespaces as your IDE:
Provided you have [Docker](https://docker.com) installed on your machine, you can leverage [Dev Containers](https://containers.dev) through this [VSCode extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) to easily set up your development environment, with the aforementioned tools already installed and configured, thanks to the `.devcontainer/devcontainer.json` file located at the root of the project. To do so, simply:

- Check out the repo
- Open it with VSCode
- Open the [Command Palette](https://code.visualstudio.com/docs/getstarted/userinterface#_command-palette) and select "Dev Containers: Open Folder in Container..."

On [GitHub Codespaces](https://github.com/features/codespaces) it's even simpler: the `.devcontainer/devcontainer.json` is picked up automatically when you open the codespace.
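For example, from a terminal (a sketch that assumes the `code` CLI is on your `PATH`; substitute your fork's URL if you've already forked):

```bash
# Clone the repository and open it in VSCode, then run
# "Dev Containers: Open Folder in Container..." from the Command Palette.
git clone https://github.com/github/spec-kit.git
cd spec-kit
code .
```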
## Submitting a pull request

> [!NOTE]
> If your pull request introduces a large change that materially impacts the work of the CLI or the rest of the repository (e.g., you're introducing new templates, arguments, or otherwise major changes), make sure that it was **discussed and agreed upon** by the project maintainers. Pull requests with large changes that did not have a prior conversation and agreement will be closed.

1. Fork and clone the repository
1. Configure and install the dependencies: `uv sync --extra test`
1. Make sure the CLI works on your machine: `uv run specify --help`
1. Create a new branch: `git checkout -b my-branch-name`
1. Make your change, add tests, and make sure everything still works
1. Test the CLI functionality with a sample project if relevant
1. Push to your fork and submit a pull request
1. Wait for your pull request to be reviewed and merged

Before running the manual slash-command tests described below, activate the project virtual environment (see [Testing setup](#testing-setup)), then install the CLI from your working tree (`uv pip install -e .` after `uv sync --extra test`) or otherwise ensure the shell uses the local `specify` binary.

Here are a few things you can do that will increase the likelihood of your pull request being accepted:

- Follow the project's coding conventions.
- Write tests for new functionality.
- Update documentation (`README.md`, `spec-driven.md`) if your changes affect user-facing features.
- Keep your change as focused as possible. If there are multiple changes you would like to make that are not dependent upon each other, consider submitting them as separate pull requests.
- Write a [good commit message](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html).
- Test your changes with the Spec-Driven Development workflow to ensure compatibility.

## Development workflow

When working on spec-kit:

1. Test changes with the `specify` CLI commands (`/speckit.specify`, `/speckit.plan`, `/speckit.tasks`) in your coding agent of choice
2. Verify templates are working correctly in the `templates/` directory
3. Test script functionality in the `scripts/` directory
4. Ensure memory files (`memory/constitution.md`) are updated if major process changes are made

### Recommended validation flow

For the smoothest review experience, validate changes in this order:

1. **Run focused automated checks first** — use the quick verification commands [below](#automated-checks) to catch scaffolding and configuration regressions early.
2. **Run manual workflow tests second** — if your change affects slash commands or the developer workflow, follow the [manual testing](#manual-testing) section to choose the right commands, run them in an agent, and capture results for your PR.

### Automated checks

#### Agent configuration and wiring consistency

```bash
uv run python -m pytest tests/test_agent_config_consistency.py -q
```

Run this when you change agent metadata, context update scripts, or integration wiring.

### Manual testing

#### Testing setup

```bash
# Install the project and test dependencies from your local branch
cd <path-to-your-spec-kit-clone>
uv sync --extra test
source .venv/bin/activate  # On Windows (CMD): .venv\Scripts\activate | (PowerShell): .venv\Scripts\Activate.ps1
uv pip install -e .
# Ensure the `specify` binary in this environment points at your working
# tree so the agent runs the branch you're testing.
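
# Optional sanity check (illustrative, not a required step): confirm the
# shell now resolves `specify` from this virtualenv rather than a global
# install. On PowerShell, use `Get-Command specify` instead of `which`.
which specify    # should print a path ending in .venv/bin/specify
specify --help   # should run your working-tree code without errors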

# Initialize a test project using your local changes
uv run specify init /speckit-test --ai <agent> --offline
cd /speckit-test
# Open in your agent
```

#### Manual testing process

Any change that affects a slash command's behavior requires manually testing that command through an AI agent and submitting results with the PR.

1. **Identify affected commands** — use the [prompt below](#determining-which-tests-to-run) to have your agent analyze your changed files and determine which commands need testing.
2. **Set up a test project** — scaffold from your local branch (see [Testing setup](#testing-setup)).
3. **Run each affected command** — invoke it in your agent, verify it completes successfully, and confirm it produces the expected output (files created, scripts executed, artifacts populated).
4. **Run prerequisites first** — commands that depend on earlier commands (e.g., `/speckit.tasks` requires `/speckit.plan`, which requires `/speckit.specify`) must be run in order.
5. **Report results** — paste the [reporting template](#reporting-results) into your PR with pass/fail for each command tested.

#### Reporting results

Paste this into your PR:

~~~markdown
## Manual test results

**Agent**: [e.g., GitHub Copilot in VS Code] | **OS/Shell**: [e.g., macOS/zsh]

| Command tested | Notes |
|----------------|-------|
| `/speckit.command` | |
~~~

#### Determining which tests to run

Copy this prompt into your agent. Include the agent's response (selected tests plus a brief explanation of the mapping) in your PR.

~~~text
Read CONTRIBUTING.md, then run `git diff --name-only main` to get my changed files.
For each changed file, determine which slash commands it affects by reading the
command templates in templates/commands/ to understand what each command invokes.
Use these mapping rules:

- templates/commands/X.md → the command it defines
- scripts/bash/Y.sh or scripts/powershell/Y.ps1 → every command that invokes that
  script (grep templates/commands/ for the script name). Also check transitive
  dependencies: if the changed script is sourced by other scripts (e.g., common.sh
  is sourced by create-new-feature.sh, check-prerequisites.sh, setup-plan.sh,
  update-agent-context.sh), then every command invoking those downstream scripts
  is also affected
- templates/Z-template.md → every command that consumes that template during execution
- src/specify_cli/*.py → CLI commands (`specify init`, `specify check`,
  `specify extension *`, `specify preset *`); test the affected CLI command and,
  for init/scaffolding changes, at minimum test /speckit.specify
- extensions/X/commands/* → the extension command it defines
- extensions/X/scripts/* → every extension command that invokes that script
- extensions/X/extension.yml or config-template.yml → every command in that
  extension. Also check if the manifest defines hooks (look for `hooks:` entries
  like `before_specify`, `after_implement`, etc.) — if so, the core commands those
  hooks attach to are also affected
- presets/*/* → test preset scaffolding via `specify init` with the preset
- pyproject.toml → packaging/bundling; test `specify init` and verify bundled assets

Include prerequisite tests (e.g., T5 requires T3 requires T1). Output in this format:

### Test selection reasoning

| Changed file | Affects | Test | Why |
|---|---|---|---|
| (path) | (command) | T# | (reason) |

### Required tests

Number each test sequentially (T1, T2, ...). List prerequisite tests first.
- T1: /speckit.command — (reason)
- T2: /speckit.command — (reason)
~~~

## AI contributions in Spec Kit

> [!IMPORTANT]
>
> If you are using **any kind of AI assistance** to contribute to Spec Kit,
> it must be disclosed in the pull request or issue.

We welcome and encourage the use of AI tools to help improve Spec Kit! Many valuable contributions have been enhanced with AI assistance for code generation, issue detection, and feature definition.

That being said, if you are using any kind of AI assistance (e.g., agents, ChatGPT) while contributing to Spec Kit, **this must be disclosed in the pull request or issue**, along with the extent to which AI assistance was used (e.g., documentation comments vs. code generation). If your PR responses or comments are being generated by an AI, disclose that as well.

As an exception, trivial spacing or typo fixes don't need to be disclosed, so long as the changes are limited to small parts of the code or short phrases.

An example disclosure:

> This PR was written primarily by GitHub Copilot.

Or a more detailed disclosure:

> I consulted ChatGPT to understand the codebase, but the solution was fully authored manually by myself.

Failure to disclose this is first and foremost rude to the human operators on the other end of the pull request, but it also makes it difficult to determine how much scrutiny to apply to the contribution.

In a perfect world, AI assistance would produce equal or higher quality work than any human. That isn't the world we live in today, and in most cases where human supervision or expertise is not in the loop, it generates code that cannot be reasonably maintained or evolved.

### What we're looking for

When submitting AI-assisted contributions, please ensure they include:

- **Clear disclosure of AI use** - You are transparent about AI use and the degree to which you're using it for the contribution
- **Human understanding and testing** - You've personally tested the changes and understand what they do
- **Clear rationale** - You can explain why the change is needed and how it fits within Spec Kit's goals
- **Concrete evidence** - Include test cases, scenarios, or examples that demonstrate the improvement
- **Your own analysis** - Share your thoughts on the end-to-end developer experience

### What we'll close

We reserve the right to close contributions that appear to be:

- Untested changes submitted without verification
- Generic suggestions that don't address specific Spec Kit needs
- Bulk submissions that show no human review or understanding

### Guidelines for success

The key is demonstrating that you understand and have validated your proposed changes. If a maintainer can easily tell that a contribution was generated entirely by AI without human input or testing, it likely needs more work before submission.

Contributors who consistently submit low-effort AI-generated changes may be restricted from further contributions at the maintainers' discretion.

Please be respectful to maintainers and disclose AI assistance.

## Resources

- [Spec-Driven Development Methodology](./spec-driven.md)
- [How to Contribute to Open Source](https://opensource.guide/how-to-contribute/)
- [Using Pull Requests](https://help.github.com/articles/about-pull-requests/)
- [GitHub Help](https://help.github.com)