The 5 Laws of LLM-Assisted Coding

V0.10, 06/01/2025

These laws provide a framework for integrating LLMs (large language models: AI systems that can understand, generate, and assist with code) into your coding workflow while maintaining high standards of code quality, security, and developer understanding. They encourage the use of AI as a powerful tool while emphasizing the critical role of human expertise and oversight in software development.

1. Freedom of LLM Choice

Developers are free to use any large language model they choose for code generation. This preserves flexibility and lets developers draw on their own preferences and the strengths of different models.

2. Comprehension Mandate

All code generated with the assistance of an LLM must be thoroughly understood and validated by the responsible developer (or tester, architect, etc.). Developers are encouraged to document that understanding to ensure traceability and accountability; one possible convention is sketched below. Copying and pasting without comprehension is strictly prohibited.
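
For example, a team might require a short provenance note alongside any LLM-assisted code. The annotation format in this Python sketch is purely illustrative (the field names and placeholders are hypothetical, not prescribed by these laws); the point is that the developer records which model was used and what they verified.

```python
# Hypothetical annotation convention for LLM-assisted code (illustrative only;
# teams should define their own format). The goal is traceability: who used
# which model, and a note showing the developer understood the result.

def normalize_scores(scores: list[float]) -> list[float]:
    """Scale a list of scores into the range [0, 1].

    LLM-ASSISTED:
      model:     <your LLM of choice>
      developer: <name>, reviewed and validated on <date>
      understanding: min-max normalization; returns all zeros when the
        input is constant, to avoid division by zero.
    """
    lo, hi = min(scores), max(scores)
    if hi == lo:  # constant input: avoid dividing by zero
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]
```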

3. Human-AI Collaboration in Review

Final code review and publication must involve human oversight, complemented by automated tools for quality and security analysis. Reviewers may use LLMs to assist in the review process, but the ultimate decision and responsibility lie with the human reviewer.
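
As one illustration of pairing human oversight with automated analysis, a team might gate human review behind a small script that runs quality and security checks first. This is a minimal sketch; the tools named (ruff for linting, bandit for static security analysis) and the `src` directory are example choices, not requirements of these laws.

```python
# Minimal sketch of a pre-review gate: run automated quality and security
# checks before a human reviewer signs off. Tool choices are illustrative;
# substitute whatever your organization has standardized on.
import subprocess
import sys

CHECKS = [
    ["ruff", "check", "."],   # style and correctness linting
    ["bandit", "-r", "src"],  # static security analysis
]

def run_checks() -> bool:
    ok = True
    for cmd in CHECKS:
        print(f"running: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            ok = False
    return ok

if __name__ == "__main__":
    # Automated checks complement, never replace, the human reviewer.
    sys.exit(0 if run_checks() else 1)
```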

4. Continuous Learning and Improvement

Developers and reviewers must actively contribute to improving the LLM-assisted coding process by providing feedback, identifying areas for improvement, and sharing best practices.

5. Ethical and Secure Coding Standards

All code, whether LLM-generated or not, must adhere to the organization's ethical guidelines and security standards. LLMs should be used to enhance, not compromise, code quality and security.
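
A concrete and common illustration: LLMs sometimes emit SQL built by string interpolation, which invites injection. The sketch below, using Python's standard sqlite3 module, contrasts that insecure pattern with the parameterized query a secure-coding standard would require a reviewer to insist on.

```python
# Illustrative security review: string-built SQL (a pattern LLMs sometimes
# emit) versus the parameterized query a secure-coding standard requires.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_insecure(name: str):
    # INSECURE: user input is interpolated into the SQL text, so a crafted
    # name such as "' OR '1'='1" changes the query's meaning.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_secure(name: str):
    # SECURE: the ? placeholder keeps the input as data, never as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_insecure("' OR '1'='1"))  # leaks every row
print(find_user_secure("' OR '1'='1"))    # returns nothing
```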