The 5 Laws of LLM-Assisted Coding
These laws provide a framework for integrating LLMs (large language models, the AI systems that can generate, explain, and assist with code) into a coding workflow while maintaining high standards of code quality, security, and developer understanding. They encourage the use of AI as a powerful tool while keeping human expertise and oversight at the center of the software development process.
Law 1: Freedom of Model Choice. Developers are free to use any large language model for code generation. This preserves flexibility and lets each developer play to the strengths of the models they know best.
Law 2: Mandatory Understanding. All code generated with the assistance of an LLM must be thoroughly understood and validated by the developer (or tester, architect, etc.). Developers are encouraged to document that understanding to ensure traceability and accountability; simply copying and pasting without comprehension is strictly prohibited.
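What documented understanding looks like will vary by team. As one minimal sketch (the provenance comment convention, its field names, and the function below are hypothetical illustrations, not an established standard), a developer might annotate an LLM-assisted function with a short note recording which model drafted it, who validated it, and what was actually checked:

```python
# Hypothetical provenance note for LLM-assisted code:
#   LLM-assisted: model=<provider/model>, validated-by=jdoe
#   Verified: behavior on empty input; covered by tests in test_slug.py
import re

def slugify(text: str) -> str:
    """Convert arbitrary text to a lowercase, hyphen-separated slug."""
    text = text.strip().lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)  # collapse runs of non-alphanumerics
    return text.strip("-")                   # drop leading/trailing hyphens
```

A lightweight convention like this gives reviewers a trail to audit and makes the claim "I understood this code" concrete rather than implicit.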
Law 3: Human-Led Review. Final code review and publication must involve human oversight, complemented by automated tools for quality and security analysis. Reviewers may use LLMs to assist in the review process, but the ultimate decision and responsibility lie with the human reviewer.
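The automated side of this law can be wired in ahead of human review. Below is a minimal sketch of a pre-review gate, assuming Ruff (a Python linter) and Bandit (a Python security scanner) are installed; the specific tools and the src/ path are illustrative assumptions, and any equivalent quality and security analyzers fit the same pattern:

```python
"""Pre-review gate: run automated quality and security checks.

A sketch only: Ruff and Bandit are assumed to be installed, and src/
is a placeholder for your project layout. The gate blocks obviously
broken submissions; the human reviewer still makes the final call.
"""
import subprocess
import sys

CHECKS = [
    ["ruff", "check", "src/"],       # style and correctness lints
    ["bandit", "-r", "src/", "-q"],  # scan for common security issues
]

def main() -> int:
    failed = False
    for cmd in CHECKS:
        if subprocess.run(cmd).returncode != 0:
            print(f"FAILED: {' '.join(cmd)}", file=sys.stderr)
            failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main())
```

Running a gate like this as a pre-merge step keeps the human reviewer's attention on design and correctness rather than on issues a machine can catch.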
Law 4: Continuous Improvement. Developers and reviewers must actively improve the LLM-assisted coding process by providing feedback, identifying weaknesses, and sharing best practices.
Law 5: Ethics and Security. All code, whether LLM-generated or not, must adhere to the organization's ethical guidelines and security standards. LLMs should be used to enhance, never to compromise, code quality and security.