The 5 Laws of LLM-Assisted Working
These laws establish a framework for integrating LLMs (Large Language Models: AI systems that understand and generate human-like text) into professional workflows while maintaining high standards of quality, accountability, and ethical conduct. They promote the strategic use of AI as a collaborative tool while emphasizing the irreplaceable value of human expertise and judgment in the workplace.
Law 1: Freedom of Model Choice. Professionals are free to select whichever large language model best suits their specific tasks and goals. This flexibility enables customization, takes advantage of the differing strengths of individual models, and supports the distinct workflows of different fields of work.
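One way to honor this flexibility without fragmenting a team's tooling is to route each task type to a model through a single selection function, so swapping models never changes the surrounding workflow. The sketch below is illustrative only: the registry contents, model names, and pick_model helper are hypothetical placeholders, not any real provider's API.

```python
# Hypothetical task-to-model routing table. The model names are
# placeholders; substitute whichever models your organization approves.
MODEL_REGISTRY: dict[str, str] = {
    "code_review": "model-a",
    "summarization": "model-b",
    "drafting": "model-c",
}

def pick_model(task_type: str, default: str = "model-a") -> str:
    """Return the model routed to this task, falling back to a default."""
    return MODEL_REGISTRY.get(task_type, default)

print(pick_model("summarization"))  # -> model-b
print(pick_model("translation"))    # -> model-a (fallback)
```

Keeping the mapping in one place also makes model choices auditable: a reviewer can see at a glance which model handles which class of work.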
Law 2: Understanding and Validation. All outputs generated by an LLM must be thoroughly understood and validated by the professional. Blindly adopting AI-generated suggestions without critical evaluation is prohibited. Professionals are responsible for ensuring that outputs are accurate, relevant, and aligned with the specific context of their work.
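In practice, "understood and validated" can be made concrete by gating every output behind automated checks plus a recorded human sign-off. The following is a minimal sketch under assumed names: ReviewedOutput, the example checks, and the reviewer field are all hypothetical, and real validation criteria would be domain-specific.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ReviewedOutput:
    text: str
    checks_passed: bool = False
    approved_by: Optional[str] = None  # named human reviewer, never implicit

def approve(output: str,
            checks: list[Callable[[str], bool]],
            reviewer: str) -> ReviewedOutput:
    """Accept an LLM output only if every automated check passes,
    and record exactly who signed off on it."""
    result = ReviewedOutput(text=output)
    result.checks_passed = all(check(output) for check in checks)
    if result.checks_passed:
        result.approved_by = reviewer
    return result

# Example checks: output is non-empty and within a length budget.
checks = [lambda s: bool(s.strip()), lambda s: len(s) < 2000]
reviewed = approve("Draft summary ...", checks, reviewer="j.doe")
print(reviewed.checks_passed, reviewed.approved_by)  # True j.doe
```

The point of the pattern is accountability: an output is never merely "generated", it is accepted by a named person against explicit criteria.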
Law 3: Augmentation, Not Replacement. The integration of LLMs must complement, not replace, human expertise. While an LLM can assist in generating ideas, solving problems, or streamlining workflows, the ultimate responsibility for decisions, refinements, and implementation lies with the human professional.
Law 4: Continuous Improvement. Professionals must actively improve LLM-assisted workflows by providing feedback on outputs, identifying areas for enhancement, and sharing best practices. This iterative approach keeps the use of LLMs effective and responsible across diverse applications.
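Feedback is easiest to act on when it is captured in a structured, append-only form rather than in ad-hoc messages. Here is one possible shape for such a log, using only Python's standard library; the file name, fields, and rating scale are assumptions for illustration.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

FEEDBACK_LOG = Path("llm_feedback.jsonl")  # hypothetical local log file

def record_feedback(task: str, prompt: str, rating: int, note: str = "") -> None:
    """Append one structured feedback entry for later workflow review."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "task": task,
        "prompt": prompt,
        "rating": rating,  # assumed scale: 1 (unusable) .. 5 (used as-is)
        "note": note,
    }
    with FEEDBACK_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

record_feedback("summarization", "Summarize the Q3 report ...", rating=4,
                note="Good structure; dates needed correction.")
```

Periodically reviewing such a log as a team turns individual observations into shared prompt revisions and best practices.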
Law 5: Ethical Use. The use of LLMs in work must adhere to ethical principles and align with the values and standards of the field. Professionals must critically evaluate LLM outputs to avoid perpetuating bias and to ensure fairness, accountability, and compliance with industry norms and regulations.