Boundary one: decide whether AI is an adviser or an executor
A common mistake is to slide from "AI can recommend the next step" into "AI can perform the next step." That gap is not a small technical increment; it demands a different responsibility model. In scenarios such as ticket classification, purchase anomaly review, or quotation drafting, AI advice is often useful. But directly changing a ticket's priority, sending a quotation, or turning an anomaly into a downstream business action crosses into execution.
A steadier approach is to classify system actions by risk: low-risk actions may be semi-automated, medium-risk actions may require explicit human confirmation, and high-risk actions should stay in recommendation mode only. Once that classification is written down, interface design, button wording, logging, and permissions become much easier to keep consistent.
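One way to make that classification concrete is to write it down in code, so the same risk table drives the UI, logging, and permission checks. The sketch below is illustrative only: the action names, the `Risk` enum, and the `execution_mode` helper are all assumptions, not part of any particular system.

```python
from enum import Enum, auto

class Risk(Enum):
    LOW = auto()     # e.g. suggesting a ticket label
    MEDIUM = auto()  # e.g. changing ticket priority
    HIGH = auto()    # e.g. sending a quotation

# Hypothetical mapping, written down once and reused everywhere.
ACTION_RISK = {
    "suggest_ticket_label": Risk.LOW,
    "change_ticket_priority": Risk.MEDIUM,
    "send_quotation": Risk.HIGH,
}

def execution_mode(action: str) -> str:
    """Return how the workflow may act on an AI recommendation for this action."""
    risk = ACTION_RISK[action]
    if risk is Risk.LOW:
        return "semi-automated"      # AI may act; humans audit afterwards
    if risk is Risk.MEDIUM:
        return "human-confirmation"  # AI prepares; a human explicitly approves
    return "recommendation-only"     # AI advises; execution stays manual
```

Because the table lives in one place, a reviewer can audit the whole automation policy at a glance, and interface copy ("Send" vs "Suggest") can be derived from it rather than decided ad hoc per screen.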
Design recommendation output and execution output as separate layers
Do not confuse “the API can do it” with “the workflow should automate it”
Risk classification should come before interface and automation design
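The first point above, keeping recommendation output and execution output as separate layers, can be sketched as two distinct types: a recommendation object that can never act on its own, and an executor that refuses to run without explicit approval. All names here (`Recommendation`, `execute`, `approved_by`) are hypothetical, shown only to illustrate the layering.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Recommendation:
    """Output of the advisory layer: a proposal, never an action."""
    action: str
    reason: str

def execute(rec: Recommendation, approved_by: Optional[str]) -> str:
    """Execution layer: acts only with an explicit human approval attached."""
    if approved_by is None:
        raise PermissionError(f"'{rec.action}' requires human confirmation")
    # In a real system this is where the side effect would happen,
    # and the approver's identity would be written to the audit log.
    return f"{rec.action} executed (approved by {approved_by})"
```

Separating the types makes the boundary checkable: code review and tests can verify that nothing calls `execute` without an approval, which is much harder when advice and action share one code path.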