Fast Feedback Loops for AI Development
Fast feedback loops are essential when developing with AI. The goal is a way of working that gives you a continuous feel for how things are going, so you can iterate quickly and change direction when needed. That requires a good mental model; here is my framework: plan - generate - check - adjust.
You start by understanding the problem well: why you are doing it, what success looks like, and how you intend to get there. All of this information is the input to the AI in the generate step. An initial version is generated, and the check phase starts, where the feedback loop takes place.
The acceptance criteria, or definitions of success, are turned into validations that determine whether the work meets the requirements. These validations can run with a human in the loop or as autonomous loops. The adjust phase then corrects whatever the checks flagged before the next iteration.
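The generate - check - adjust cycle above can be sketched as a small loop. This is a minimal sketch, not a prescribed implementation: `generate` and the entries of `checks` are hypothetical placeholders for whatever AI generation step and validations your project uses.

```python
# Minimal sketch of the plan -> generate -> check -> adjust loop.
# `generate` produces a draft from the plan plus prior feedback;
# each check returns None on success or a failure message.
from typing import Callable, List, Optional

def iterate(plan: str,
            generate: Callable[[str, List[str]], str],
            checks: List[Callable[[str], Optional[str]]],
            max_rounds: int = 5) -> str:
    """Run generate/check/adjust until all checks pass or rounds run out."""
    feedback: List[str] = []
    draft = generate(plan, feedback)
    for _ in range(max_rounds):
        failures = [msg for check in checks if (msg := check(draft)) is not None]
        if not failures:
            return draft          # all acceptance criteria met
        feedback = failures       # adjust: feed failures back into generation
        draft = generate(plan, feedback)
    return draft                  # best effort after max_rounds
```

The key design choice is that the checks, not the generator, decide when the loop stops; that keeps the acceptance criteria explicit and swappable.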
This practice comes from the old world of TDD (test-driven development), where the idea is the same: fast feedback loops. In practice, this means building minimal, testable increments and validating them quickly, so you can change direction before investing heavily.
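In that spirit, a minimal testable increment can be as small as one test written first and just enough code to pass it. The `slugify` function below is a hypothetical example, not from the original text.

```python
# TDD-style minimal increment: the test comes first and defines
# success for this small slice of functionality (hypothetical example).
import re

def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# Just enough implementation to make the test pass:
def slugify(text: str) -> str:
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)

test_slugify()  # fast feedback: runs in milliseconds
```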
When working with AI, it is crucial to design the checks you will run — automated tests, human reviews, or hybrid validations. These checks define what success looks like and act as the guardrails for your iterations.
Finally, embrace the tension between automation and human oversight. Fully autonomous systems can be powerful but brittle; human-in-the-loop approaches are safer and often more practical during early iterations. The right balance depends on the task’s risk and the available validation mechanisms.
This framework helps teams move faster while keeping control over quality and direction. Start small, iterate quickly, and make checking an explicit part of your development rhythm.