An AI hater's guide to code with LLMs (The How-to) | All Confirmation Bias, All The Time
Build Guard-rails
Add more tests. Get the LLM to write tests, and to suggest tests to write. The effort profile of testing is entirely different now: we still have to read the tests and think about them just as much, but that is most of what we have to do. The actual implementation can be largely mechanical; review consists mostly of verifying that each test really asserts what it claims to test.
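To make that review step concrete, here is a minimal sketch (the `slugify` function and both tests are hypothetical, not from the article) of the kind of check the paragraph describes: an LLM-written test can look plausible while asserting almost nothing, and the reviewer's job is to tighten it until it pins down real behaviour.

```python
def slugify(title: str) -> str:
    """Hypothetical function under test: lowercase a title and join words with hyphens."""
    return "-".join(title.lower().split())

# A test an LLM might plausibly generate: it runs the code but the
# assertion passes for ANY non-empty result, so it verifies nothing.
def test_slugify_vacuous():
    result = slugify("Hello World")
    assert result  # weak: true for any non-empty string

# What review should turn it into: exact assertions that pin the
# actual behaviour, including whitespace handling.
def test_slugify_exact():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Extra   Spaces ") == "extra-spaces"
    assert slugify("already-lower") == "already-lower"
```

Reading the second test takes about as long as reading the first, but only the second one would catch a regression; that reading-and-tightening pass is where the human effort now goes.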