Let’s revisit the idea of combining Large Language Models (LLMs) with backtracking, this time to generate code. By restricting which tokens the LLM is allowed to emit at each step, we can force it to produce only valid queries.
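
As a rough sketch, this kind of constrained decoding amounts to masking the model’s next-token scores so that only grammar-approved tokens survive. Everything here (`model.next_token_logits`, `grammar.allowed_next_tokens`, `grammar.is_complete`) is a hypothetical interface used for illustration, not a specific library API:

```python
import math

def generate_constrained(model, tokenizer, prompt, grammar, max_tokens=256):
    # Hypothetical interfaces: model returns one score per vocabulary entry,
    # grammar says which token IDs are legal at the current position.
    tokens = tokenizer.encode(prompt)
    for _ in range(max_tokens):
        logits = model.next_token_logits(tokens)
        allowed = set(grammar.allowed_next_tokens(tokens))
        # Mask out everything the grammar forbids, then pick the best remaining token.
        masked = [logits[t] if t in allowed else -math.inf for t in range(len(logits))]
        best = max(range(len(masked)), key=lambda t: masked[t])
        tokens.append(best)
        if grammar.is_complete(tokens):
            break
    return tokenizer.decode(tokens)
```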

But why stop there? On top of enforcing syntactic validity, we can go a step further by compiling the generated code and running its tests. If the tests fail, we backtrack: revert the LLM’s state to an earlier step and try a different continuation. If all the options at that step are exhausted, or all of them score poorly, we backtrack further still.
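
The loop itself is a plain depth-first search over generation checkpoints. Here is a minimal sketch, assuming hypothetical helpers: `checkpoint` objects that can be extended and rendered to source, an `expand` function that proposes alternative continuations, and a `compile_and_test` function that runs the test suite:

```python
def generate_with_backtracking(checkpoint, expand, compile_and_test):
    """Depth-first search over partial programs: extend, test, backtrack on failure."""
    if checkpoint.is_complete():
        code = checkpoint.render()
        # A finished candidate either passes its tests or forces a backtrack.
        return code if compile_and_test(code) else None
    # Try the alternative continuations of this checkpoint, most promising first.
    for child in expand(checkpoint):
        result = generate_with_backtracking(child, expand, compile_and_test)
        if result is not None:
            return result
    # Every continuation from this checkpoint failed, so backtrack to the caller.
    return None
```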

This process mimics real-world programming: try, test, refine, and repeat. By combining LLMs with backtracking, we can create a more robust code generation system.

Backtracking is not a new concept in computer science. It’s the workhorse behind constraint satisfaction solvers, puzzle solvers, and many optimization algorithms. One of its perennial challenges, however, is finding effective heuristics for navigating the solution space.

This is where LLMs can shine. By using the LLM’s ranked next-token probabilities as (or alongside) the ordering heuristic, we can steer the backtracking search toward the more promising regions of the solution space first. This could mean faster convergence, or finding solutions that hand-crafted heuristics alone would struggle to reach.
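
Concretely, the heuristic is just the order in which children are explored. A sketch, again with hypothetical helpers (`model.top_k` returning `(token_id, logprob)` pairs, `checkpoint.extend` producing a child checkpoint):

```python
def expand(checkpoint, model, k=3):
    """Order child checkpoints by the model's own next-token scores."""
    ranked = model.top_k(checkpoint.tokens, k)           # [(token_id, logprob), ...]
    ranked.sort(key=lambda pair: pair[1], reverse=True)  # most likely continuation first
    return [checkpoint.extend(token, score=logprob) for token, logprob in ranked]
```

With `model` bound via a closure or `functools.partial`, an `expand` like this plugs straight into the backtracking loop sketched earlier, and the accumulated scores can double as the “low score” cutoff for pruning whole branches before they are fully explored.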
