This is the core agent of queryverify. To use it fully, please visit queryverify.com.
IMPORTANT: This agent was open-sourced for educational purposes, specifically for n8n developers who want a peek at ready-to-use solutions. It is not easy to set it up for SQL double-checking, and it is only one part of the double-checking process; a lot of the work happens in other services.
An AI agent that automatically rechecks AI-generated SQL in an isolated test environment.
It's built on n8n; import the isra36_n8nAgent.json file.
Official page of the queryverify agent in n8n
Don't trust complex AI-generated SQL queries without double-checking them in a safe environment. That's where queryverify comes in. It automatically creates a test environment with the necessary data, generates code for your task, runs it to double-check for correctness, and handles errors if necessary. If you enable auto-fixing, queryverify will detect and fix issues on its own. If not, it will ask for your permission before making changes during debugging. In the end, you get thoroughly verified code along with full details about the environment it ran in.
It is an embedded chat for the website, but you can pin the input data and run it on your own n8n instance.
- `sessionId`: uuid_v4. Required to handle ongoing conversations and to create table names (used as a prefix).
- `threadId`: string | nullable. If `aiProvider` is openai, conversation history is managed on OpenAI's side. This is not needed in the first request — it will start a new conversation. For ongoing conversations, you must provide this value. You can get it from the `OpenAIMainBrain` node output after the first run. If you want to start a new conversation, just leave it as `null`.
- `apiKey`: string. Your API key for the selected `aiProvider`.
- `aiProvider`: string. Currently supported values: openai, openrouter.
- `model`: string. The AI model key (e.g., `gpt-4.1`, `o3-mini`, or any supported model key from OpenRouter).
- `autoErrorFixing`: boolean. If `true`, it will automatically fix errors encountered when running code in the environment. If `false`, it will ask for your permission before attempting a fix.
- `chatInput`: string. The user's prompt or message.
- `currentDbSchemaWithData`: string. A JSON representation of the database schema with sample data. Used to inform the AI about the current database structure during an ongoing conversation. Please use the `'[]'` value in the first request. Example string for a populated database: `'{"users":[{"id":1,"name":"John Doe","email":"john.d@example.com"},{"id":2,"name":"Jane Smith","email":"jane.s@example.com"}],"products":[{"product_id":101,"product_name":"Laptop","price":999.99}]}'`
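As a rough sketch, the pinned input could be assembled like this. This is a hypothetical Python snippet — the field names come from the parameter list above, while the API key and prompt are placeholders:

```python
import json
import uuid

# Hypothetical example of the pinned input payload for the workflow.
# Field names follow the parameter list above; values are placeholders.
payload = {
    "sessionId": str(uuid.uuid4()),   # also used as a table-name prefix
    "threadId": None,                 # null on the first request starts a new conversation
    "apiKey": "sk-...your-key...",    # placeholder, not a real key
    "aiProvider": "openai",           # or "openrouter"
    "model": "gpt-4.1",
    "autoErrorFixing": True,          # fix runtime errors without asking
    "chatInput": "Write a query that returns the top 5 customers by total spend.",
    "currentDbSchemaWithData": "[]",  # '[]' on the first request
}

print(json.dumps(payload, indent=2))
```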
Make sure to fill in your credentials:
- Your OpenAI or OpenRouter API key
- Access to a local PostgreSQL / MySQL database for test execution
You can view your generated tables using your preferred SQL GUI. We recommend DBeaver. Alternatively, you can activate the “Deactivated DB Visualization” nodes below. To use them, connect each to the most recent successful Set node and manually adjust the output. However, the easiest and most efficient method is to use a GUI.
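Since the `sessionId` is used as a table-name prefix, you can list a session's tables in any SQL GUI by filtering on that prefix. The sketch below uses SQLite purely to stay self-contained (the real setup targets PostgreSQL/MySQL, where you would query `information_schema.tables` instead); the session id and table name are made up:

```python
import sqlite3

# The workflow prefixes test tables with the sessionId. SQLite is used here
# only to keep the sketch runnable; the actual setup is PostgreSQL/MySQL.
session_id = "3f2b1c9e"  # in practice a uuid_v4

conn = sqlite3.connect(":memory:")
conn.execute(f'CREATE TABLE "{session_id}_users" (id INTEGER, name TEXT)')

# List tables belonging to this session
# (PostgreSQL equivalent: SELECT table_name FROM information_schema.tables
#  WHERE table_name LIKE '3f2b1c9e%').
rows = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' AND name LIKE ?",
    (f"{session_id}_%",),
).fetchall()
print([r[0] for r in rows])  # → ['3f2b1c9e_users']
```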
- We store all input values in the `localVariables` node. Please use this node to get the necessary data.
- OpenAI has a built-in assistant that manages chat history on their side. For OpenRouter, we handle chat history locally. That's why we use separate nodes like `ifOpenAi` and `isOpenAi`. Note that `if` logic can also be used inside nodes.
- The `AutoErrorFixing` loop will run only a limited number of times, as defined by the `isMaxAutoErrorReached` node. This prevents infinite loops.
- The `Execute_AI_result` node connects to the SQL test database used to execute queries.
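The bounded auto-fix loop described above can be sketched as follows. The function names are hypothetical stand-ins for the n8n nodes, and the fix-up logic is deliberately trivial; the point is the attempt limit mirroring `isMaxAutoErrorReached`:

```python
# Sketch of the bounded auto-error-fixing loop. All names here are
# hypothetical stand-ins for the corresponding n8n nodes.
MAX_AUTO_ERROR_FIXES = 3  # the limit that isMaxAutoErrorReached enforces

def run_sql(query):
    """Stand-in for Execute_AI_result: raise on failure, return rows on success."""
    if "FORM" in query:  # simulate a typo-induced SQL error
        raise RuntimeError("syntax error near 'FORM'")
    return [("ok",)]

def ask_model_to_fix(query, error):
    """Stand-in for the error-fixing prompt sent back to the model."""
    return query.replace("FORM", "FROM")

def execute_with_auto_fix(query):
    for attempt in range(MAX_AUTO_ERROR_FIXES + 1):
        try:
            return run_sql(query)
        except RuntimeError as err:
            if attempt == MAX_AUTO_ERROR_FIXES:
                raise  # isMaxAutoErrorReached: stop looping, surface the error
            query = ask_model_to_fix(query, err)

print(execute_with_auto_fix("SELECT * FORM t"))  # → [('ok',)]
```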
This setup is built for PostgreSQL/MySQL, but it can be adapted to any programming language, and the logic can be extended to any programming framework.
To customize the logic for other programming languages:
- Change the `instruction` parameter in the `localVariables` node.
- Replace the `Execute_AI_result` SQL node with another executable node. For example, you can use the HTTP Request node.
- Update the `GenerateErrorPrompt` node's `prompt` parameter to generate code specific to your target language or framework.
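For the HTTP Request replacement, one possible shape is to POST the generated code to a sandboxed runner service you control. The URL, endpoint, and JSON body below are assumptions for illustration, not part of the workflow; the request is built but not sent so the sketch runs without a server:

```python
import json
import urllib.request

# Hypothetical sketch of what an HTTP Request node replacing Execute_AI_result
# might send. The URL and JSON shape are assumptions, not part of the workflow.
generated_code = "console.log([1, 2, 3].map(x => x * 2));"

req = urllib.request.Request(
    url="http://localhost:8080/run",  # your sandboxed runner (assumed)
    data=json.dumps({"language": "javascript", "code": generated_code}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send it; omitted here so the sketch
# stays runnable without a server.
print(req.method, req.full_url)
```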
- Make sure that the AI truly understands your complex task. A strategy from Dale Carnegie: don't ask whether someone understood you; ask them to retell it in their own words. Then approve or correct the AI's retelling of your task, and only after this validation pass the idea to the model for code generation. Separate understanding the problem from generating the code, as Cursor does in plan mode.
- Connect the DB schema without data.
- Extend support to other programming languages.
- Extend support to programming frameworks.
- Automate writing of test cases.