Nice, but can somebody tell me if this performs better than my simple Postgres MCP using npx? My current setup uses the LLM to search through my local Postgres in multiple steps. I guess pgmcp does those multiple steps in the background and returns the final result to the LLM calling the MCP tool?
Codex:

```
[mcp_servers.postgresMCP]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-postgres", "postgresql://user:password@localhost:5432/db"]
```
Cursor:

```
"postgresMCP": {
  "command": "npx",
  "args": [
    "-y",
    "@modelcontextprotocol/server-postgres",
    "postgresql://user:password@localhost:5432/db"
  ]
},
```
With my setup I can easily switch between LLMs.
Nice! Is there a way for the agent to know about its own queries / resource usage?
E.g. could the agent actively monitor the memory/CPU/time usage of a query and cancel it if it's taking too long?
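The time budget, at least, is something Postgres can enforce directly. A minimal Go sketch of the idea (my own illustration, not how pgmcp does it; assumes the github.com/lib/pq driver and a placeholder table big_table):

```go
package main

import (
	"context"
	"database/sql"
	"fmt"
	"log"
	"time"

	_ "github.com/lib/pq"
)

func main() {
	db, err := sql.Open("postgres",
		"postgresql://user:password@localhost:5432/db?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Pin one connection so the SET below applies to the same session
	// that runs the query (database/sql otherwise pools connections).
	conn, err := db.Conn(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Server-side guard: Postgres aborts any statement running > 5s,
	// even if the client hangs or disappears.
	if _, err := conn.ExecContext(ctx, "SET statement_timeout = '5s'"); err != nil {
		log.Fatal(err)
	}

	// Client-side guard: when ctx expires, the driver sends a cancel
	// request to the server and the call returns an error.
	var n int
	err = conn.QueryRowContext(ctx, "SELECT count(*) FROM big_table").Scan(&n)
	if err != nil {
		fmt.Println("query cancelled or failed:", err)
		return
	}
	fmt.Println("rows:", n)
}
```

Memory/CPU per query is harder to meter from the client; the usual route is to poll pg_stat_activity for long runners and call pg_cancel_backend(pid), which a server like this could expose as a tool.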
How does this protect against lethal trifecta attacks like the ones here: tramlines.io/blog?
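For context, the minimum guardrail I'd expect is a read-only session, so injected SQL at least can't write. A Go sketch of that (again my own illustration, not something pgmcp claims to do; note that read-only does not stop read-based exfiltration, which is the heart of the trifecta):

```go
package main

import (
	"context"
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq"
)

func main() {
	db, err := sql.Open("postgres",
		"postgresql://user:password@localhost:5432/db?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	ctx := context.Background()
	conn, err := db.Conn(ctx) // pin one session so the SET sticks
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Every transaction in this session is now read-only; any INSERT,
	// UPDATE, or DELETE fails with a read-only transaction error.
	if _, err := conn.ExecContext(ctx, "SET default_transaction_read_only = on"); err != nil {
		log.Fatal(err)
	}

	_, err = conn.ExecContext(ctx, "DELETE FROM users") // placeholder table
	fmt.Println("write attempt:", err)                  // expect a read-only error
}
```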
Shameless plug: I literally built a desktop app that does the exact same thing, but with any data file you throw at it. CSV, JSON, Excel, and Parquet are all supported. And processing happens locally, without your files being uploaded to an LLM provider.
https://zenquery.app
recently posted https://news.ycombinator.com/item?id=43520953
Different project