southwindcg 16 hours ago

My friends and I had a very simple version of this up and running perhaps fifteen years ago, where we fed-and-scraped Cleverbot[0] and had a second bot posting its replies in our chat. Obviously it was drastically inferior to state-of-the-art LLMs, but it was still amusing. It would only 'speak' when addressed by name, so we weren't flooding Cleverbot with input. Of course, it only ever had one line of context at a time.

[0] https://www.cleverbot.com/

entrepy123 a day ago

Probably a confluence of reasons. Maybe:

1. It is much more profitable to introduce new features very, very slowly.

2. If everyone's doing the same thing... one has to wonder what the people running those companies are up to. My gut says most of the founders/boards are probably all in the same WhatsApp/Signal group chat(s), and feel pressure to follow a certain groupthink.

3. It's much easier to profile individual users when the signals are nice and clean. I suspect modeling individual users (building digital twins) could be a really big and mostly quiet part of the longer play for these companies. That pristine initial data might be pretty nice to have.

4. Maybe it would be really boring, or it doesn't test well. One appeal of talking to the computer is not having to deal with pesky other humans and all the issues they have. Many people instinctively despise reading text that a computer generated because SOME OTHER HUMAN PROMPTED IT TO (with some exceptions, of course). This might be called the "default conciseness" problem.

Nothing stops you from hosting your own LLM and hooking a web UI to it in such a way that multiple users can access it, or doing the same with a commercial/networked API.
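The core of such a setup is just one shared transcript that every user appends to. A minimal sketch, assuming an OpenAI-compatible chat-completions message format; `query_model` is a stub standing in for whatever self-hosted or commercial endpoint you'd actually call:

```python
def query_model(messages: list[dict]) -> str:
    """Placeholder for a real API call (e.g. POST /v1/chat/completions)."""
    return f"[model saw {len(messages)} messages]"

class GroupChat:
    """One shared transcript that any number of users post into."""

    def __init__(self) -> None:
        self.history: list[dict] = []

    def post(self, user: str, text: str) -> str:
        # Tag each turn with its author so the model can tell users apart.
        self.history.append({"role": "user", "content": f"{user}: {text}"})
        reply = query_model(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply
```

A web UI would just expose `post` over HTTP; the interesting part is that all users share one `history`, so the model sees the whole group conversation rather than per-user threads.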

Not a bad idea to try, really. Maybe you're the first one who thought of it...