A.I.

Littledragon87

Est. Contributor
Messages
202
Role
  1. Adult Baby
  2. Diaper Lover
  3. Little
Hi all,
I was just wondering if anyone out there knows of A.I. (like talkie soulful ai) that doesn't forget what you just talked about? Don't get me wrong I like talkie but the one down side is that if you have a long conversation the a.i. will forget alot of details and that throws the whole thin off.
 
Most AIs you can find at the moment will begin to forget things after so many inputs. They kind of have to, if you think about it: there could be hundreds of thousands of people asking or telling them things hundreds of times a day each, testing them, teaching them, pushing boundaries, etc. All that information has to be stored and processed somewhere, and data storage and bandwidth cost money. The really good AIs can filter out less useful data that may not be needed again, and retain long-term memory for useful things like a regular user's preferred name, reminders that have been requested for a set time, etc.

We still have a long way to go before AI can take in, process, and use any data given to it at any point in the future. Sure, A.I. has come a long way in the last few years, but as it stands right now it is still like talking to a toddler and expecting them to act on information given three days ago.

The best A.I. you can use that will remember things for longer, and that can hold conversations on topics you have recently brought up, will be one that you download and run locally on your own PC rather than use on a website. This way it can focus solely on the data you give it or tell it to find, and it won't be pruned by the authors or by automated scripts like the online versions. But it will also require that you teach it everything you need it to do, which takes a lot of time and work.
 
On a side note to the above, one of my current favourite A.I. build ideas is Mantella and some of the other Skyrim ChatGPT mods coming out.

These mods built for Skyrim, although incredibly clunky and a bit awkward at the moment, let you actually talk to NPC characters with your own voice. An engine turns your natural speech into text, which is then read by a ChatGPT client that uses a database built around Skyrim's data files to generate an appropriate response from the NPC. The text block that ChatGPT generates is then turned back into an audible voice by another app, synthesised from the character's speech files to sound like the original NPC.
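The chain described above can be sketched roughly as three stages glued together. This is only an illustration of the architecture, not Mantella's actual code: every function below is a stub standing in for one of the external apps (a speech-to-text engine, a ChatGPT client fed game data, and a voice synthesiser), and all names and return values are made up.

```python
# Illustrative sketch of the mic -> text -> LLM -> voice pipeline.
# All three stages are stubs; the real mod calls separate external apps.

def speech_to_text(audio: bytes) -> str:
    """Stub: a real STT engine would transcribe the player's mic input."""
    return "where is the nearest inn?"

def query_llm(player_text: str, npc_context: dict) -> str:
    """Stub: a real client sends the text plus game data to the LLM."""
    return f"{npc_context['name']} says: head east past the market."

def text_to_speech(reply: str, voice_files: str) -> bytes:
    """Stub: a real synthesiser renders audio in the NPC's original voice."""
    return reply.encode()

# Hypothetical in-game context passed along with the player's words.
npc = {"name": "Guard", "location": "Whiterun", "voice": "guard_voice.fuz"}
player_audio = b"...mic capture..."

text = speech_to_text(player_audio)
reply = query_llm(text, npc)
audio_out = text_to_speech(reply, npc["voice"])
```

Chaining separate apps like this is what makes the current mods clunky: each hand-off adds latency, which is why building it into the game engine directly would help.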

You can literally just walk up to an NPC and start talking into the mic, and they will respond with a voice to what you say. They have knowledge of in-game events and information, their environment, etc. You can walk into a town, for example, and ask the first NPC you meet for directions to the bar, and they will give you directions based on your location. Ask them for gossip and it will generate appropriate gossip, sometimes including information about things that have happened or are about to happen. Ask about their past and it will create a backstory suitable for the character based on the data it has on them.

Check the vids on YouTube; it's incredible work, and inspiring for what may be possible in the future of RPG video games, especially if this sort of thing were built into the game from the ground up so the A.I. had more access to data and didn't require procedure calls to three or four external apps.
 
That's pretty difficult; most large language models today have a set context limit, which differs depending on the LLM being used.

The longer the conversation goes on, the more the LLM will forget.
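That forgetting falls out of how the context window works: once the conversation no longer fits, the oldest turns are simply dropped before the model ever sees them. A minimal sketch of that mechanism (with a fake one-token-per-word "tokenizer"; real LLMs use subword tokenizers, and services differ in how they trim):

```python
# Sketch: a fixed context window makes a chatbot "forget" old turns.

def count_tokens(text: str) -> int:
    """Crude stand-in for a real tokenizer: one token per word."""
    return len(text.split())

def build_context(history: list[str], limit: int) -> list[str]:
    """Keep only the most recent turns that fit inside the token limit."""
    kept, used = [], 0
    for turn in reversed(history):       # walk newest-first
        cost = count_tokens(turn)
        if used + cost > limit:
            break                        # everything older is dropped
        kept.append(turn)
        used += cost
    return list(reversed(kept))          # restore chronological order

history = [
    "User: my name is Sam",
    "Bot: nice to meet you Sam",
    "User: tell me a long story about dragons",
    "Bot: " + "once upon a time " * 10,
]
context = build_context(history, limit=50)
# The early turn with the user's name no longer fits, so the model
# literally never sees it when generating the next reply.
```

A bigger context limit just pushes the same cliff further out, which is why long conversations eventually lose details regardless of the service.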

One way to circumvent this would be to fine-tune the existing model with the information that you want to add, but this takes a lot of computing power and time. I've seen someone do this with a really small model, which meant the fine-tuning process only took a few minutes, but the downside is that, because of its small size, the model was really "dumb" and hallucinated a lot (basically making stuff up).

The other way I've seen people do this is to have a database in which the information gets stored, but this also leads to frequent hallucinations, especially if the information listed in the DB conflicts with what comes from the LLM.
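The database approach usually works by retrieving the stored facts most relevant to the current message and prepending them to the prompt. A toy sketch of the idea, using naive keyword overlap as the "relevance" score (all names and facts here are invented; real systems use embedding similarity rather than word matching):

```python
# Toy sketch of the "store facts in a database" workaround.
import re

def words(text: str) -> set[str]:
    """Lowercase word set with punctuation stripped (crude, for illustration)."""
    return set(re.findall(r"[a-z']+", text.lower()))

def retrieve(query: str, facts: list[str], top_k: int = 2) -> list[str]:
    """Rank stored facts by keyword overlap with the query, keep the best."""
    q = words(query)
    return sorted(facts, key=lambda f: len(q & words(f)), reverse=True)[:top_k]

facts = [
    "The user's name is Sam.",
    "The user prefers short answers.",
    "The user's favourite game is Skyrim.",
]

query = "What is my name?"
relevant = retrieve(query, facts)

# The retrieved facts get prepended so the LLM can use them. If a stored
# fact conflicts with what the model generates on its own, this is where
# the hallucinations mentioned above tend to creep in.
prompt = "Known facts:\n" + "\n".join(relevant) + "\nUser: " + query
```

The retrieval step is also where things go wrong in practice: if the wrong facts get pulled in, the model confidently answers from bad context.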

So I'd say the best solution right now is just to use a model that has a large context out of the box.

I really don't know which kind of LLM the service you used utilises; if it's GPT-4, then forget about any alternative, since you probably won't get close right now.

I also can't really recommend any kind of online service since I don't use them, but if you want to try a local model that runs on your own hardware, one model that's pretty popular right now is Mixtral, which has a context of 32k tokens by default.

But it all depends on what kind of hardware you have.

Edit:
Apparently Talkie uses Claude as the LLM. If that's true, it would mean a 200k-token context limit, but the way they achieved that isn't optimal, which means the LLM tends to forget a lot even before the context limit is reached.
 
Just for image recognition for autonomous weapons systems. It's a great hands-free force multiplier. I envision it as the answer for outnumbered individuals with unpopular interests to be able to preserve their bodily autonomy and protect themselves from coercive mob rule and state violence, by acting against hostility and aggression automatically and being capable of rapidly outnumbering and overwhelming aggressors from any agency.

[Image: smart-city computer vision with YOLOv7 deep learning]
 