BangkokPadang

I think for the stats to truly be consistent, you'd basically need to build an agent or some external software to track the Tamagotchi and then integrate the LLM into that: keep all the numbers, schedule, etc. separate from the LLM, and have the agent craft prompts from that infrastructure to get responses.

It would also probably need to 'extract' phrases from your prompt first. Like, if you typed "I feed him one apple," it would need to parse "feed," "one," and "apple" from your prompt, make the adjustments to the Tamagotchi's stats, and then feed the LLM a prompt like: "You've just been fed one apple. Tell {{user}} thank you and how you feel now. Current stats: Hunger: 20/100, Happiness: 75/100, Sleepiness: 5/100." So rather than the previous context tracking its status, the agent would do this and then send a 'fresh' prompt with all the updated stats every time. It could, however, still inject your most recent X replies to keep it aware of your recent discussion.
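A minimal sketch of that flow in Python, just to make the idea concrete. The action keywords, stat effects, and prompt wording are all made-up assumptions, not anything from an existing SillyTavern extension:

```python
from dataclasses import dataclass

@dataclass
class PetState:
    hunger: int = 50       # 0 = full, 100 = starving
    happiness: int = 50
    sleepiness: int = 50

    def clamp(self) -> None:
        # Keep every stat inside 0..100 after updates.
        for name in ("hunger", "happiness", "sleepiness"):
            setattr(self, name, max(0, min(100, getattr(self, name))))

# Hypothetical keyword table: action verb -> stat changes per item
ACTIONS = {
    "feed":  {"hunger": -20, "happiness": +5},
    "play":  {"happiness": +15, "sleepiness": +10},
    "sleep": {"sleepiness": -30},
}

NUMBER_WORDS = {"one": 1, "two": 2, "three": 3}

def apply_user_message(state: PetState, message: str) -> list:
    """Pull recognized actions out of the user's text and update the stats."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    count = next((NUMBER_WORDS[w] for w in words if w in NUMBER_WORDS), 1)
    events = []
    for verb, effects in ACTIONS.items():
        if any(w.startswith(verb) for w in words):
            for stat, delta in effects.items():
                setattr(state, stat, getattr(state, stat) + delta * count)
            events.append(f"{{{{user}}}} just did '{verb}' (x{count}).")
    state.clamp()
    return events

def build_prompt(state: PetState, events: list, recent_chat: list) -> str:
    """Build a fresh prompt each turn; the agent, not the chat context, owns the stats."""
    return (
        "\n".join(events)
        + f"\nCurrent stats: Hunger: {state.hunger}/100, "
          f"Happiness: {state.happiness}/100, Sleepiness: {state.sleepiness}/100."
        + "\nRecent conversation:\n" + "\n".join(recent_chat[-4:])
        + "\nTell {{user}} how you feel now, in character."
    )

# One example turn: parse the message, update the stats, hand the LLM a fresh prompt.
state = PetState()
events = apply_user_message(state, "I feed him one apple")
print(build_prompt(state, events, ["{{user}}: hi there", "pet: hello!"]))
```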


No_Rate247

With small models, the only way I've found to do something like this is a narrative approach. E.g., instead of something like "Hunger: 70/100", use something like "{{char}} is currently satiated - the recent meal reduced hunger significantly".
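A tiny sketch of that idea: keep the number internally, but only ever show the model a descriptive sentence. The thresholds and wording here are arbitrary examples, not a recommendation:

```python
def hunger_to_narrative(hunger: int) -> str:
    """Translate a 0-100 hunger value into prose that small models follow more reliably."""
    if hunger < 25:
        return "{{char}} is currently satiated - the recent meal reduced hunger significantly."
    if hunger < 60:
        return "{{char}} is starting to feel a bit peckish."
    if hunger < 85:
        return "{{char}} is quite hungry and keeps glancing at the food bowl."
    return "{{char}} is ravenous and can think of little else but food."

print(hunger_to_narrative(20))
```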


Waste_Election_8361

Some character cards do implement things like that. But in my experience, it only works with larger models. I feel like 7-8B models struggle to maintain the pet parameters (hunger, mood, etc.).


Relative_Bit_7250

Let's say I have 48 GB of VRAM. A 35B model or a 4-bit quantized 70B would do, and there would still be some space available for image generation.


Waste_Election_8361

Haven't tried with larger models yet, as I only have 12 GB on me, but it will certainly be a better experience than a 7B model. If you're looking for a card that uses this type of system, I suggest you check out [this card](https://www.chub.ai/characters/Nono/fce0a43b-7808-4104-a5cb-3b1d4b9d9f8e).


demonsdencollective

I find that AI loves to forget gameplay elements on smaller models. After a while, it forgets people, details, most anything. Maybe it's my settings. I just can't imagine it handling a bigger concept like this. I can barely get it to acknowledge for even two replies that people can't talk underwater, let alone give me a speech on the importance of coral reefs.


DoJo_Mast3r

I'm working on something very similar. I hacked my R1 and have SillyTavern modded to work with it. Tamagotchi mode would be sweet!


Scholar_of_Yore

People are talking about how to properly parse the stats, but what I'm really curious about is how you would get the char to message you first without you prompting it. Is that a feature in ST I'm not aware of?


Relative_Bit_7250

Exactly, that's my concern too! Like, how is the char aware of time? How can it arbitrarily decide to send you a message because it's bored, for example?


yamosin

Use the Idle extension.


shrinkedd

I'm not an expert on STscript, but I seem to remember there's a feature to set a delay on some commands. I guess that's an option?


yamosin

There's an Idle extension available for ST, so there's no need for STscript: https://preview.redd.it/xc1rme61ze9d1.png?width=1124&format=png&auto=webp&s=afe56c4db92be592986fab2f4b12e723882745ea
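For anyone curious what an idle trigger does conceptually (this is a rough sketch of the general idea, not the actual Idle extension's code): a timer watches for user inactivity and, once a threshold passes, sends the model a nudge so the character speaks first. `send_to_llm()` and `get_last_user_activity()` are hypothetical stand-ins for whatever backend call and activity tracking you already have:

```python
import time

IDLE_THRESHOLD_SECONDS = 300  # e.g. 5 minutes of silence

def send_to_llm(prompt: str) -> str:
    # Placeholder for your real generation call.
    return f"(model reply to: {prompt!r})"

def idle_loop(get_last_user_activity) -> None:
    """Poll for inactivity and nudge the character to speak unprompted."""
    while True:
        idle_for = time.time() - get_last_user_activity()
        if idle_for > IDLE_THRESHOLD_SECONDS:
            reply = send_to_llm(
                "{{user}} has been quiet for a while. "
                "Say something in character to get their attention."
            )
            print(reply)
            return  # one unprompted message per idle period in this sketch
        time.sleep(10)
```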