# General Tips
- Test with a range of model sizes; a higher-parameter model can carry more complex concepts, or a worse-written character, than a 7B or 11B model can.
  - It's nice (though not essential) to note for users which models you used during testing.
- If you're on a service that doesn't charge by the token or generation, swipe! A quick, low-effort (and not terrible) way to test a greeting is to just write one message to the bot and see how the LLM responds to it over multiple generations. Shallow testing, but not in a bad way.
- Test in-depth and see how the bot acts when the greeting drops out of context.
  - Yes, I know I just said shallow testing is fine, but checking whether the character acts and develops as you intend over a longer chat is important too.
  - If you're using a model with a high context limit, you can lower the maximum context in the generation parameters rather than waiting for the greeting to fall out of context naturally.
- Use multiple personae; your character probably isn't going to respond to a friend the same way they would a rival or a superior.
  - If this bot is part of a setting where you intend to have more than just the one character, you can set up personae mimicking the other characters in the setting - for certain tests, that can be easier than throwing the characters into a group chat.
- Be mean; how does the bot react if you threaten them or their friends? What if you punch them?
  - ...I'm admittedly bad at this one. It's silly, but I feel terrible torturing them.
- Whatever frontend you're using, it probably has some functionality for chat-specific notes added to context - Author's Note in SillyTavern, Chat Memory on chub.ai. This is useful for setting up and testing particular states - friendships, rivalries, romance - without roleplaying your way up to them.
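For example, a hypothetical Author's Note for testing an established rivalry (the square brackets are a common convention, not a requirement; adjust the wording and formatting to whatever your model responds to):
```
[{{char}} and {{user}} have been rivals for years, and {{char}} still resents losing to {{user}} last spring.]
```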
## SillyTavern
- Quick Reply is an extension bundled with ST by default which lets you create a custom set of messages you can send at the press of a button; I've used it to build a toolkit of testing prompts.
  - Some models respond better to different formatting; I have separate sets of quick replies for general LLMs, NovelAI LLMs, et cetera, each formatted to match how that model takes instructions.
- `{{bias "text goes here"}}` is a macro that creates an ephemeral message (from System, but still in the User role) that only remains in context for the next generation of the LLM, which can be useful for directing the LLM without leaving the command in context or needing to delete it manually. The system message remains visible in the chat log.
- If you're using Quick Reply, `{{input}}` is a placeholder that resolves to the text in the input field; a quick reply set up with `{{bias "{{input}}"}}` will turn whatever you have in the text input field into one of those ephemeral messages.
- [WorldInfoInfo](https://github.com/LenAnderson/SillyTavern-WorldInfoInfo) is an extension that displays what lorebook entries were in context for the latest LLM generation; much better for checking whether they activated than squinting at the raw text in the terminal.
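Putting those two macros together: with a quick reply set to `{{bias "{{input}}"}}`, you can type a one-off direction into the input field and hit the button to send it as an ephemeral message. The direction itself is up to you; this one is purely illustrative:
```
In the next response, {{char}} realizes {{user}} has been lying to them.
```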
# Prompts
Depending on model and prompt, you may want or need to wrap these in `[]`, `(OOC: )`, or other formatting to nudge the model into responding to the instruction directly; the important thing is phrasing it as something to do. I've omitted the wrapping because it's very model-specific.
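For illustration, here's a hypothetical instruction in those two common wrappers; neither is guaranteed to work on any given model:
```
[Pause the roleplay and describe {{char}}'s current opinion of {{user}}.]
(OOC: Pause the roleplay and describe {{char}}'s current opinion of {{user}}.)
```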
## Dating Profile
```
Using the provided information, generate {{char}}'s Tinder profile, including About Me, Interests, and Lifestyle.
```
Essentially asks the bot to sell themselves to a hypothetical ideal partner, encouraging them to write about themselves in a flattering light. It's also great for testing whether they'll blab something that's supposed to stay secret.