model options
GPT Model:
Temperature:
Controls randomness: Lowering results in less random completions. As the
temperature approaches zero, the model will become deterministic and repetitive.
Maximum Length:
The maximum number of tokens to generate, shared between the prompt and completion. The exact limit varies by model. (One token is roughly 4 characters of standard English text.)
Chair Maximum Length:
Same as above, but for the chair's messages.
Raised hand invitation length:
The length of the invitation message.
Frequency Penalty:
How much to penalize new tokens based on their existing frequency
in the text so far. Decreases the model's likelihood to repeat the same line verbatim.
Presence Penalty:
How much to penalize new tokens based on whether they appear in the
text so far. Increases the model's likelihood to talk about new topics.
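The four model options above map onto standard chat-completion sampling parameters. As a hedged sketch (the field names follow the common OpenAI-style API; the model name and defaults here are assumptions, not this app's actual values):

```python
# Sketch: assembling the model options above into one request payload.
# "gpt-4o-mini" and the default values are placeholders for illustration.

def build_request(prompt, temperature=0.7, max_tokens=256,
                  frequency_penalty=0.0, presence_penalty=0.0):
    """Collect the sampling parameters into a chat-completion request body."""
    return {
        "model": "gpt-4o-mini",                   # hypothetical model name
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,               # near 0 => deterministic
        "max_tokens": max_tokens,                 # token budget per response
        "frequency_penalty": frequency_penalty,   # discourages verbatim repeats
        "presence_penalty": presence_penalty,     # encourages new topics
    }

payload = build_request("Hello", temperature=0.0)
print(payload["temperature"])  # → 0.0
```

Lowering `temperature` toward 0 trades variety for repeatability, which is why the tooltip warns that completions become deterministic and repetitive.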
server options
trim response to last complete sentence:
trim response to last complete paragraph:
trim response to remove chair's semicolon ending:
conversation max length:
extra message count:
Reset everything to default:
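The three trim options above can be sketched as string post-processing steps. These helpers are hypothetical illustrations of the described behavior, not the app's actual implementation:

```python
# Hypothetical sketches of the server-side trim options.

def trim_to_last_sentence(text):
    """Cut the response after the last '.', '!', or '?'."""
    cut = max(text.rfind(c) for c in ".!?")
    return text[:cut + 1] if cut != -1 else text

def trim_to_last_paragraph(text):
    """Cut the response after the last completed paragraph (blank line)."""
    cut = text.rfind("\n\n")
    return text[:cut] if cut != -1 else text

def strip_chair_semicolon(text):
    """Drop a trailing semicolon left at the end of a chair message."""
    return text.rstrip().rstrip(";")

print(trim_to_last_sentence("Hello there. This sentence was cut of"))
# → Hello there.
```

Trimming like this hides the ragged tail that appears when a completion hits the maximum-length cutoff mid-sentence.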
prototype options
skip creating audio:
system prompt
the word [TOPIC] will be replaced with the room topic from below
rooms
room name
topic
characters
if the word [FOODS] is present in the first character's prompt, it will be replaced with the names of the other characters.
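The [TOPIC] and [FOODS] substitutions described above amount to simple string replacement. A minimal sketch (the token names come from this page; the function and sample values are hypothetical):

```python
# Sketch of the placeholder substitution for a room's system prompt.
# The first entry in `characters` is assumed to be the one whose prompt
# may contain [FOODS]; the rest are the "other characters".

def fill_placeholders(prompt, topic, characters):
    """Replace [TOPIC] with the room topic and [FOODS] with the
    comma-separated names of the other characters."""
    out = prompt.replace("[TOPIC]", topic)
    out = out.replace("[FOODS]", ", ".join(characters[1:]))
    return out

print(fill_placeholders("Discuss [TOPIC] with [FOODS].",
                        "breakfast", ["Chair", "Alice", "Bob"]))
# → Discuss breakfast with Alice, Bob.
```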
human input
We need at least one message before humans can talk...
name:
Prompt for raised hand:
inject custom instruction
the word [DATE] will be replaced with today's date
Maximum Length:
The maximum number of tokens to generate, shared between the prompt and completion. The exact limit varies by model. (One token is roughly 4 characters of standard English text.)
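The [DATE] substitution for the custom instruction can be sketched the same way. The date format here (ISO `YYYY-MM-DD`) is an assumption; the page only says the token becomes today's date:

```python
# Sketch: replacing [DATE] in a custom instruction with today's date.
# ISO format is an assumed choice, not confirmed by the app.
from datetime import date

def inject_date(instruction):
    return instruction.replace("[DATE]", date.today().isoformat())

print(inject_date("Today is [DATE]."))
```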
show trimmed content: