displayName:
'If using JSON response format, you must include the word "json" in the prompt in your chain or agent. Also, make sure to select the latest models released post November 2023.',
name:'notice',
type:'notice',
default:'',
displayOptions:{
show:{
'/options.responseFormat':['json_object'],
},
},
},
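// Note on displayOptions: in this kind of node definition, `show` hides the field
// unless the referenced parameter matches one of the listed values, and the leading
// '/' in '/options.responseFormat' resolves the path from the node's parameter root
// rather than relative to the current collection. So this notice is only rendered
// while "Response Format" is set to JSON. A minimal sketch of the matching rule,
// using a hypothetical helper that is not part of this file:
//
//   const shouldShowNotice = (params: Record<string, unknown>) =>
//     ['json_object'].includes(String(params['options.responseFormat']));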
{
displayName:'Model',
name:'model',
type:'options',
description:
'The model which will generate the completion. <a href="https://beta.openai.com/docs/models/overview">Learn more</a>.',
},
{
displayName:
'When using non-OpenAI models via "Base URL" override, not all models might be chat-compatible or support other features, like tool calling or JSON response format',
name:'notice',
type:'notice',
default:'',
},
{
displayName:'Frequency Penalty',
name:'frequencyPenalty',
default:0,
description:
"Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim",
type:'number',
},
{
displayName:'Maximum Number of Tokens',
name:'maxTokens',
default:-1,
description:
'The maximum number of tokens to generate in the completion. Most models have a context length of 2048 tokens (except for the newest models, which support 32,768).',
type:'number',
typeOptions:{
maxValue: 32768,
},
},
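// The -1 default conventionally means "no explicit limit": when left at -1 the
// parameter is typically omitted from the request so the model's own maximum
// applies. This is an assumption about the consuming code, not something this
// file guarantees. Illustrative sketch:
//
//   const body = {
//     ...(options.maxTokens > 0 ? { max_tokens: options.maxTokens } : {}),
//   };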
{
displayName:'Response Format',
name:'responseFormat',
default:'text',
type:'options',
options:[
{
name:'Text',
value:'text',
description:'Regular text response',
},
{
name:'JSON',
value:'json_object',
description:
'Enables JSON mode, which should guarantee the message the model generates is valid JSON',
},
],
},
{
displayName:'Sampling Temperature',
name:'temperature',
default:0.7,
description:
'Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.',
type:'number',
},
{
displayName:'Timeout',
name:'timeout',
default:60000,
description:'Maximum amount of time a request is allowed to take in milliseconds',
type:'number',
},
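// 60000 ms = 60 s. Timeout and Max Retries interact: if every attempt times out,
// a request can take up to roughly (maxRetries + 1) * timeout before failing.
// Sketch of how these options are presumably passed through to the chat client
// (assumed wiring, not confirmed by this fragment):
//
//   const client = new ChatOpenAI({
//     timeout: options.timeout,      // per-attempt limit in milliseconds
//     maxRetries: options.maxRetries,
//   });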
{
displayName:'Max Retries',
name:'maxRetries',
default:2,
description:'Maximum number of retries to attempt',
type:'number',
},
{
displayName:'Top P',
name:'topP',
default:1,
description:
'Controls diversity via nucleus sampling: 0.5 means half of all likelihood-weighted options are considered. We generally recommend altering this or temperature but not both.',
type:'number',
},