Optional fields: any
frequencyPenalty: Penalizes repeated tokens according to frequency
model: Model name to use
modelName: Model name to use. Alias for model
n: Number of completions to generate for each prompt
presencePenalty: Penalizes repeated tokens
streamUsage: Whether or not to include token usage data in streamed chunks.
streaming: Whether to stream the results or not. Enabling disables tokenUsage reporting
temperature: Sampling temperature to use
topP: Total probability mass of tokens to consider at each step
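Taken together, the completion parameters above are passed to the ChatOpenAI constructor. A minimal sketch (the model name and all values shown are illustrative assumptions, not defaults):

```typescript
import { ChatOpenAI } from "@langchain/openai";

// Illustrative values only; every field below is optional.
const llm = new ChatOpenAI({
  model: "gpt-4o-mini",    // Model name to use
  temperature: 0.7,        // Sampling temperature
  topP: 1,                 // Total probability mass of tokens to consider
  n: 1,                    // Completions to generate per prompt
  frequencyPenalty: 0,     // Penalize repeated tokens by frequency
  presencePenalty: 0,      // Penalize repeated tokens
  streaming: false,        // Stream results (enabling disables tokenUsage reporting)
  streamUsage: true,       // Include token usage data in streamed chunks
});
```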
Optional apiKey: API key to use when making requests to OpenAI. Defaults to the value of the OPENAI_API_KEY environment variable.
Optional azureADTokenProvider: A function that returns an access token for Microsoft Entra (formerly known as Azure Active Directory), which will be invoked on every request.
Optional azureOpenAIApiDeploymentName: Azure OpenAI API deployment name to use for completions when making requests to Azure OpenAI. This is the name of the deployment you created in the Azure portal, e.g. "my-openai-deployment"; it will be used in the endpoint URL https://{InstanceName}.openai.azure.com/openai/deployments/my-openai-deployment/
Optional azureOpenAIApiInstanceName: Azure OpenAI API instance name to use when making requests to Azure OpenAI. This is the name of the instance you created in the Azure portal, e.g. "my-openai-instance"; it will be used in the endpoint URL https://my-openai-instance.openai.azure.com/openai/deployments/{DeploymentName}/
Optional azureOpenAIApiKey: API key to use when making requests to Azure OpenAI.
Optional azureOpenAIApiVersion: API version to use when making requests to Azure OpenAI.
Optional azureOpenAIBasePath: Custom endpoint for the Azure OpenAI API. This is useful in case you have a deployment in another region, e.g. setting this value to "https://westeurope.api.cognitive.microsoft.com/openai/deployments" will result in the endpoint URL https://westeurope.api.cognitive.microsoft.com/openai/deployments/{DeploymentName}/
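The Azure-specific fields above are combined to build the Azure OpenAI endpoint URL. A minimal sketch, assuming placeholder instance, deployment, and API version values:

```typescript
import { ChatOpenAI } from "@langchain/openai";

// Placeholder Azure values; the resulting endpoint is
// https://my-openai-instance.openai.azure.com/openai/deployments/my-openai-deployment/
const azureLlm = new ChatOpenAI({
  azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY,
  azureOpenAIApiInstanceName: "my-openai-instance",
  azureOpenAIApiDeploymentName: "my-openai-deployment",
  azureOpenAIApiVersion: "2024-02-01", // assumed version string
  // Alternatively, omit the key and authenticate with Microsoft Entra:
  // azureADTokenProvider: async () => "<access token>",
});
```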
Optional logitBias: Dictionary used to adjust the probability of specific tokens being generated
Optional logprobs: Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message.
Optional maxTokens: Maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximum context size.
Optional modelKwargs: Holds any additional parameters that are valid to pass to openai.createCompletion that are not explicitly specified on this class.
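As a sketch of the pass-through behaviour of modelKwargs (the extra parameter name below is hypothetical, purely for illustration):

```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  // Entries here are forwarded to the underlying OpenAI request as-is.
  modelKwargs: { some_new_api_param: "value" }, // hypothetical parameter
});
```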
Optional openAIApiKey: API key to use when making requests to OpenAI. Defaults to the value of the OPENAI_API_KEY environment variable. Alias for apiKey
Optional organization
Optional stop: List of stop words to use when generating. Alias for stopSequences
Optional stopSequences: List of stop words to use when generating
Optional supportsStrictToolCalling: Whether the model supports the strict argument when passing in tools. If undefined, the strict argument will not be passed to OpenAI.
Optional timeout: Timeout to use when making requests to OpenAI.
Optional topLogprobs: An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.
Optional user: Unique string identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
Protected client
Protected clientConfig
Optional kwargs: Partial<ChatOpenAICallOptions>
completionWithRetry: Calls the OpenAI API with retry logic in case of failures.
request: The request to send to the OpenAI API.
Optional options: OpenAICoreRequestOptions. Optional configuration for the API call.
Returns: The response from the OpenAI API.
Optional options: OpenAICoreRequestOptions
identifyingParams: Get the identifying parameters for the model
invocationParams: Get the parameters used to invoke the model
Optional options: unknown
Optional extra: { streaming?: boolean }
Optional config: ChatOpenAIStructuredOutputMethodOptions<false>
Optional config: ChatOpenAIStructuredOutputMethodOptions<true>
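The config overloads above correspond to the withStructuredOutput method. A minimal sketch, assuming a hypothetical zod schema:

```typescript
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";

// Hypothetical schema used only for illustration.
const Joke = z.object({
  setup: z.string().describe("The setup of the joke"),
  punchline: z.string().describe("The punchline of the joke"),
});

const llm = new ChatOpenAI({ model: "gpt-4o-mini" });

// Returns parsed objects matching the schema instead of raw AIMessages.
const structuredLlm = llm.withStructuredOutput(Joke, { name: "joke" });
const joke = await structuredLlm.invoke("Tell me a joke about cats");
console.log(joke.setup, joke.punchline);
```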
OpenAI chat model integration.
Setup: Install @langchain/openai and set the environment variable OPENAI_API_KEY.
Key args
Key init args — completion params:
Param: model
Name of OpenAI model to use.
Param: temperature
Sampling temperature.
Param: maxTokens
Max number of tokens to generate.
Param: logprobs
Whether to return logprobs.
Param: stream_options
Configure streaming outputs, like whether to return token usage when streaming ({ include_usage: true }).
Key init args — client params:
Param: timeout
Timeout for requests.
Param: maxRetries
Max number of retries.
Param: apiKey
OpenAI API key. If not passed in, it will be read from the env var OPENAI_API_KEY.
Param: baseUrl
Base URL for API requests. Only specify if using a proxy or service emulator.
Param: organization
OpenAI organization ID. If not passed in, it will be read from the env var OPENAI_ORG_ID.
Key bind args:
Param: tools
Tools to bind to the model.
Param: tool_choice
Specify how and/or which tool the model should invoke.
Param: promptIndex
Param: response_format
The format the model should respond in.
Param: seed
Seed for reproducibility.
Param: stream_options
Additional options to pass to streamed completions. If provided, this takes precedence over "streamUsage" set at initialization time.
Param: stream_options.include_usage
Whether or not to include token usage in the stream. If set to true, this will include an additional chunk at the end of the stream with the token usage.
Param: parallel_tool_calls
Whether or not to restrict the ability to call multiple tools in one response.
Param: strict
Whether or not to force the model to return structured output which exactly matches the schema.
See full list of supported init args and their descriptions in the params section.
Example
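Instantiate and invoke (a minimal sketch; the model name, parameter values, and messages are illustrative):

```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
  maxTokens: 256,
  // apiKey: "...", // defaults to process.env.OPENAI_API_KEY
});

const aiMsg = await llm.invoke([
  ["system", "You are a helpful translator. Translate the user sentence to French."],
  ["human", "I love programming."],
]);
console.log(aiMsg.content);
```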
Example
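Streaming, with token usage included in the final chunk via streamUsage (a sketch with an illustrative prompt):

```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-4o-mini", streamUsage: true });

const stream = await llm.stream("Write a haiku about the sea.");
for await (const chunk of stream) {
  // Each chunk is an AIMessageChunk; when usage is included, the final
  // chunk carries the token counts in usage_metadata.
  console.log(chunk.content, chunk.usage_metadata);
}
```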
Example
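Tool calling with the tools and strict bind args (the weather tool below is a hypothetical definition in OpenAI function format):

```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-4o-mini" });

// Hypothetical tool, described in OpenAI function format.
const getWeather = {
  type: "function" as const,
  function: {
    name: "get_weather",
    description: "Get the current weather for a city",
    parameters: {
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"],
    },
  },
};

const llmWithTools = llm.bindTools([getWeather], { strict: true });
const aiMsg = await llmWithTools.invoke("What is the weather in Paris?");
console.log(aiMsg.tool_calls);
```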