Documentation ¶
Index ¶
- Constants
- type Client
- func (c *Client) Model(ctx context.Context, model string) (*ModelResponse, error)
- func (c *Client) Models(ctx context.Context) (*ModelResponse, error)
- func (c *Client) RequestCompletion(ctx context.Context, cr *CompletionRequest) (*CompletionResponse, error)
- func (c *Client) RequestEdits(ctx context.Context, er *EditsRequest) (*EditsResponse, error)
- func (c *Client) RequestEmbedding(ctx context.Context, er *EmbeddingRequest) (*EmbeddingsResponse, error)
- func (c *Client) RequestImageEdits(ctx context.Context, ir *ImageRequest) (*ImageResponse, error)
- func (c *Client) RequestImageVariations(ctx context.Context, ir *ImageRequest) (*ImageResponse, error)
- func (c *Client) RequestImages(ctx context.Context, ir *ImageRequest) (*ImageResponse, error)
- type CompletionRequest
- type CompletionResponse
- type EditsRequest
- type EditsResponse
- type EmbeddingRequest
- type EmbeddingsResponse
- type Error
- type ErrorResponse
- type ImageRequest
- type ImageResponse
- type ModelResponse
Constants ¶
const (
    // Davinci is the most capable model family and can perform any task the other models can
    // perform, often with less instruction. For applications requiring a lot of
    // understanding of the content, like summarization for a specific audience and
    // creative content generation, Davinci is going to produce the best results.
    // These increased capabilities require more compute resources, so Davinci costs
    // more per API call and is not as fast as the other models.
    //
    // Good at: Complex intent, cause and effect, summarization for audience
    TextDavinci003 = "text-davinci-003"

    // Curie is extremely powerful, yet very fast. While Davinci is stronger when it
    // comes to analyzing complicated text, Curie is quite capable for many nuanced
    // tasks like sentiment classification and summarization. Curie is also quite
    // good at answering questions, performing Q&A, and serving as a general service chatbot.
    //
    // Good at: Language translation, complex classification, text sentiment, summarization
    TextCurie001 = "text-curie-001"

    // Babbage can perform straightforward tasks like simple classification. It’s also quite
    // capable when it comes to semantic search, ranking how well documents match up with search queries.
    //
    // Good at: Moderate classification, semantic search classification
    TextBabbage001 = "text-babbage-001"

    // Ada is usually the fastest model and can perform tasks like parsing text, address correction,
    // and certain kinds of classification tasks that don’t require too much nuance.
    // Ada’s performance can often be improved by providing more context.
    //
    // Good at: Parsing text, simple classification, address correction, keywords
    TextAda001 = "text-ada-001"

    // CodexCodeDavinci002 is the most capable Codex model. It is particularly good at
    // translating natural language to code. In addition to completing code, it also
    // supports inserting completions within code.
    CodexCodeDavinci002 = "code-davinci-002"

    // CodexCodeCushman001 is almost as capable as Davinci Codex, but slightly faster.
    // This speed advantage may make it preferable for real-time applications.
    CodexCodeCushman001 = "code-cushman-001"

    TextDavinciEdit001       = "text-davinci-edit-001"
    TextSimilarityBabbage001 = "text-similarity-babbage-001"
)
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Client ¶
type Client struct {
// contains filtered or unexported fields
}
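The Client's fields are unexported, and this page lists no constructor in the Functions section, so how a Client is created is not visible here. A minimal construction sketch, where NewClient and the import path are hypothetical placeholders; the sketches further down this page assume the same openai qualifier and a *Client obtained this way, plus the usual context/fmt imports:

package main

import (
    "context"
    "log"

    openai "example.com/openai" // placeholder: substitute this package's real import path
)

func main() {
    // NewClient is hypothetical; this documentation lists no exported
    // constructor, so check the package source for how a Client is built.
    c := openai.NewClient("sk-...") // API key elided

    if _, err := c.Models(context.Background()); err != nil {
        log.Fatal(err)
    }
}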
func (*Client) Model ¶
func (c *Client) Model(ctx context.Context, model string) (*ModelResponse, error)
Model retrieves a model instance, providing basic information about the model such as the owner and permissioning.
func (*Client) Models ¶
func (c *Client) Models(ctx context.Context) (*ModelResponse, error)
Models lists the currently available models, and provides basic information about each one such as the owner and availability.
func (*Client) RequestCompletion ¶
func (c *Client) RequestCompletion(ctx context.Context, cr *CompletionRequest) (*CompletionResponse, error)
RequestCompletion creates a completion for the provided prompt and parameters.
func (*Client) RequestEdits ¶
func (c *Client) RequestEdits(ctx context.Context, er *EditsRequest) (*EditsResponse, error)
RequestEdits creates a new edit for the provided input, instruction, and parameters.
func (*Client) RequestEmbedding ¶
func (c *Client) RequestEmbedding(ctx context.Context, er *EmbeddingRequest) (*EmbeddingsResponse, error)
RequestEmbedding creates an embedding vector representing the input text.
func (*Client) RequestImageEdits ¶
func (c *Client) RequestImageEdits(ctx context.Context, ir *ImageRequest) (*ImageResponse, error)
RequestImageEdits creates an edited or extended image given an original image and a prompt.
func (*Client) RequestImageVariations ¶
func (c *Client) RequestImageVariations(ctx context.Context, ir *ImageRequest) (*ImageResponse, error)
RequestImageVariations creates a variation of a given image.
func (*Client) RequestImages ¶
func (c *Client) RequestImages(ctx context.Context, ir *ImageRequest) (*ImageResponse, error)
RequestImages creates an image given a prompt.
type CompletionRequest ¶
type CompletionRequest struct {
// ID of the model to use. You can use the List models API to see all of your available models,
// or see our Model overview for descriptions of them.
Model string `json:"model"`
// The prompt(s) to generate completions for, encoded as a string, array of strings,
// array of tokens, or array of token arrays.
//
// Note that <|endoftext|> is the document separator that the model sees during training,
// so if a prompt is not specified the model will generate as if from the beginning of a new document.
Prompt string `json:"prompt,omitempty"`
// The suffix that comes after a completion of inserted text.
Suffix string `json:"suffix,omitempty"`
// The maximum number of tokens to generate in the completion.
//
// The token count of your prompt plus max_tokens cannot exceed the model's context length.
// Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
MaxTokens int `json:"max_tokens,omitempty"`
// What sampling temperature to use. Higher values mean the model will take more risks.
// Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer.
//
// It is generally recommended to alter this or `top_p` but not both.
Temperature float64 `json:"temperature,omitempty"`
// An alternative to sampling with temperature, called nucleus sampling,
// where the model considers the results of the tokens with top_p probability mass.
// So 0.1 means only the tokens comprising the top 10% probability mass are considered.
//
// It is generally recommended to alter this or `temperature` but not both.
TopP float64 `json:"top_p,omitempty"`
// How many completions to generate for each prompt.
N int `json:"n,omitempty"`
// Whether to stream back partial progress. If set, tokens will
// be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.
Stream bool `json:"stream,omitempty"`
// Include the log probabilities on the logprobs most likely tokens, as well as the chosen tokens. For example,
// if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return
// the logprob of the sampled token, so there may be up to logprobs+1 elements in the response.
//
// The maximum value for logprobs is 5. If you need more than this, please contact us through our Help center and describe your use case.
LogProbs int `json:"logprobs,omitempty"`
// Echo back the prompt in addition to the completion.
Echo bool `json:"echo,omitempty"`
// Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
Stop []string `json:"stop,omitempty"`
// Number between -2.0 and 2.0. Positive values penalize new tokens based on whether
// they appear in the text so far, increasing the model's likelihood to talk about new topics.
PresencePenalty float64 `json:"presence_penalty,omitempty"`
// Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency
// in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
FrequencyPenalty float64 `json:"frequency_penalty,omitempty"`
// Generates best_of completions server-side and returns the "best" (the one with the highest log probability per token).
// Results cannot be streamed.
//
// When used with n, best_of controls the number of candidate completions and n specifies how many to
// return – best_of must be greater than n.
//
// Note: Because this parameter generates many completions, it can quickly consume your token quota.
// Use carefully and ensure that you have reasonable settings for max_tokens and stop.
BestOf int `json:"best_of,omitempty"`
// Modify the likelihood of specified tokens appearing in the completion.
//
// Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer)
// to an associated bias value from -100 to 100. You can use this tokenizer tool
// (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically,
// the bias is added to the logits generated by the model prior to sampling. The exact effect
// will vary per model, but values between -1 and 1 should decrease or increase likelihood of
// selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
//
// As an example, you can pass {"50256": -100} to prevent the <|endoftext|> token from being generated.
LogitBias map[string]int `json:"logit_bias,omitempty"`
// A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
User string `json:"user,omitempty"`
}
CompletionRequest represents a request for text completion.
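A usage sketch pairing CompletionRequest with RequestCompletion, under the assumptions stated in the Client sketch above; the field values are illustrative, not recommendations:

func completePrompt(ctx context.Context, c *openai.Client) error {
    resp, err := c.RequestCompletion(ctx, &openai.CompletionRequest{
        Model:       openai.TextDavinci003,
        Prompt:      "Say this is a test",
        MaxTokens:   16,
        Temperature: 0.7,
    })
    if err != nil {
        return err
    }
    // Choices and Usage have unexported element types, so print the whole
    // response rather than guessing at their fields.
    fmt.Printf("%+v\n", resp)
    return nil
}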
type CompletionResponse ¶
type CompletionResponse struct {
Id string `json:"id"`
Object string `json:"object"`
Created int `json:"created"`
Model string `json:"model"`
Choices []choice `json:"choices"`
Usage usage `json:"usage"`
}
CompletionResponse represents a response from the Completions v1 endpoint.
type EditsRequest ¶
type EditsRequest struct {
// Model is the ID of the model to use. You can use the List models API to see all
// of your available models, or see our Model overview for descriptions of them.
Model string `json:"model"`
// The input text to use as a starting point for the edit.
Input string `json:"input,omitempty"`
// The instruction that tells the model how to edit the prompt.
Instruction string `json:"instruction"`
// How many edits to generate for the input and instruction.
N int `json:"n,omitempty"`
// What sampling temperature to use. Higher values mean the model will take more risks.
// Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer.
//
// We generally recommend altering this or `top_p` but not both.
Temperature float64 `json:"temperature,omitempty"`
// An alternative to sampling with temperature, called nucleus sampling,
// where the model considers the results of the tokens with top_p probability mass.
// So 0.1 means only the tokens comprising the top 10% probability mass are considered.
//
// We generally recommend altering this or `temperature` but not both.
TopP float64 `json:"top_p,omitempty"`
}
EditsRequest represents a request for edits.
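A sketch pairing EditsRequest with RequestEdits, using the TextDavinciEdit001 constant from the Constants section (same assumptions as above; the input text is illustrative):

func fixSpelling(ctx context.Context, c *openai.Client) error {
    resp, err := c.RequestEdits(ctx, &openai.EditsRequest{
        Model:       openai.TextDavinciEdit001,
        Input:       "What day of the wek is it?",
        Instruction: "Fix the spelling mistakes",
    })
    if err != nil {
        return err
    }
    fmt.Printf("%+v\n", resp.Choices)
    return nil
}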
type EditsResponse ¶
type EditsResponse struct {
Object string `json:"object"`
Created int `json:"created"`
Choices []choice `json:"choices"`
Usage usage `json:"usage"`
}
EditsResponse represents a response from the edits endpoint.
type EmbeddingRequest ¶
type EmbeddingRequest struct {
// ID of the model to use.
Model string `json:"model"`
// Input text to get embeddings for, encoded as a string or array of tokens.
// To get embeddings for multiple inputs in a single request, pass an array
// of strings or array of token arrays. Each input must not exceed 2048 tokens in length.
//
// Unless you are embedding code, we suggest replacing newlines (\n) in
// your input with a single space, as we have observed inferior results when newlines are present.
Input []string `json:"input"`
// A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
User string `json:"user,omitempty"`
}
EmbeddingRequest represents a request body for the embeddings endpoint.
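Because Input is a []string, batching several texts into a single request is the natural shape here. A sketch under the same assumptions as above:

func embedTexts(ctx context.Context, c *openai.Client) error {
    resp, err := c.RequestEmbedding(ctx, &openai.EmbeddingRequest{
        Model: openai.TextSimilarityBabbage001,
        Input: []string{"The food was delicious", "The service was excellent"},
    })
    if err != nil {
        return err
    }
    fmt.Printf("received %d embeddings\n", len(resp.Data))
    return nil
}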
type EmbeddingsResponse ¶
type EmbeddingsResponse struct {
Object string
Data []data
Usage usage
}
EmbeddingsResponse represents a response from the embeddings endpoint.
type ErrorResponse ¶
type ErrorResponse struct {
*Error `json:"error"`
}
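ErrorResponse mirrors the JSON error body the API returns, with the embedded *Error carrying the details. How the Client surfaces these errors internally is not shown on this page; a decoding sketch using encoding/json, assuming only the struct definition above:

func decodeAPIError(body []byte) (*openai.ErrorResponse, error) {
    // The embedded *Error is populated from the top-level "error" key.
    var er openai.ErrorResponse
    if err := json.Unmarshal(body, &er); err != nil {
        return nil, err
    }
    return &er, nil
}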
type ImageRequest ¶
type ImageRequest struct {
// The image to edit. Must be a valid PNG file, less than 4MB, and square.
// If mask is not provided, image must have transparency, which will be used as the mask.
Image string `json:"image,omitempty"`
// An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where
// image should be edited. Must be a valid PNG file, less than 4MB, and have the same dimensions as image.
Mask string `json:"mask,omitempty"`
// A text description of the desired image(s). The maximum length is 1000 characters.
Prompt string `json:"prompt,omitempty"`
// The number of images to generate. Must be between 1 and 10.
N int `json:"n,omitempty"`
// The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.
Size string `json:"size,omitempty"`
// The format in which the generated images are returned. Must be one of url or b64_json.
ResponseFormat string `json:"response_format,omitempty"`
// A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
User string `json:"user,omitempty"`
}
ImageRequest represents a request body for the images endpoint.
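A generation sketch for RequestImages; Size and ResponseFormat take the string values documented above, and the prompt is illustrative (same assumptions as the earlier sketches):

func generateImages(ctx context.Context, c *openai.Client) error {
    resp, err := c.RequestImages(ctx, &openai.ImageRequest{
        Prompt:         "A cute baby sea otter",
        N:              2,
        Size:           "512x512",
        ResponseFormat: "url",
    })
    if err != nil {
        return err
    }
    fmt.Printf("created %d image(s)\n", len(resp.Data))
    return nil
}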
type ImageResponse ¶
type ImageResponse struct {
Created int
Data []data
}
ImageResponse represents a response from the images endpoint.
type ModelResponse ¶
type ModelResponse struct {
Data []data `json:"data"`
}
ModelResponse represents a response from the Models v1 endpoint.
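A closing sketch tying Models and Model together, again under the assumptions stated in the Client sketch:

func inspectModels(ctx context.Context, c *openai.Client) error {
    all, err := c.Models(ctx)
    if err != nil {
        return err
    }
    fmt.Printf("%d models available\n", len(all.Data))

    one, err := c.Model(ctx, openai.TextDavinci003)
    if err != nil {
        return err
    }
    fmt.Printf("%+v\n", one)
    return nil
}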