OllamaClient

A client class for interacting with the Ollama REST API.

Provides methods for text generation, chat, embeddings, tool calling, and model management using std.net.curl for HTTP and std.json for JSON.

Constructors

this
this(string host)

Constructs a new Ollama client for the given host URL.
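A minimal sketch of construction. The Examples section shows a no-argument call, so the host parameter presumably has a default; the exact default URL shown here is an assumption (Ollama's conventional local endpoint):

```d
// Connect using the constructor's default host
// (assumed to be http://127.0.0.1:11434, Ollama's standard local endpoint).
auto local = new OllamaClient();

// Or point the client at a specific Ollama instance explicitly.
auto remote = new OllamaClient("http://192.168.1.50:11434");
```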

Members

Functions

chat
JSONValue chat(string model, Message[] messages, JSONValue options, bool stream, Tool[] tools, JSONValue format, string keepAlive, OllamaOptions opts)

Engages in a chat interaction using the specified model and message history.
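A hedged sketch of a chat call with sampling options. It assumes `Message` is constructible from a role and content string, that the trailing parameters have defaults, and that the reply follows the Ollama `/api/chat` response shape (`message.content`); the option names are standard Ollama option fields:

```d
import std.json : parseJSON;
import std.stdio : writeln;

auto client = new OllamaClient();

// Optional sampling parameters, passed through as a JSON object.
auto options = parseJSON(`{"temperature": 0.2, "num_predict": 128}`);

Message[] history = [
    Message("system", "You are a concise assistant."),
    Message("user", "Name three D build tools.")
];

auto resp = client.chat("llama3", history, options);
writeln(resp["message"]["content"].str);
```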

chatCompletions
JSONValue chatCompletions(string model, Message[] messages, int maxTokens, float temperature, bool stream)

Performs an OpenAI-style chat completion.
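As a sketch, assuming the response mirrors the OpenAI chat-completions format (a `choices` array holding the assistant message):

```d
import std.stdio : writeln;

auto client = new OllamaClient();
Message[] messages = [Message("user", "Say hello in French.")];

// OpenAI-style call: token limit and temperature are plain arguments.
auto resp = client.chatCompletions("llama3", messages, 64, 0.7, false);
writeln(resp["choices"][0]["message"]["content"].str);
```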

chatStream
void chatStream(string model, Message[] messages, StreamCallback onChunk, Tool[] tools, JSONValue format, string keepAlive, OllamaOptions opts)

Streaming chat that invokes onChunk for each assistant token as it arrives.
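A sketch of streaming chat. The exact `StreamCallback` signature is not shown above; this assumes it receives each streamed chunk as a JSONValue shaped like an `/api/chat` stream message:

```d
import std.json : JSONValue;
import std.stdio : write, writeln;

auto client = new OllamaClient();

// Assumed callback shape: one JSONValue per streamed chunk.
client.chatStream("llama3", [Message("user", "Tell me a short joke.")],
    (JSONValue chunk) {
        // Each chunk carries a partial assistant message.
        write(chunk["message"]["content"].str);
    });
writeln();
```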

completions
JSONValue completions(string model, string prompt, int maxTokens, float temperature, bool stream)

Performs an OpenAI-style text completion.

copy
JSONValue copy(string source, string destination)

Copies an existing model to a new name.

createModel
JSONValue createModel(string name, string modelfile)

Creates a custom model from a modelfile.

deleteModel
JSONValue deleteModel(string name)

Deletes a model from the Ollama server.

embed
JSONValue embed(string model, string input, string keepAlive)

Generates an embedding vector for a single text input.

embed
JSONValue embed(string model, string[] inputs, string keepAlive)

Generates embedding vectors for a batch of text inputs.
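A sketch of batch embedding, assuming `keepAlive` has a default and the response follows the Ollama `/api/embed` shape (an `embeddings` array with one vector per input). The model name is illustrative:

```d
import std.stdio : writeln;

auto client = new OllamaClient();

string[] inputs = ["first document", "second document"];
auto resp = client.embed("nomic-embed-text", inputs);

// One embedding vector per input string.
writeln(resp["embeddings"].array.length);
```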

generate
JSONValue generate(string model, string prompt, JSONValue options, bool stream, string system, string[] images, JSONValue format, string suffix, string keepAlive, OllamaOptions opts)

Generates text based on a prompt using the specified model.
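A minimal sketch of plain text generation, assuming the trailing parameters have defaults and the reply follows the Ollama `/api/generate` response shape (`response` field):

```d
import std.json : parseJSON;
import std.stdio : writeln;

auto client = new OllamaClient();

// Constrain decoding via the options object (standard Ollama option names).
auto options = parseJSON(`{"temperature": 0.1}`);
auto resp = client.generate("llama3", "Write a haiku about rivers.", options);
writeln(resp["response"].str);
```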

generateStream
void generateStream(string model, string prompt, StreamCallback onChunk, string system, string[] images, JSONValue format, string keepAlive, OllamaOptions opts)

Streaming text generation that invokes onChunk for each response token as it arrives.

getModels
string getModels()

Lists models in OpenAI-compatible format.

getVersion
string getVersion()

Retrieves the Ollama server version string.

listModels
string listModels()

Lists all locally available models.

ps
string ps()

Lists currently running (loaded) models.

pull
JSONValue pull(string name, bool stream)

Downloads a model from the Ollama registry.
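A sketch of a blocking pull. It assumes that with stream set to false the server replies once when the download finishes, and that the reply carries a `status` field as in the Ollama `/api/pull` API:

```d
import std.stdio : writeln;

auto client = new OllamaClient();

// Blocking pull; the call returns when the download completes.
auto resp = client.pull("llama3", false);
writeln(resp["status"].str);
```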

push
JSONValue push(string name, bool stream)

Uploads a model to the Ollama registry.

setTimeOut
void setTimeOut(Duration timeout)

Sets the timeout for HTTP requests.
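Duration values come from core.time, so raising the limit for slow model loads looks like this:

```d
import core.time : seconds;

auto client = new OllamaClient();

// Large models can take a while to load; allow up to five minutes.
client.setTimeOut(300.seconds);
```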

showModel
string showModel(string model)

Retrieves detailed information about a specific model.

Examples

import std.stdio : writeln;

auto client = new OllamaClient();
auto resp = client.chat("llama3", [Message("user", "Hi there!")]);
writeln(resp["message"]["content"].str);

Meta