Model name (e.g. "llama3.1:8b").
Input text.
Raw JSONValue generation options (backward-compatible).
Whether to stream the response (not fully supported).
Optional system prompt.
Optional base64-encoded images for multimodal input.
Structured output: pass JSONValue("json") for JSON mode, or a JSON Schema object to constrain the response shape.
Text appended after the generated response.
How long to keep the model loaded (e.g. "5m", "0").
Typed OllamaOptions; takes precedence over the raw options field when both are set.
A JSONValue containing the generated "response" text, the "done" flag, and generation metadata.
Generates text based on a prompt using the specified model.
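The parameters above correspond to the fields of Ollama's /api/generate request body (model, prompt, system, images, format, options, suffix, keep_alive, stream). As a rough sketch of how they assemble into a request, here is a small Python helper; the function name `build_generate_request` is hypothetical, but the field names follow Ollama's documented REST API:

```python
import json

def build_generate_request(model, prompt, *, system=None, images=None,
                           fmt=None, options=None, suffix=None,
                           keep_alive=None, stream=False):
    """Assemble a request body for Ollama's /api/generate endpoint.

    Illustrative helper, not part of the library documented above;
    only fields the caller supplies are included in the body.
    """
    body = {"model": model, "prompt": prompt, "stream": stream}
    if system is not None:
        body["system"] = system
    if images:
        body["images"] = images          # base64-encoded image data
    if fmt is not None:
        body["format"] = fmt             # "json" or a JSON Schema object
    if options:
        body["options"] = options        # e.g. {"temperature": 0.2}
    if suffix is not None:
        body["suffix"] = suffix          # text after the model response
    if keep_alive is not None:
        body["keep_alive"] = keep_alive  # e.g. "5m", "0"
    return body

req = build_generate_request("llama3.1:8b", "Why is the sky blue?",
                             system="Answer briefly.",
                             options={"temperature": 0.2},
                             keep_alive="5m")
print(json.dumps(req, indent=2))
```

Posting this body to http://localhost:11434/api/generate (with stream=False) yields a single JSON object whose "response" and "done" fields match the return value described above.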