The `ZodStream` client provides real-time validation and metadata for streaming LLM responses:
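A minimal end-to-end sketch of driving the client. The export names (`ZodStream`, `OAIStream`, `withResponseModel`), option shapes, model id, and prompt are my assumptions about the API surface and should be verified against the package's types:

```typescript
import { z } from "zod"
import OpenAI from "openai"
import ZodStream, { OAIStream, withResponseModel } from "zod-stream"

// Hypothetical schema -- swap in your own.
const schema = z.object({
  title: z.string(),
  tags: z.array(z.string()),
})

const oai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })
const client = new ZodStream({})

const stream = await client.create({
  // completionPromise is assumed to resolve to a ReadableStream of the raw completion.
  completionPromise: async () => {
    const completion = await oai.chat.completions.create(
      withResponseModel({
        response_model: { schema, name: "Extract" }, // "Extract" is a placeholder name
        mode: "TOOLS",
        params: {
          model: "gpt-4o-mini", // placeholder model id
          messages: [{ role: "user", content: "Write a short post about streaming." }],
          stream: true,
        },
      }),
    )
    return OAIStream({ res: completion })
  },
  response_model: { schema, name: "Extract" },
})

for await (const chunk of stream) {
  // Each chunk is a partial, schema-shaped object plus `_meta` validation state.
  console.log(chunk.title, chunk._meta._isValid)
}
```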
`zod-stream` enables processing dependent data as soon as relevant paths complete, without waiting for the full response:
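The idea in miniature, using hand-written chunks rather than ones produced by the library (the `_meta._completedPaths` shape mirrors what the library emits; everything else here is simulated):

```typescript
type Meta = { _isValid: boolean; _completedPaths: (string | number)[][] }
type Chunk = { user?: { id?: string }; posts?: string[]; _meta: Meta }

// Hand-written chunks simulating a streamed, progressively completed response.
const chunks: Chunk[] = [
  { user: { id: "u1" }, _meta: { _isValid: false, _completedPaths: [["user", "id"]] } },
  {
    user: { id: "u1" },
    posts: ["hello"],
    _meta: { _isValid: true, _completedPaths: [["user", "id"], ["posts", 0]] },
  },
]

const done = (meta: Meta, path: (string | number)[]) =>
  meta._completedPaths.some((p) => JSON.stringify(p) === JSON.stringify(path))

const started: string[] = []
for (const chunk of chunks) {
  // Kick off dependent work as soon as `user.id` is final -- no need to wait
  // for the rest of the response to stream in.
  if (done(chunk._meta, ["user", "id"]) && !started.includes("fetchPosts")) {
    started.push("fetchPosts")
  }
}
```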
This approach enables:
- Early UI updates based on user preferences
- Parallel processing of independent data
- Optimistic loading of related content
- Better perceived performance
- Resource optimization
Every streamed chunk includes metadata about validation state:
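The shape of that metadata, written out as an illustrative type (the field names `_isValid`, `_activePath`, and `_completedPaths` follow the library's `_meta` convention; treat the exact type as an approximation):

```typescript
type CompletionMeta = {
  _isValid: boolean                      // does the partial object satisfy the schema yet?
  _activePath: (string | number)[]       // the path currently being streamed
  _completedPaths: (string | number)[][] // paths whose values are final
}

// A chunk mid-stream: `user.name` is done, `user.email` is still arriving.
const chunk = {
  user: { name: "Ada" },
  _meta: {
    _isValid: false,
    _activePath: ["user", "email"],
    _completedPaths: [["user", "name"]],
  } satisfies CompletionMeta,
}
```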
Get typed stub objects for initialization:
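The library derives these stubs from your zod schema. To show the idea without depending on its internals, here is a hand-rolled version over a plain shape description (this is an illustration of stub derivation, not the package's implementation):

```typescript
// Walk a shape description and produce empty-but-typed defaults, so a UI can
// render placeholders before any data arrives.
type Shape = { [key: string]: "string" | "number" | "string[]" | Shape }

function stubFromShape(shape: Shape): Record<string, unknown> {
  const out: Record<string, unknown> = {}
  for (const [key, kind] of Object.entries(shape)) {
    if (kind === "string") out[key] = ""
    else if (kind === "number") out[key] = 0
    else if (kind === "string[]") out[key] = []
    else out[key] = stubFromShape(kind) // nested object: recurse
  }
  return out
}

const stub = stubFromShape({ title: "string", tags: "string[]", author: { name: "string" } })
```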
Enable detailed logging for debugging:
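Assuming the client accepts a debug flag at construction (an assumption worth checking against the package's exported types):

```typescript
import ZodStream from "zod-stream"

// `debug: true` is assumed to log per-chunk parse and validation activity.
const client = new ZodStream({ debug: true })
```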
The `withResponseModel` helper configures OpenAI parameters based on your schema and chosen mode:
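A sketch under the assumption that `withResponseModel` takes `{ response_model, mode, params }` and returns ready-to-use OpenAI parameters (the model id, response-model name, and prompt are placeholders):

```typescript
import { z } from "zod"
import OpenAI from "openai"
import { withResponseModel } from "zod-stream"

const params = withResponseModel({
  response_model: {
    name: "ExtractUser", // hypothetical name
    schema: z.object({ name: z.string(), age: z.number() }),
  },
  mode: "TOOLS", // controls how the schema is presented to the model
  params: {
    model: "gpt-4o-mini", // placeholder model id
    messages: [{ role: "user", content: "Jason is 25 years old" }],
  },
})

// `params` is now ordinary OpenAI request params with the schema wired in for
// the chosen mode, so it can be passed straight through:
const oai = new OpenAI()
const completion = await oai.chat.completions.create({ ...params, stream: true })
```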
`zod-stream` supports multiple modes for structured LLM responses:
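The mode names below follow the instructor-js/zod-stream family; whether `MODE` is exported this way, and which modes a given provider supports, should be confirmed against the package and your provider's docs:

```typescript
import { MODE } from "zod-stream"

// FUNCTIONS   -- legacy OpenAI function calling
// TOOLS       -- OpenAI tool calling
// JSON        -- provider-native JSON response format
// JSON_SCHEMA -- provider-native JSON constrained by an explicit schema
// MD_JSON     -- JSON inside a markdown code block, for models with no JSON mode
const mode = MODE.TOOLS
```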
Built-in parsers handle different response formats:
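As an illustration of why per-format parsing matters, here is a hand-rolled extractor for the markdown-fenced case (MD_JSON-style). This is our own sketch, not the library's parser:

```typescript
// Extract the JSON payload from a markdown-fenced model response.
function extractFencedJson(raw: string): unknown {
  const match = raw.match(/```(?:json)?\s*([\s\S]*?)```/)
  if (!match) throw new Error("no fenced JSON block found")
  return JSON.parse(match[1])
}

const reply = 'Sure! Here you go:\n```json\n{ "name": "Ada", "age": 36 }\n```'
const data = extractFencedJson(reply) as { name: string; age: number }
```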
Handle streaming responses with built-in utilities:
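A server-route sketch, assuming the package exports an `OAIStream` helper that adapts the OpenAI SDK's streaming completion into a web `ReadableStream` (the framework-style handler, model id, and option names are assumptions):

```typescript
import OpenAI from "openai"
import { OAIStream } from "zod-stream"

// e.g. a Next.js-style route handler; the framework choice is illustrative.
export async function POST(request: Request) {
  const { messages } = await request.json()
  const oai = new OpenAI()
  const completion = await oai.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model id
    messages,
    stream: true,
  })
  // OAIStream is assumed to wrap the SDK's async iterable as a ReadableStream.
  return new Response(OAIStream({ res: completion }))
}
```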
Monitor completion status of specific paths:
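Completed paths arrive as arrays of keys and indices; a small helper makes the check readable. The helper is ours, while the `_completedPaths` shape follows the library's metadata:

```typescript
type Path = (string | number)[]

// True once `target` appears among the completed paths.
const pathDone = (completed: Path[], target: Path) =>
  completed.some((p) => p.length === target.length && p.every((seg, i) => seg === target[i]))

// Simulated metadata mid-stream:
const completedPaths: Path[] = [["user", "name"], ["items", 0, "sku"]]

const nameDone = pathDone(completedPaths, ["user", "name"])      // true
const secondSku = pathDone(completedPaths, ["items", 1, "sku"])  // false: still streaming
```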
`zod-stream` provides error handling at multiple levels:
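Three layers in practice: per-chunk validation state (`_meta._isValid`), a final schema check once the stream ends, and transport errors around the iteration itself. A self-contained sketch with a simulated stream:

```typescript
type Chunk = { value?: string; _meta: { _isValid: boolean } }

// Simulated stream: invalid while partial, valid once complete.
async function* fakeStream(): AsyncGenerator<Chunk> {
  yield { _meta: { _isValid: false } }
  yield { value: "done", _meta: { _isValid: true } }
}

async function consume(): Promise<Chunk> {
  let last: Chunk | undefined
  try {
    for await (const chunk of fakeStream()) {
      last = chunk // level 1: inspect chunk._meta._isValid per chunk
    }
  } catch (err) {
    // level 3: transport or stream failure during iteration
    throw new Error(`stream failed: ${String(err)}`)
  }
  // level 2: final validation -- reject if the finished object never validated
  if (!last?._meta._isValid) throw new Error("response never satisfied schema")
  return last
}

const result = await consume()
```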