llm 0.32a2
Release: llm 0.32a2. A bunch of useful stuff in this LLM alpha, but the most important detail is this one: most reasoning-capable OpenAI models now use the /v1/responses endpoint instead of /v1/chat/completions. This enables interleaved reasoning across tool calls for GPT-5 class models. #1435 This means you can now see the summarized reasoning tokens when you run prompts against an OpenAI model, displayed on standard error in a different color. Use the -R or --hide-reasoning flag if you don't want that output.
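The endpoint switch changes the shape of the request body, which is what makes reasoning summaries available to clients like llm. A minimal sketch of the difference, assuming the publicly documented Responses API fields; the model name, prompt, and field values here are illustrative placeholders, not copied from llm's source:

```python
# Old style: POST /v1/chat/completions takes a "messages" list.
chat_payload = {
    "model": "gpt-5",
    "messages": [{"role": "user", "content": "Explain tool calling."}],
}

# New style: POST /v1/responses takes an "input" field instead, and can
# ask for reasoning summaries, which llm can then stream to stderr.
responses_payload = {
    "model": "gpt-5",
    "input": "Explain tool calling.",
    "reasoning": {"summary": "auto"},
}

print(sorted(responses_payload))
```

The key practical difference is that reasoning state can be carried across tool-call round trips on /v1/responses, rather than being discarded between chat-completion turns.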