13 Comments
nnurmanov
Does anyone know if there's any difference between typing the question with typos and typing it correctly?
swyx
swyx here. we got a preview and some time with the API/DX team to ask FAQs about all the new APIs.
https://latent.space/p/openai-agents-platform
main fun part – since responses are stored for free by default now, how can we abuse the Responses API as a database :) (rough sketch below)
other fun questions that a HN crew might enjoy:
– hparams for web search – depth/breadth of search for making your own DIY Deep Research
– now that OAI is offering RAG/reranking out of the box as part of the Responses API, when should you build your own RAG? (i basically think somebody needs to benchmark the RAG capabilities of the Files API now, because the community impression has not really updated since the Assistants API first launched)
– what's the diff between the Agents SDK and OAI Swarm? (basically types, tracing, pluggable LLMs)
– will the `search-preview` and `computer-use-preview` finetunes be merged into GPT-5?
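rough, untested sketch of the "Responses API as a database" idea plus the web search depth knob – model names and the `search_context_size` value are my assumptions from the docs, not anything confirmed above:
```python
# untested sketch: (ab)using stored responses as a free key-value store,
# plus the web_search context-size knob for DIY deep research.
# assumes the openai python SDK with the Responses API; model names are illustrative.
from openai import OpenAI

client = OpenAI()

# "write": responses are stored server-side by default (store=True),
# so the response id is effectively a primary key you keep client-side.
resp = client.responses.create(
    model="gpt-4o-mini",
    input="remember this blob: user_prefs = {'theme': 'dark'}",
    metadata={"kind": "kv", "key": "user_prefs"},
)
record_id = resp.id

# "read": fetch the stored response back later by id.
stored = client.responses.retrieve(record_id)
print(stored.output_text)

# web search "hparam": the tool takes a context-size hint, which is the closest
# thing to a depth/breadth lever i've found so far.
deep = client.responses.create(
    model="gpt-4o",
    tools=[{"type": "web_search_preview", "search_context_size": "high"}],
    input="survey recent approaches to agent tracing frameworks",
)
print(deep.output_text)
```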
baxtr
A bit off topic, but the post comes in handy: can we settle the debate about what an agent really is? It seems like everyone has their own definition.
Ok, I’ll start: an agent is a computer program that utilizes LLMs for decision making.
zellyn
Notably not mentioned: Model Context Protocol
https://www.anthropic.com/news/model-context-protocol
rvz
They did not announce the price(s) in the presentation. Likely because they know it is going to be very expensive:
Web Search [0]
* $30 and $25 per 1K queries for GPT‑4o search and 4o-mini search.
File search [1]
* $2.50 per 1K queries and file storage at $0.10/GB/day
* First 1GB is free.
Computer use tool (computer-use-preview model) [2]
* $3 per 1M input tokens and $12 per 1M output tokens.
[0] https://platform.openai.com/docs/pricing#web-search
[1] https://platform.openai.com/docs/pricing#built-in-tools
[2] https://platform.openai.com/docs/pricing#latest-models
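To make "very expensive" concrete, a quick back-of-the-envelope using the list prices above (the usage volumes are invented for illustration):
```python
# back-of-the-envelope using the list prices above; the daily volumes are hypothetical
web_search_per_1k = 30.00   # GPT-4o search, $ per 1K queries
file_search_per_1k = 2.50   # $ per 1K queries
storage_per_gb_day = 0.10   # $ per GB per day (first 1 GB free)

queries_per_day = 10_000    # assume 10K web searches + 10K file searches per day
stored_gb = 50

daily_cost = (
    queries_per_day / 1_000 * web_search_per_1k      # 10 * $30.00 = $300.00
    + queries_per_day / 1_000 * file_search_per_1k   # 10 * $2.50  = $25.00
    + (stored_gb - 1) * storage_per_gb_day           # 49 * $0.10  = $4.90
)
print(f"~${daily_cost:.2f}/day, ~${daily_cost * 30:,.0f}/month")  # ~$329.90/day, ~$9,897/month
```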
Areibman
Nice to finally see one of the labs throwing its weight behind a much-needed simple abstraction. It's clear they learned from the incumbents (langchain et al): don't sell complexity.
Also very nice of them to include extensible tracing. The AgentOps integration is a nice touch for getting behind the scenes and understanding how handoffs and tool calls are triggered.
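For anyone who hasn't looked at the SDK yet, the handoff surface really is small. A sketch based on the openai-agents package (treat the exact signatures as approximate and check the docs):
```python
# sketch of a traced handoff with the Agents SDK (pip install openai-agents);
# signatures are approximate, verify against the SDK docs before relying on them.
from agents import Agent, Runner

billing = Agent(
    name="Billing agent",
    instructions="Answer billing questions only.",
)

triage = Agent(
    name="Triage agent",
    instructions="Route billing questions to the billing agent.",
    handoffs=[billing],  # the triage model may transfer control to `billing`
)

# runs are traced by default; the trace records the handoff and any tool calls,
# which is what integrations like AgentOps surface.
result = Runner.run_sync(triage, "Why was I charged twice this month?")
print(result.final_output)
```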
serjester
This is one of the few agent abstractions I've seen that actually seems intuitive. Props to the OpenAI team, seems like it'll kill a lot of bad startups.
ilaksh
The Agents SDK they linked to comes up 404.
BTW, I have something somewhat similar to parts of this (Responses and File Search) in MindRoot, using the task API: https://github.com/runvnc/mindroot/blob/main/api.md
That could be combined with the query_kb tool from the mr_kb plugin (in my mr_kb repo), which is arguably better than File Search because it allows searching multiple KBs.
Anyway, if anyone wants to help with my program, contribute a plugin via PR, or anything else, feel free to connect on GitHub, email, or Discord/Telegram (runvnc).
anorak27
I have built myself a much simpler and more powerful version of the Responses API, and it works with all LLM providers.
https://github.com/Anilturaga/aiide
nextworddev
This may be bad for Langflow, LangSmith, etc.
nowittyusername
How does this compare to MCP? Does anyone have any thoughts on the matter?
mentalgear
Well, I'll just wait 2-3 days until a (better) open-source alternative is released. :D
jumploops
> “we plan to formally announce the deprecation of the Assistants API with a target sunset date in mid-2026.”
The new Responses API is a step in the right direction, especially with the built-in “handoff” functionality.
For agentic use cases, the new API still feels a bit limited, as there’s a lack of formal “guardrails”/state machine logic built in.
> “Our goal is to give developers a seamless platform experience for building agents”
It will be interesting to see how they move towards this platform, my guess is that we’ll see a graph-based control flow in the coming months.
Now there are countless open-source solutions for this, but most of them fall short and/or add unnecessary obfuscation/complexity.
We’ve been able to build our agentic flows using a combination of tool calling and JSON responses, but there’s still a missing higher-order component that no one seems to have cracked yet.
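One way to picture that missing piece: an explicit state machine wrapped around tool-calling LLM steps, with guardrails as transition checks. A hypothetical sketch (the states, `call_llm`, and `guardrail_ok` are all illustrative, not from any SDK):
```python
# hypothetical sketch of a guardrailed, state-machine agent loop; not from any SDK.
from enum import Enum, auto

class State(Enum):
    PLAN = auto()
    ACT = auto()
    REVIEW = auto()
    DONE = auto()

def call_llm(prompt: str) -> dict:
    """Placeholder for a tool-calling LLM request that returns parsed JSON."""
    raise NotImplementedError

def guardrail_ok(step: dict) -> bool:
    """Transition check: schema validity, allowed tools, budget limits, etc."""
    return step.get("tool") in {"search", "summarize"}

def run(task: str, max_steps: int = 10) -> dict:
    state, context = State.PLAN, {"task": task, "history": []}
    for _ in range(max_steps):
        if state is State.DONE:
            break
        if state is State.PLAN:
            plan = call_llm(f"Plan the next step as JSON for: {context}")
            context["plan"] = plan
            # the guardrail gates the transition instead of trusting the model blindly
            state = State.ACT if guardrail_ok(plan) else State.REVIEW
        elif state is State.ACT:
            context["history"].append(call_llm(f"Execute {context['plan']}, return JSON"))
            state = State.REVIEW
        else:  # REVIEW
            verdict = call_llm(f"Is the task complete? Answer as JSON for: {context}")
            state = State.DONE if verdict.get("done") else State.PLAN
    return context
```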