Most API companies still think they are selling to developers.

They are not.

They are selling to developers who are increasingly using AI to explore, compare, test, wire up, and even operate those APIs. And in more cases than people realize, they are also selling to the agents those developers are building.

That changes the whole game.

Because once machines become part of the evaluation and usage path, your API is no longer judged only by whether a smart human can eventually figure it out. It is judged by whether a human-plus-machine workflow can understand it, trust it, and use it quickly enough to make the product feel worth adopting.

I think this is one of the most important GTM shifts in the developer market right now. It sounds technical, but it is really commercial. The API is no longer just a product surface. It is part of the buying motion, the onboarding path, the trust layer, and increasingly the automation layer the customer wants to build on top of.

If your API is not designed for that world, you are not just creating a developer-experience problem. You are creating a growth problem.

The market already tells the story

The clearest recent signal comes from Postman’s 2025 State of the API Report. It says 89% of developers now use generative AI in their daily work, but only 24% actively design APIs with AI agents in mind. That is a huge gap. Developers are already building with AI-native workflows, but most APIs are still designed as if the only consumer is a human reading documentation and manually stitching requests together.

That same report gets even more specific. It says 70% of developers are aware of MCP (the Model Context Protocol), but only 10% use it regularly, which tells me the direction is clear even if adoption is still early. It also says 51% of developers see unauthorized or excessive API calls from AI agents as a top security risk, and 65% of organizations now generate revenue from APIs. Those data points together tell a very useful story:

  • AI is already normal in development work

  • machine-oriented API use is still underdesigned

  • governance is becoming more important, not less

  • APIs are now commercially central enough that this matters beyond engineering

That last point is easy to miss.

If APIs are becoming profit drivers, then API design is no longer only a technical concern. It is part of GTM.

The infrastructure side is moving fast too

This is not just a research-firm trend story. The tooling layer is already adapting around it.

Anthropic’s official documentation for the Model Context Protocol describes MCP as an open protocol that standardizes how applications provide context to LLMs — essentially a universal connection layer between models and tools. Anthropic now supports MCP across multiple products, including Claude Code and its Messages API.

OpenAI is moving in the same direction from the agent tooling side. In its March 2025 launch of new tools for building agents, OpenAI introduced the Responses API, specifically framing agents as systems that independently accomplish tasks on behalf of users. Then in its May 2025 update, it added support for remote MCP servers and emphasized that hundreds of thousands of developers were already using the API to build agentic applications.

That matters because it makes the shift concrete.

This is not a speculative future where maybe some people will use AI to call APIs.

The major infrastructure providers are already building around that assumption.

The harsh truth

A lot of APIs are still designed like they are selling to someone patient.

That used to be good enough.

A strong developer could read the docs, reverse-engineer the intent, ask a teammate, hunt through examples, guess around odd error messages, and eventually get something working.

That is not the bar anymore.

The new bar is much tougher and much more interesting:

Can a developer using AI — or an AI agent working on the developer’s behalf — understand your API without needing tribal knowledge?

If the answer is no, the problem is not just usability.

It is GTM friction.

Because the first sales motion for a lot of developer products now looks like this:

  • the developer asks AI for options

  • AI summarizes likely tools

  • the developer scans docs and examples

  • AI writes the first implementation attempt

  • the developer or agent tests the result

  • early errors either build trust or destroy it

That means your API is now being sold through:

  • schemas

  • examples

  • error clarity

  • rate-limit behavior

  • auth predictability

  • machine-readable structure

  • how cleanly the API fits into an AI-assisted workflow

This is not a normal product detail anymore.

It is part of distribution and conversion.

Why this changes GTM, not just engineering

I think there are four commercial effects here.

1. The shortlist gets built differently

If developers are increasingly using AI to evaluate tools, your product can get filtered out before a human ever reaches the “book demo” stage. Weak documentation, inconsistent endpoint design, vague auth flows, and poor examples do not just create frustration later. They weaken early discovery and confidence.

2. Activation depends on machine legibility

A lot of first-use friction now gets amplified by AI. That sounds counterintuitive, but it is true. AI can speed up integration if the API is clean. If the API is inconsistent, under-documented, or too full of edge-case weirdness, AI just helps the user hit the wall faster.

3. Security and governance become part of the product story

Postman’s data is useful here again: developers are excited about agents, but they are also worried about unauthorized access and over-scoped behavior. So the new sales question is not only “does your API work well?” It is also “can we trust it in an agent-driven environment?”

4. The API becomes part of the brand promise

If your company says it is modern, composable, and built for the next generation of software, the API has to prove that. In a machine-mediated buying world, your product promise becomes easier to test and easier to disprove.

My rule: build for the AI-assisted developer first, and the agent second

I do not think most companies should overreact and start building only for autonomous agents tomorrow.

That is too easy to turn into hype.

The stronger move is simpler: build for the reality that the developer evaluating your product is already AI-assisted.

That means your API should work extremely well when:

  • a developer asks AI to summarize it

  • AI generates first-pass code against it

  • the developer uses docs and examples to debug that code

  • the developer later hands part of the workflow to an agent

That sequence feels much more commercially real than pretending the market has already become fully agentic.

The practical fix: run an agent-readiness API audit

If I were helping an API company fix this today, I would not start with a giant redesign. I would run a very practical audit in six parts.

Step 1: Check machine readability

Can your API be understood structurally?

I would inspect:

  • OpenAPI completeness

  • example quality

  • typed errors

  • consistent naming

  • predictable response formats

  • clear auth patterns

If the API requires too much hidden knowledge to use safely, it will not perform well in AI-assisted development.
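Part of this check can be automated. Below is a minimal sketch of a completeness linter that walks a parsed OpenAPI 3.x spec and flags operations missing the pieces an AI assistant leans on most: descriptions, documented error responses, and response examples. The function name and findings format are my own; adapt the rules to your spec.

```python
# Minimal OpenAPI completeness check. Assumes `spec` is a parsed
# OpenAPI 3.x document (a dict, e.g. loaded from YAML or JSON).

def audit_machine_readability(spec: dict) -> list[str]:
    """Flag operations missing descriptions, error responses, or examples."""
    findings = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            label = f"{method.upper()} {path}"
            if not op.get("description"):
                findings.append(f"{label}: missing description")
            responses = op.get("responses", {})
            # Typed errors: at least one documented 4xx response.
            if not any(code.startswith("4") for code in responses):
                findings.append(f"{label}: no documented error responses")
            # Examples: every response body should carry at least one.
            for code, resp in responses.items():
                for media in resp.get("content", {}).values():
                    if "example" not in media and "examples" not in media:
                        findings.append(f"{label}: {code} response has no example")
    return findings
```

Running this against your real spec will not prove the API is AI-legible, but a long findings list is a strong early warning.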

Step 2: Test the first-use path with AI in the loop

This is where teams learn quickly.

Ask a developer and an AI assistant to:

  • authenticate

  • make the first useful call

  • handle one failure

  • implement one realistic use case

Then record where the process breaks.

Not theoretically. Actually.

If the AI generates misleading code because your examples are weak, that is not the model’s fault alone. It is feedback about the product surface.
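A lightweight way to make "record where the process breaks" concrete is a small harness that runs each first-use step and captures the first failure. The step bodies below are placeholders (in a real audit they would call your actual API with AI-generated client code); only the recording pattern is the point.

```python
# A tiny harness for recording where an AI-assisted first-use run breaks.
# Step implementations here are illustrative placeholders, not a real API.

def run_first_use_audit(steps):
    """Run named steps in order; record success or the failure message."""
    report = []
    for name, step in steps:
        try:
            step()
            report.append((name, "ok"))
        except Exception as exc:
            report.append((name, f"failed: {exc}"))
    return report

def authenticate():
    pass  # placeholder: e.g. exchange an API key for a token

def first_useful_call():
    pass  # placeholder: e.g. list one resource

def handle_one_failure():
    # A typical break point AI-generated code hits in practice:
    raise RuntimeError("error body was HTML, not JSON")

report = run_first_use_audit([
    ("authenticate", authenticate),
    ("first useful call", first_useful_call),
    ("handle one failure", handle_one_failure),
])
```

The output is a step-by-step record you can diff across runs, which makes "where does onboarding actually break" a measurable question instead of an anecdote.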

Step 3: Add “when to use this” context, not just “how”

This is one of the biggest missed opportunities in API docs.

Machines and humans both need more than syntax. They need intent.

For key endpoints or workflows, explain:

  • when to use this

  • when not to use this

  • common prerequisites

  • expected tradeoffs

  • likely failure modes

That makes both onboarding and AI interpretation much stronger.
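One way to make that intent machine-consumable is to keep it as structured data next to the schema, not just prose in the docs. The field names and endpoint below are illustrative, not a standard.

```python
# Intent metadata for one hypothetical endpoint, expressed as structured
# data that both docs tooling and an AI assistant can ingest.

ENDPOINT_INTENT = {
    "endpoint": "POST /invoices",
    "use_when": "creating a new invoice for an existing customer",
    "avoid_when": "updating an existing invoice; use the update endpoint instead",
    "prerequisites": ["customer record exists", "billing scope on the token"],
    "tradeoffs": "synchronous; large batches belong on an async import path",
    "failure_modes": ["conflict if a draft already exists",
                      "validation error on currency mismatch"],
}
```

Even this small amount of structure gives an AI assistant something to reason with beyond syntax: when to reach for the endpoint, and when not to.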

Step 4: Design for safe automation

This is where the GTM and security stories meet.

If you want customers to feel confident about agent use, your API should make safe behavior easier.

That means:

  • scoped permissions

  • clear rate-limit contracts

  • audit-friendly actions

  • predictable auth refresh

  • obvious boundaries between read and write behavior

  • documentation that explains safe patterns, not just possible ones
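The read/write boundary in particular can be enforced mechanically. Here is a minimal sketch, assuming a scope-based token model (the scope names "read" and "write" are my assumption, not any specific vendor's):

```python
# Sketch of a read/write boundary enforced by token scope.
# Scope names and the permission model are illustrative assumptions.

READ_METHODS = {"GET", "HEAD", "OPTIONS"}

def is_allowed(method: str, token_scopes: set[str]) -> bool:
    """Writes require an explicit 'write' scope; reads accept either scope."""
    if method.upper() in READ_METHODS:
        return "read" in token_scopes or "write" in token_scopes
    return "write" in token_scopes
```

The design point: an agent running with a read-only token should be physically unable to mutate state, so a customer can grant automation broad visibility without granting broad authority.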

Step 5: Build one “best tasks for agents” page

I love this as a product-marketing move.

Do not vaguely say your API is “AI-ready.”

Say exactly which jobs it is good for.

Examples:

  • triaging tickets

  • enriching CRM records

  • building internal dashboards

  • pulling event data into alerts

  • updating structured systems with human approval

This turns a vague future-facing claim into a clear commercial entry point.

Step 6: Measure machine-assisted activation

I would track:

  • time to first successful call

  • documentation pages used before first success

  • common AI-generated implementation failures

  • auth-related drop-off

  • percentage of successful first-use flows that involve AI assistance

  • conversion rate by use case, not only by signup source

That gives the GTM team a much more accurate picture of what the API is actually doing in the real buying journey.
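The first metric on that list is usually the easiest to stand up. A sketch, assuming a simple event log of `(user, timestamp_seconds, event_name)` tuples (the event shape and names are my assumption):

```python
# Time-to-first-successful-call from a flat event log.
# Event shape assumed: (user, timestamp_seconds, event_name).

def time_to_first_success(events):
    """Per user: seconds from signup to first successful API call.
    Users with no successful call yet are omitted."""
    signup, first_ok, out = {}, {}, {}
    for user, ts, name in sorted(events, key=lambda e: e[1]):
        if name == "signup":
            signup.setdefault(user, ts)
        elif name == "api_call_success" and user not in first_ok:
            first_ok[user] = ts
    for user, start in signup.items():
        if user in first_ok:
            out[user] = first_ok[user] - start
    return out
```

Segmenting this number by whether the first-use flow involved AI assistance is what turns it from an engineering metric into a GTM one.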

A worked example

Imagine two API products in the same category.

Product A

  • strong feature set

  • docs written mostly for experienced humans

  • inconsistent examples

  • unclear errors

  • basic OpenAPI coverage

  • vague “AI-ready” marketing

Product B

  • slightly smaller feature set

  • clean, structured docs

  • detailed examples

  • typed errors

  • strong auth explanation

  • one “best workflows for agents” page

  • consistent endpoint patterns

Five years ago, Product A might still win because smart developers could brute-force their way through the friction.

Now I think Product B has a real GTM advantage.

Why?

Because it is easier to discover, easier to trial, easier to understand, easier to integrate with AI help, and easier to trust in a machine-assisted workflow.

That means:

  • better activation

  • faster team expansion

  • cleaner word of mouth

  • lower support burden

  • stronger enterprise conversations later

That is not just good engineering. That is better commercial design.

What I would do this quarter

If I were leading GTM for an API business, I would run this 30-day plan.

  1. Ask AI assistants to compare your API to the top alternatives in your category.

  2. Test whether the generated explanation is correct, sharp, and useful.

  3. Run three real first-use journeys with AI-assisted developers.

  4. Fix the top five machine-readability issues in docs, schemas, or examples.

  5. Publish one “best tasks for agents” and one “how to use this safely with AI” page.

  6. Review activation data through the lens of first success, not just signups.

That is enough to get much smarter quickly.

My practical take

One of the more useful truths in developer GTM right now is that the API is no longer only being sold at the level of product marketing or sales.

It is being sold in the moment an AI-assisted developer tries to use it.

That is a very different market dynamic than the one a lot of companies are still designed for.

The good news is that this creates an advantage for serious builders.

You do not need to out-hype the category. You need to be easier to understand, easier to wire up, safer to automate, and clearer about where your API creates value.

That is what wins in a market where developers are using AI and agents are becoming real API consumers.

Your API is still selling to developers.

It is just not selling to developers alone anymore.
