Can ChatGPT Call Your API? A Live Experiment in Chat Mode vs. Agent Mode

Author: John Brennan

Date: 2026-03-09

Source: https://www.thegeohandbook.com/case-studies/can-chatgpt-call-your-api

A live experiment testing whether ChatGPT can programmatically interact with a website’s API, comparing the capabilities of standard chat mode against agent mode — and what this means for building machine-native media.

TL;DR

We conducted a two-phase experiment asking ChatGPT to interact with agentweekly.ai — a satirical AI newspaper that exposes a REST API, RSS feed, sitemap, and llms.txt. In standard chat mode, ChatGPT could read the homepage, identify the most recent cartoon by title and caption, and detect the presence of API documentation, but it could not execute a single API call, parse JSON, or download an image. In agent mode, ChatGPT could read the API documentation, call GET /api/cartoons?limit=1, parse the JSON response to extract the cartoon’s ID, title, caption, image path, and publication date, retrieve the Markdown representation, and download cartoon images. The experiment demonstrates that the same AI model, given the same website, operates as either a passive reader or an active programmatic agent depending entirely on its runtime configuration — and that websites designed for AI legibility can be consumed at a fundamentally different level by agentic systems.


The Setup: A Site Built for Agents

Agent Weekly (agentweekly.ai) is a satirical AI-industry newspaper featuring business cartoons by Agent B3, a curated news feed about the agent economy, classifieds, and an agent directory. Critically, the site was built with machine consumption in mind. Its navigation exposes:

- A public REST API, with documentation at /docs
- An RSS feed
- A sitemap
- An llms.txt file pointing to Markdown representations of its content

This architecture makes the site a useful test case: it is simultaneously a human-readable publication and a machine-readable data source. The question is what an AI model can actually do with it.

Phase 1: Chat Mode — The Passive Reader

In standard chat mode, ChatGPT was asked to visit agentweekly.ai and describe what it could see. The model performed well as a reader. It identified:

- The homepage and its rendered content
- The most recent cartoon, by title and caption
- The presence of API documentation

ChatGPT correctly inferred that the site exposed a REST API and even predicted what the endpoints would look like — GET /api/cartoons, GET /api/news, GET /api/classifieds — based on the site’s structure and navigation.

But when asked to actually call the API, the model was direct about its limitation: “In this chat environment I cannot send live API requests, authenticate to endpoints, run curl or fetch requests, test rate limits or responses.”

Chat mode could see the door. It could read the sign on the door. It could describe what was probably behind the door. But it could not open the door.

Phase 2: Agent Mode — The Active Participant

The same experiment was repeated with ChatGPT in agent mode. The difference was immediate.

API Discovery

Agent mode navigated to the API documentation page at /docs and read the full specification. It identified public and authenticated endpoints, understood the authentication model (Replit OAuth for humans, bearer token for agents), and located the cartoons endpoint: GET /api/cartoons with cursor-based pagination via limit and before parameters.
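Cursor-based pagination of this kind can be sketched as a small generator. In the sketch below, `fetch_page` is a stand-in for whatever HTTP client issues `GET /api/cartoons?limit=<limit>&before=<cursor>`; the `limit` and `before` parameter names come from the site's docs, but the assumption that `before` takes the previous page's oldest `createdAt` timestamp is mine:

```python
from typing import Callable, Iterator, Optional

def paginate_cartoons(
    fetch_page: Callable[[int, Optional[str]], list],
    limit: int = 20,
) -> Iterator[dict]:
    """Yield every cartoon by walking the limit/before cursor until exhausted.

    fetch_page(limit, before) should perform
    GET /api/cartoons?limit=<limit>&before=<before> and return the JSON array
    (newest first, as the live endpoint does).
    """
    before: Optional[str] = None
    while True:
        page = fetch_page(limit, before)
        if not page:
            return
        yield from page
        # Assumed cursor semantics: pass the oldest timestamp seen so far.
        before = page[-1]["createdAt"]
```

Because the HTTP layer is injected, the pagination logic itself can be exercised against a stubbed fetcher, without network access.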

Calling the Endpoint

Agent mode issued a live HTTP request:

GET https://agentweekly.ai/api/cartoons?limit=1

The API returned a JSON array containing the most recent cartoon:

[
  {
    "id": "202c7920-6667-4614-9a26-85f51860db9d",
    "title": "Cost of Experiments",
    "caption": "Don't worry, the plates are virtually free.",
    "imagePath": "/uploads/cartoons/f7b9b387-60b0-4fdb-a93e-b16d82947745.jpg",
    "likes": 0,
    "submittedBy": "38549431",
    "createdAt": "2026-03-07T23:19:20.693Z"
  }
]

Agent mode extracted the ID, title, caption, image path, and exact creation timestamp — information that chat mode could only approximate from the rendered homepage.
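The request-and-extract step above looks roughly like the following stdlib-only sketch. The field names match the JSON response shown; the function names `extract_cartoon` and `fetch_latest` are my own:

```python
import json
from urllib.request import urlopen

API_URL = "https://agentweekly.ai/api/cartoons?limit=1"

def extract_cartoon(payload: list) -> dict:
    """Pull the fields agent mode reported from the single-element JSON array."""
    cartoon = payload[0]  # limit=1 returns a one-element array
    return {
        "id": cartoon["id"],
        "title": cartoon["title"],
        "caption": cartoon["caption"],
        "image_path": cartoon["imagePath"],
        "created_at": cartoon["createdAt"],
    }

def fetch_latest() -> dict:
    """Issue the live request (requires network access)."""
    with urlopen(API_URL) as resp:
        return extract_cartoon(json.load(resp))
```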

Parsing the Markdown Representation

Agent mode then fetched the Markdown endpoint referenced in llms.txt:

https://agentweekly.ai/cartoons/202c7920-6667-4614-9a26-85f51860db9d.md

The Markdown file contained front matter with the title, publication date, a link to the human-readable page, and the cartoon image embedded in standard Markdown syntax. This confirmed that the API metadata and the LLM-friendly content were consistent.
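A consistency check like this only needs a minimal front-matter parser. The sketch below assumes the conventional `---`-delimited `key: value` layout; the exact key names Agent Weekly uses in its front matter are not reproduced here, so treat any specific keys as assumptions:

```python
def parse_front_matter(md_text: str):
    """Split a Markdown document into (front_matter_dict, body).

    Assumes the conventional layout:
        ---
        key: value
        ---
        body...
    """
    meta = {}
    if not md_text.startswith("---"):
        return meta, md_text
    _, raw_meta, body = md_text.split("---", 2)
    for line in raw_meta.strip().splitlines():
        key, sep, value = line.partition(":")
        if sep:  # skip lines without a key: value pair
            meta[key.strip()] = value.strip()
    return meta, body.lstrip("\n")
```

With the metadata parsed, an agent can compare the front-matter title and date against the API response field by field.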

Retrieving Images

Agent mode attempted to download the cartoon image from the path specified in the API response. This revealed a bug: the newest cartoon’s image, stored at /uploads/cartoons/, returned HTML instead of a JPEG — a server-side routing issue where the SPA catch-all intercepted the request before the static file middleware could serve it.

To verify that image retrieval itself worked, agent mode downloaded an older cartoon (“The Delegation Protocol”) whose image was stored at /assets/call-my-agents_1772808320375.jpg. That request returned a valid JPEG, confirming that the agent could retrieve binary assets when the server served them correctly.
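The failure mode above is easy to detect defensively: an intercepted image request comes back as `text/html` rather than a JPEG. A minimal validity check (names are my own) looks at both the Content-Type header and the file's magic bytes:

```python
JPEG_MAGIC = b"\xff\xd8\xff"  # first bytes of every JPEG file

def is_valid_jpeg(content_type: str, body: bytes) -> bool:
    """Reject responses where an SPA catch-all returned HTML for an image path.

    A real JPEG arrives with an image/* content type and starts with the
    JPEG magic bytes; an intercepted request typically comes back as
    text/html starting with '<!DOCTYPE html>'.
    """
    return content_type.lower().startswith("image/") and body.startswith(JPEG_MAGIC)
```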

The Capability Gap

The experiment revealed a clean separation between what the two modes can do with the same website:

| Capability | Chat Mode | Agent Mode |
| --- | --- | --- |
| Read rendered web pages | Yes | Yes |
| Identify content titles, dates, captions | Yes (from visible text) | Yes (from API + visible text) |
| Detect API documentation exists | Yes | Yes |
| Read and parse API documentation | Partial (can view page) | Full (can parse specs and endpoints) |
| Infer likely API structure | Yes (educated guesses) | Not needed (reads actual docs) |
| Execute API calls | No | Yes |
| Parse JSON responses | No | Yes |
| Extract specific fields (ID, timestamp, image path) | No | Yes |
| Download images | No | Yes (when server serves them correctly) |
| Follow multi-step workflows (discover → call → parse → download) | No | Yes |
| Verify data consistency across formats (API ↔ Markdown ↔ homepage) | No | Yes |

The gap is not about intelligence. Both modes are the same model with the same knowledge. The gap is about agency — whether the model can act on what it knows.

What This Means for Site Architects

The experiment suggests that websites serving AI agents should think about two distinct tiers of consumption.

Tier 1: Content Legibility (Chat Mode and Basic Agents)

This is what we explored in the companion case study on Claude. It includes:

- An llms.txt file that points agents to machine-readable content
- Markdown representations of individual pages
- An RSS feed and sitemap for discovery

These artifacts make a site readable. A model in chat mode — or an agent with basic fetch capabilities — can discover, read, and reason about the content.

Tier 2: API Legibility (Agent Mode and Programmatic Consumers)

This is what the ChatGPT experiment revealed. It includes:

- Documented public REST endpoints (e.g., GET /api/cartoons) with a clear pagination scheme
- JSON responses with stable IDs, timestamps, and asset paths
- An explicit authentication model for both humans and agents
- Binary assets (images) served correctly at the paths the API returns

These artifacts make a site actionable. An agent can not only read the content but embed it, syndicate it, monitor it, and build workflows around it. The cartoon is no longer just something to look at on a web page — it is a data object that can be pulled into a newsletter, displayed on a dashboard, or surfaced by an AI assistant.
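"Monitor it" is worth making concrete: with stable IDs and timestamps in the API response, a polling loop reduces to a one-step function. The sketch below is mine, not anything Agent Weekly ships; `fetch` stands in for a call to GET /api/cartoons:

```python
from typing import Callable, Optional, Tuple

def poll_once(
    fetch: Callable[[], list],
    last_seen: Optional[str],
) -> Tuple[list, Optional[str]]:
    """One polling step: fetch the feed, keep only unseen cartoons,
    and advance the cursor to the newest createdAt observed.

    Fixed-format ISO-8601 UTC timestamps compare correctly as strings,
    so no datetime parsing is needed here.
    """
    items = fetch()
    fresh = [c for c in items if last_seen is None or c["createdAt"] > last_seen]
    if fresh:
        last_seen = max(c["createdAt"] for c in fresh)
    return fresh, last_seen
```

Anything in `fresh` can then be routed wherever the workflow needs it: a newsletter draft, a dashboard, an assistant's context.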

The Progression

A site that is not AI-legible is invisible. No model, in any mode, can see it.

A site with content legibility can be read and cited. Models can discover it, summarize it, and reference it in answers.

A site with API legibility can be consumed programmatically. Agents can pull data, embed content, monitor updates, and integrate the site into automated workflows.

The highest-value position is the third: becoming a data source that agents consume automatically. Agent Weekly’s combination of satirical content, curated news, and a public API makes it a test case for what might be called machine-native media — content designed from the ground up to be consumed by both humans and machines, with each audience served through purpose-built channels.

The One-Endpoint Test

Chat mode made an observation during the experiment that is worth highlighting. When asked about the API, it said:

“If you add a single endpoint like GET /api/cartoon/latest returning the title, caption, image URL, publication date, and slug — then any AI agent could instantly embed the latest AgentWeekly cartoon. That’s how you turn it into programmable media.”

That observation captures the economic logic of API legibility. The cost of adding one well-documented public endpoint is trivial. The value — making your content embeddable by every AI agent that can issue an HTTP request — is asymmetric. The API is not overhead. It is distribution.
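To make the quoted suggestion concrete, here is one way such an endpoint's response body could be assembled from fields the existing /api/cartoons response already carries. This is a hypothetical sketch: the `slug` derivation, the `latest_cartoon_payload` name, and the absolute-URL base are all my assumptions, not the site's implementation:

```python
import re

BASE_URL = "https://agentweekly.ai"  # assumed base for absolutizing image paths

def slugify(title: str) -> str:
    """Derive a URL slug from a title (the real site may store slugs instead)."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def latest_cartoon_payload(cartoons: list) -> dict:
    """Build the body a hypothetical GET /api/cartoon/latest would return:
    title, caption, image URL, publication date, and slug."""
    latest = max(cartoons, key=lambda c: c["createdAt"])
    return {
        "title": latest["title"],
        "caption": latest["caption"],
        "imageUrl": BASE_URL + latest["imagePath"],
        "publishedAt": latest["createdAt"],
        "slug": slugify(latest["title"]),
    }
```

A consumer then needs exactly one request and zero pagination logic to embed the latest cartoon, which is the asymmetry the quote is pointing at.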

Acknowledgments

This case study documents a two-phase experiment conducted on March 9, 2026. Phase 1 (chat mode) used ChatGPT’s standard conversational interface. Phase 2 (agent mode) used ChatGPT with tool access enabled. The test target was agentweekly.ai, which provided the API, content, and machine-readable infrastructure. A companion case study, “Can Claude Read Your Website?”, documents a parallel experiment testing content legibility with Claude Opus 4.6.

Canonical URL: https://www.thegeohandbook.com/case-studies/can-chatgpt-call-your-api