Agentic Commerce Is Here. Is Government Ready to Plug In?

OpenAI’s transactional ChatGPT points to an API-first, assistant-driven model for service delivery — promising frictionless user journeys while demanding new guardrails for security, transparency and long-term interoperability.

When OpenAI announced that ChatGPT would begin supporting in-chat transactions, allowing users to buy items directly within the chat interface, it marked a shift that goes far beyond e-commerce. The company is laying the groundwork for what it calls agentic commerce, where conversational AI not only recommends a product but also completes the transaction on your behalf.

It’s a logical next step for a system that has already transformed how people search, learn and draft content. But for governments it may be even more consequential. It suggests a future in which residents could use ChatGPT as the “front door” to interact with public services: paying utility bills, renewing a driver’s license or scheduling a permit inspection, all through natural conversation.

That possibility raises both enormous opportunity and fundamental questions about control, trust and the architecture of government service delivery.

FROM CONVERSATION TO TRANSACTION


ChatGPT’s new transactional layer is built on what OpenAI calls the Agentic Commerce Protocol, developed in collaboration with Stripe. It enables instant checkout experiences for items offered by select merchants, starting with Etsy and Shopify stores in the United States. Users can discover a product, confirm their choice and complete payment without leaving the chat. OpenAI has said it will expand the capability to multiple items, more merchant categories and other countries in the near future.

The transaction flow is simple but powerful: ChatGPT authenticates the user, retrieves purchase details, handles the payment securely via Stripe and transmits the order to the merchant. The merchant still fulfills and ships the product, but the user’s relationship at the point of purchase is with ChatGPT.
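
OpenAI and Stripe define the protocol’s actual message formats, which this article does not reproduce. As a loose illustration of the four-step flow just described, though, the logic might resemble the Python sketch below, in which every function, field and value is a hypothetical placeholder rather than part of the real protocol.

```python
# Hypothetical sketch of the assistant-mediated flow described above:
# authenticate, confirm, pay, then hand the order to the merchant.
# Nothing here is taken from the real Agentic Commerce Protocol.

from dataclasses import dataclass
import uuid


@dataclass
class Order:
    item_id: str
    amount_cents: int
    currency: str = "usd"


def authenticate_user(token: str) -> dict:
    # Placeholder: a real flow would verify the session with an identity provider.
    return {"user_id": "user-123", "verified": bool(token)}


def create_payment(user: dict, order: Order) -> dict:
    # Placeholder: a real flow would hand this off to a payment processor such as Stripe.
    return {"charge_id": str(uuid.uuid4()), "amount_cents": order.amount_cents}


def submit_order(order: Order, charge_id: str) -> dict:
    # Placeholder: the merchant system records the paid order and handles fulfillment.
    return {"order_id": str(uuid.uuid4()), "item": order.item_id, "charge": charge_id}


def assistant_checkout(user_token: str, order: Order, user_confirmed: bool) -> dict:
    """The assistant orchestrates the steps; payment and fulfillment stay with others."""
    user = authenticate_user(user_token)
    if not user["verified"] or not user_confirmed:
        return {"status": "cancelled"}
    charge = create_payment(user, order)
    receipt = submit_order(order, charge["charge_id"])
    return {"status": "placed", "receipt": receipt}


if __name__ == "__main__":
    print(assistant_checkout("session-token", Order("mug-42", 1899), user_confirmed=True))
```

Even in this toy form, the division of labor mirrors the point above: the assistant owns the conversation and the orchestration, while payment and fulfillment remain with specialized systems.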

In effect, the AI becomes both assistant and agent, capable of completing real-world tasks that once required multiple steps or separate websites. And while the first wave focuses on retail goods, the underlying model could easily extend to services, including those provided by government.

WHY GOVERNMENTS SHOULD PAY ATTENTION


For years, digital service teams across state and local government have worked to make it easier for residents to transact online. Bill payment portals, licensing systems and citizen self-service tools have become standard. Yet the experience remains fragmented. Residents often need to know which agency provides which service, create separate logins and navigate forms that were never designed for mobile devices or plain-language use.

If ChatGPT or any conversational AI with payment capability becomes the interface people prefer, it could change that dynamic entirely. Imagine a resident saying, “I need to renew my car registration.” ChatGPT could identify the appropriate DMV system, verify eligibility, calculate the renewal fee and initiate payment, all without the resident needing to visit a state portal. The system could even remind the user next year when renewal is due or automatically complete routine tasks with consent.
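
To make that scenario concrete, here is a purely hypothetical sketch of the eligibility, fee and consent logic such an agent-facing renewal service might apply. The registry structure, fee amounts and eligibility rule are invented for illustration and do not reflect any actual DMV system.

```python
# Hypothetical sketch of the registration-renewal scenario above.
# Fee amounts, eligibility rules and data shapes are invented examples.

import datetime


def check_eligibility(plate: str, registry: dict) -> dict:
    record = registry.get(plate)
    if record is None:
        return {"eligible": False, "reason": "no record found"}
    # Example rule: registrations lapsed more than a year need an office visit.
    lapsed_days = (datetime.date.today() - record["expires"]).days
    if lapsed_days > 365:
        return {"eligible": False, "reason": "lapsed too long; in-person renewal required"}
    return {"eligible": True, "record": record}


def calculate_fee(record: dict) -> int:
    base_cents = 6500                                              # made-up flat renewal fee
    late_cents = 1500 if record["expires"] < datetime.date.today() else 0
    return base_cents + late_cents


def renew_registration(plate: str, registry: dict, consent: bool) -> dict:
    """Check eligibility, quote the fee, and complete payment only with explicit consent."""
    eligibility = check_eligibility(plate, registry)
    if not eligibility["eligible"]:
        return {"status": "ineligible", "reason": eligibility["reason"]}
    fee = calculate_fee(eligibility["record"])
    if not consent:
        return {"status": "quoted", "fee_cents": fee}              # pause and ask the resident first
    return {"status": "renewed", "fee_cents": fee}


if __name__ == "__main__":
    registry = {"ABC123": {"owner": "resident-1", "expires": datetime.date(2025, 11, 30)}}
    print(renew_registration("ABC123", registry, consent=False))   # quote the fee
    print(renew_registration("ABC123", registry, consent=True))    # renew once consent is given
```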

For governments, this model carries several potential benefits:

  • Frictionless user experience: Residents could complete tasks without navigating multiple portals or remembering passwords.
  • Operational efficiency: AI could handle high-volume, low-complexity transactions, reducing call center and counter workloads.
  • Data and analytics: Governments could gain insight into user intent and service demand patterns.
  • Accessibility and inclusion: A conversational interface may help residents who struggle with traditional websites or forms.
  • Integration potential: If connected through secure APIs, ChatGPT could serve as a unified entry point across agencies and service types.

In short, governments could focus on building reliable, secure and standards-based back-end systems, while letting residents interact through whichever digital front end they prefer: AI assistants, mobile apps or kiosks.

THE RISKS AND REALITIES


Still, the shift from conversation to transaction introduces a series of practical and policy challenges that public-sector leaders cannot ignore.

1. Identity and Security

Government transactions require verified identity — not just a username and password, but proof that the person interacting is who they say they are. If ChatGPT becomes a payment intermediary, how will that identity assurance work? Integrations with government-issued digital IDs or third-party authentication providers would be essential. Multifactor verification, consent management and audit trails must be built in from the start.
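
One way to picture those requirements is as an agency-side authorization gate that refuses to act for an AI intermediary unless identity, a second factor and consent are all in place, and that logs every check. The sketch below is illustrative only; the assurance labels are borrowed loosely from NIST’s identity assurance levels, and all field names are hypothetical.

```python
# Illustrative sketch: a gate that requires verified identity, a second factor
# and explicit consent before an AI intermediary may act, with an audit record
# for every check. Labels and fields are simplified placeholders.

import datetime
import json

AUDIT_LOG = []


def record_audit(event: str, details: dict) -> None:
    AUDIT_LOG.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event": event,
        "details": details,
    })


def authorize_transaction(session: dict, action: str) -> bool:
    """All three checks must pass; the result is logged either way."""
    checks = {
        "identity_verified": session.get("identity_assurance") in ("IAL2", "IAL3"),
        "second_factor": session.get("mfa_passed", False),
        "consent_given": action in session.get("consented_actions", []),
    }
    record_audit("authorization_check", {"action": action, **checks})
    return all(checks.values())


if __name__ == "__main__":
    session = {
        "identity_assurance": "IAL2",              # e.g., proofed through a state digital ID
        "mfa_passed": True,
        "consented_actions": ["renew_registration"],
    }
    print(authorize_transaction(session, "renew_registration"))
    print(json.dumps(AUDIT_LOG, indent=2))
```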

2. Privacy and Data Governance

Transactions mean personal data: names, addresses, license numbers, financial information. When AI sits in the middle of those exchanges, questions arise about data ownership and retention. Governments will need explicit policies governing what data an AI platform can access, how long it can store it and what purposes it can use it for. Transparent agreements and minimal data sharing will be crucial to preserving public trust.
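
One hypothetical way to operationalize such a policy is to express the agreement as configuration and enforce it in code, so that only approved fields leave the agency and only for approved purposes. The field names, retention period and purposes below are invented for the example.

```python
# Hypothetical sketch: a data-sharing agreement expressed as configuration,
# enforced by filtering what an AI platform can receive and for what purpose.
# All fields, purposes and limits are invented examples.

SHARING_POLICY = {
    "allowed_fields": {"name", "license_number", "renewal_fee"},
    "retention_days": 30,                     # how long the platform may keep the data
    "allowed_purposes": {"complete_transaction", "send_receipt"},
}


def minimize_payload(record: dict, purpose: str, policy: dict = SHARING_POLICY) -> dict:
    """Release only policy-approved fields, and only for an approved purpose."""
    if purpose not in policy["allowed_purposes"]:
        raise PermissionError(f"purpose '{purpose}' is not covered by the agreement")
    return {key: value for key, value in record.items() if key in policy["allowed_fields"]}


if __name__ == "__main__":
    record = {
        "name": "A. Resident",
        "ssn": "000-00-0000",                 # never shared under this example policy
        "license_number": "D1234567",
        "renewal_fee": 65,
    }
    print(minimize_payload(record, "complete_transaction"))
```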

3. Liability and Error Handling

If ChatGPT submits the wrong payment or misinterprets a user’s intent, who is responsible? The user? The government? The platform? Legal frameworks for AI-mediated transactions are still emerging. Public agencies will need contracts that clearly define liability, dispute resolution and refund procedures before piloting these capabilities.

4. Platform Dependence and Lock-In

The convenience of ChatGPT as a front door could also create dependence on a private platform. If millions of residents start paying taxes or renewing licenses through ChatGPT, what happens if OpenAI changes its terms of service, fee structure or API access? Governments must preserve interoperability and maintain their own service channels to avoid being locked into a single ecosystem.

5. Equity and Digital Divide

Not every resident uses ChatGPT, and not every community has equal access to AI-enabled devices or reliable Internet. Governments will need to ensure that new AI channels supplement rather than replace traditional service delivery. Digital equity policies must remain a central consideration.

6. Regulatory and Legal Constraints

Payment processing, data storage and accessibility are all heavily regulated. Any integration between AI platforms and government systems will need to comply with procurement laws, payment card industry standards, data sovereignty requirements and open records statutes.

PREPARING FOR THE FUTURE


Governments don’t need to wait for ChatGPT to mature before acting. There are pragmatic steps agencies can take now to prepare for this emerging paradigm.

  1. Modernize Core Systems With APIs: Legacy systems that cannot connect through secure APIs will remain barriers. Modernization efforts should prioritize modularity and standards-based interfaces so that AI agents can safely access and transact with them.
  2. Experiment in Controlled Environments: Agencies can pilot AI-driven transactions in sandbox settings or with low-risk services such as event registrations, small permit fees or parking tickets. These limited pilots can help surface technical and policy issues early.
  3. Define Governance and Guardrails: Establish clear boundaries for what AI agents can and cannot do. Transactions over certain thresholds might require human confirmation, and every interaction should generate a verifiable audit trail; a sketch of such a guardrail follows this list.
  4. Engage With Platform Providers Early: Governments should engage platform providers and payment vendors now to help shape standards for public-sector integrations, including data minimization, fee transparency and accessibility requirements.
  5. Communicate Clearly With the Public: Residents need to know when they’re interacting with an AI, what data it’s accessing and what recourse they have if something goes wrong. Transparent user education will build trust and prevent backlash.
  6. Build for Inclusion and Accessibility: As AI becomes another digital channel, agencies must measure who uses it and who doesn’t to ensure equitable access. Multilingual support and alternative contact methods remain essential.
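
As a hypothetical illustration of the guardrail in step 3, a simple policy gate might let an AI agent complete low-value, routine transactions automatically while holding anything above a set threshold for human confirmation. The threshold and action names below are invented for the example.

```python
# Hypothetical guardrail sketch: auto-approve small transactions, hold larger
# ones for human review. The threshold and action names are invented examples.

HUMAN_REVIEW_THRESHOLD_CENTS = 10_000        # example threshold: $100


def gate_transaction(action: str, amount_cents: int, human_approved: bool = False) -> dict:
    """Allow low-value transactions; require explicit human approval above the threshold."""
    if amount_cents <= HUMAN_REVIEW_THRESHOLD_CENTS:
        return {"action": action, "decision": "auto_approved", "amount_cents": amount_cents}
    if human_approved:
        return {"action": action, "decision": "approved_after_review", "amount_cents": amount_cents}
    return {"action": action, "decision": "held_for_human_review", "amount_cents": amount_cents}


if __name__ == "__main__":
    print(gate_transaction("parking_ticket_payment", 4_500))         # auto-approved
    print(gate_transaction("property_tax_payment", 250_000))         # held for review
    print(gate_transaction("property_tax_payment", 250_000, True))   # approved after human review
```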

Editor's note: This story was initially edited by generative AI and then edited again by humans for accuracy, bias and clarity. Have feedback? Send it to Lauren Kinkade.

Joe Morris is the chief innovation officer for e.Republic, Industry Insider — California’s parent company. This piece was originally published by Government Technology, Industry Insider's sister publication.