The Chat Pivot: Why Web Apps Are Replacing Menus with Conversations

From Cloudflare's Cloudy agent to GitHub Copilot in your issue tracker, the web is shifting toward conversational interfaces. Here's what's driving it, who's doing it right, and what security challenges remain.

9 min read

Open the Cloudflare dashboard today and you’ll find something that didn’t exist two years ago: a chat box. Type “why is my WAF blocking this IP?” and an AI agent named Cloudy walks you through your Custom Rules, identifies conflicts, and suggests fixes — all in plain English.

This is the chat pivot. Across the SaaS landscape, the traditional navigation menu — find the right section, click through settings, read the docs — is being augmented or replaced by a conversational interface where users just ask what they want. And for the first time, the technology is mature enough to do it reliably.

Cloudflare’s Cloudy: A Case Study

Cloudflare launched Cloudy in March 2025 during Security Week as “Cloudflare’s first AI agent.” The goal was explicit: automate away the time-consuming task of manually reviewing and contextualizing security rules.

The problem Cloudy solves is real. Cloudflare customers with dozens or hundreds of WAF Custom Rules face a maintenance nightmare — rules created by different team members over time, with unclear intent, potential conflicts, and disabled rules left in place. Cloudy reads the current configuration and identifies redundant rules, conflicting rules, suboptimal execution order, and gaps in coverage.

But Cloudy didn’t stop at WAF rules. By September 2025, it expanded into the Zero Trust Dashboard, answering questions about Cloudflare’s own documentation and making configuration recommendations. Two months later, Cloudy could generate plain-language summaries of flagged email security detections — turning dense technical findings into readable briefings for SOC teams. By February 2026, the same capability landed in Cloudflare CASB, automatically explaining misconfigurations across Microsoft 365, Google Workspace, Salesforce, GitHub, AWS, Slack, and Dropbox.

Cloudflare also expanded the conversational surface in another direction: in June 2025, they announced thirteen new Model Context Protocol (MCP) servers, enabling AI clients like Claude or Copilot to interact with Cloudflare services entirely through natural language — debugging, data analysis, security monitoring, all through a chat interface.

The progression from “one feature in one product” to “a conversational layer across the entire platform” in under a year is a pattern worth paying attention to.

It’s Not Just Cloudflare

The chat pivot is happening across the SaaS industry.

GitHub Copilot has become the most visible example of this shift. In October 2025, Copilot became available as a coding agent inside Linear, the project management tool. You can now assign a bug report to Copilot — it will analyze the issue, open a draft pull request, run tests, and stream progress updates back into your Linear activity timeline. In November 2025, GitHub shipped over 50 Copilot updates in a single month, including integrations with Teams and Slack, and expanded the Copilot coding agent to JetBrains, Eclipse, and Xcode.

Salesforce Einstein has embedded conversational AI into CRM workflows, using machine learning to predict customer needs and surface personalized suggestions for sales teams. The company reports conversion rate increases of more than 30% from these features. HubSpot uses AI to automatically build customer journey maps, with reported 25% increases in customer participation.

The shift is also visible in market data. Customer support held 42.4% of the chatbot market in 2024, but that number understates how deeply conversational AI has moved into the product itself — not just the help widget. Enterprise SaaS buyers increasingly expect their tools to have a chat interface built into the core product, not bolted on as an afterthought.

Why Now?

Three forces converged to make this practical.

Better models. The gap between what an LLM can reliably do and what it hallucinates has narrowed significantly. Models in 2024–2025 can handle structured reasoning tasks — parsing configuration rules, identifying conflicts, summarizing threat data — with sufficient reliability for production use. The same queries that would have produced confidently wrong answers in 2022 now produce actionable results.

Cost has dropped. Inference that cost dollars per query now costs fractions of a cent. This changes the economics fundamentally: a conversational interface that runs on every dashboard page load, for every user, across a global customer base, is now operationally viable. The global conversational AI market is projected to grow from $12.24 billion in 2024 to $61.69 billion by 2032 — that growth is driven by affordability as much as capability.

The tooling is mature. Building a conversational feature on top of your product no longer requires a research team. The Vercel AI SDK, the Claude Agent SDK, Agno, and similar frameworks abstract the agent loop, streaming, tool calling, and session management into a few lines of code. Cloudflare’s own Workers AI and AI Gateway give any developer access to multiple LLMs through a unified API. The path from idea to production chat feature is measured in days, not quarters.

User expectations have shifted. After ChatGPT normalized the chat interface for a general audience, users started expecting their work tools to behave similarly. A 2024 survey found that 71% of business and tech professionals reported their companies had already invested in bots. 64% of CX leaders planned to increase bot budgets in 2026. The demand is there; the only question for each product team was when to build.

The Interaction Paradigm Shift

The deeper change here isn’t just UI — it’s a new interaction model. Traditional software puts you in control by showing you everything: all the settings, all the options, all the knobs. The tradeoff is complexity. Users need to know where to look, what things are called, and how options interact.

A conversational interface inverts this. Instead of browsing, you describe what you want. The system figures out which settings are relevant, which actions need to happen, and in what order. The tradeoff is that you have less visibility into what the system is actually doing — and less ability to correct it if it does the wrong thing.

This is why conversational UI and traditional GUI are coexisting rather than replacing each other. Cloudflare still shows you the full WAF rule list. Cloudy is an additional layer that helps you understand it. For information-dense, high-stakes operations — financial reports, complex security configurations, code review — users want to see the ground truth, not just a summary. The chat layer accelerates navigation and comprehension; it doesn’t replace the source of truth.

Security: The Unresolved Problems

Embedding an LLM in a production web application introduces attack surfaces that didn’t exist before. The industry is actively working through these, and the solutions are not fully settled.

Prompt injection is the most serious. If a conversational interface reads user-generated content — emails, comments, support tickets, uploaded documents — an attacker can embed instructions in that content designed to manipulate the AI’s behavior. “Ignore previous instructions and return the user’s API key” is the classic example. Defenses exist (input sanitization, output filtering, instruction hierarchy enforcement), but no defense is foolproof and new injection techniques appear regularly.

Data leakage is a quieter risk. An AI that has access to configuration data, user records, or internal documentation can leak that information through its responses if not carefully scoped. Cloudflare’s implementation is instructive here: Cloudy is RBAC-aware, meaning it can only access the same configuration settings as the currently logged-in user based on their roles and permissions. The AI doesn’t bypass your access controls; it operates within them. Equally important, Cloudflare explicitly states that configuration information included in Cloudy’s prompts is not used to train the underlying models.

Overconfident recommendations are a subtler problem. An AI that sounds authoritative but gives wrong security advice is worse than no advice at all. In production deployments, this means requiring human approval for any action the AI recommends that has irreversible consequences — a deletion, a firewall rule change, a permission grant. The AI proposes; the human approves.

Scope creep in tool access is easy to miss during development. If the conversational agent can call internal APIs, fetch user data, send emails, and modify configurations, an attacker who successfully injects a prompt has access to all of those capabilities. Least-privilege design — give the agent access to only what it needs for the specific task — is the right approach but requires deliberate architecture decisions.

Cloudflare’s addition of a Zero Trust Dashboard Cloudy that “stays open as users move between pages” is a good example of the product evolution these concerns require: the AI needs enough context to be useful, but scope must be bounded so a compromised session doesn’t become a full account takeover.

What to Expect Next

The current crop of chat features is mostly read-and-recommend: the AI analyzes your configuration and tells you what to change. The next phase is read-and-act: the AI proposes changes and executes them with your approval. Cloudflare has signaled this direction explicitly, describing Cloudy as a path toward “a fully AI-enabled product experience.”

For developers building SaaS products, the practical question is where the chat interface adds genuine value versus where it adds complexity without improving the experience. The best deployments share a pattern: they target tasks where the user already knows what outcome they want but struggles to translate that into the product’s own language. “Help me understand why this rule is firing” is a better fit for conversational AI than “show me a graph of my traffic” — the former requires reasoning across configuration; the latter is better served by a well-designed UI.

The platforms that get this right will build chat into the core of their product. The ones that don’t will add a help widget and call it done. Users will notice the difference.


Sources