Identic AI: Why Personalized Agents Are the Next Layer of Enterprise AI

Enterprise AI doesn't have to repeat the oldest mistake in IT. A three-layer model — shared data, standardized skills, and personalized agents — lets organizations amplify individual judgment instead of standardizing it away. The technology exists today.

Most enterprise AI is built around centralized agent deployments, and that approach creates a tension familiar to anyone in a large organization. Anyone who has watched a multimillion-dollar IT system go underused because it was designed in a conference room three levels removed from the end user knows the pattern well. Centralized deployment misses context, and it misses the meaningful differences between users. Enterprise AI risks repeating the same failure, but it doesn't need to. Organizations can instead create room for personalized AI within the enterprise.

Don Tapscott and Joseph Bradley call the underlying concept Identic AI and apply it across personal and professional life. But I want to make a particular argument for its use at work, especially since the technology to make it work already exists.

What Identic AI Means

In their recent book You to the Power of Two: Redefining Human Potential in the Age of Identic AI (BenBella, December 2025), Tapscott and Bradley lay out a taxonomy:

  1. Generative AI produces content on demand. You prompt it, it responds.
  2. Agentic AI executes tasks. You give it instructions, it carries them out.
  3. Identic AI operates as an extension of you. It understands your judgment, your priorities, your working patterns, and acts on your behalf.

These build on each other. Identic AI, or personalized AI, is a type of agent: a system that learns how you think and applies that understanding to the work. Tapscott built a prototype called "Digital Don," trained on 500 personal documents. It doesn't just answer questions; it reflects his reasoning patterns and analytical preferences.

Tapscott covers this well in a recent HBR podcast. As he put it: "Everything I described is available and it's in use."

The book raises important questions about ownership, autonomy, and risk. If your employer owns the AI agent trained on your thinking patterns, what happens when you leave? Tapscott argues for self-sovereign identity: personal AI agents as individually owned, portable assets (an idea I have also advocated for, as in my piece on building your own, portable knowledge files). I'd recommend the book to anyone thinking about AI strategy at scale.

That said, the idea that hit me most is how Identic AI fits into how knowledge work actually gets done. The value of large organizations comes from their people: their judgment, their creativity, their contextual expertise. An AI architecture should amplify those qualities, not standardize them away. And the tools for doing this already exist.

Three Layers, Not Two

The current enterprise conversation about AI agents tends to focus on one of two dimensions: individual prompts and projects, or organization-developed agents. What personalized or Identic AI points us to is a different, more productive model.

Consider three layers:

Layer 1: Shared data and context. Centralized datastores, knowledge bases, institutional memory. Governed by role-based access controls. Everyone draws from the same authoritative sources, but not everyone sees the same data. This is information infrastructure, and most organizations already have some version of it.

Layer 2: Standardized skills and agents. Enterprise-built capabilities shared across the organization. Approved workflows, compliance checks, reporting tools, analytical frameworks. These ensure consistency where consistency matters: regulatory compliance, quality standards, institutional processes.

Layer 3: Personalized AI. User-controlled agents that interface with Layers 1 and 2, but are customized to individual context, judgment, and working style. The user decides how to apply shared resources to their specific work. The organization sets the boundaries; the individual shapes the approach within them.

This three-layer model addresses two practical problems that enterprises face right now.

The Centralization Trap

The first problem is one of the oldest patterns in IT: separating system design from end users. When an organization builds a centralized AI agent and deploys it uniformly, it makes a series of assumptions about how people work. What questions they ask. What outputs they need. What sequence of steps makes sense.

Those assumptions are often wrong. Not because the designers are incompetent, but because the people closest to the work understand what's actually needed in ways that are difficult to capture in requirements documents. This is the same dynamic that has driven up cost and driven down utility in enterprise software for decades. Centralizing AI agent design without a personalization layer repeats this pattern with a faster, more expensive technology.

The second problem is more fundamental.

The Idiosyncrasy Problem

Some work is genuinely not standardizable. Consider diplomacy. A public diplomacy officer working in Tokyo faces a fundamentally different operating environment than one working in Nairobi or Bogotá. The audiences are different. The media landscapes are different. The political constraints, cultural norms, and institutional relationships are all different. The officer's own expertise, language ability, and professional network shape what approaches are even possible.

A centralized agent can provide shared access to policy guidance, analytical tools, and reporting templates. It should. But even with access to all of the same documents, it is unlikely to produce a well-contextualized approach for a specific engagement in a specific country with a specific set of stakeholders. The strategic nuance, the feel for constraints, and the sense of opportunity are things a centralized tool is unlikely to grasp. That judgment belongs to the person in the field.

This pattern isn't unique to diplomacy. Intelligence analysts, military planners, law enforcement investigators, and policy advisors all do work that is high-judgment and context-dependent. Standardized tools provide the foundation. Personalized AI provides the adaptation layer that makes those tools actually useful for the specific work at hand.

The Frameworks Already Exist

The technology for this three-layer model exists today. Developer tools like Claude Code and Cursor already implement something close to this pattern.

Shared data (Layer 1): MCP (Model Context Protocol) servers connect AI agents to organizational datastores, APIs, and knowledge bases. Access controls determine who sees what. The AI can query enterprise systems directly, within the boundaries the organization defines.
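In practice, a Layer 1 connection can be as simple as an entry in an MCP client's configuration file. A minimal sketch, following the JSON shape Claude Desktop uses for its mcpServers block (the server package name and environment variable are hypothetical placeholders, not a real published server):

```json
{
  "mcpServers": {
    "knowledge-base": {
      "command": "npx",
      "args": ["-y", "@example-org/kb-mcp-server"],
      "env": { "KB_API_TOKEN": "${KB_API_TOKEN}" }
    }
  }
}
```

The organization controls which servers appear in this file and what credentials they carry; that is where the access boundary lives.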

Standardized skills (Layer 2): Shared skill libraries and agent configurations give everyone access to the same enterprise-approved capabilities. An organization can build and distribute skills for common workflows, ensuring quality and consistency where it matters.
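A Layer 2 skill can be equally lightweight. Anthropic's Agent Skills format, for example, packages a capability as a folder containing a SKILL.md file with YAML frontmatter. A sketch of what an enterprise-approved skill might look like (the skill name and instructions are invented for illustration):

```markdown
---
name: meeting-prep
description: Prepare a structured briefing before an external meeting, using only approved knowledge-base sources.
---

# Meeting Prep

1. Pull the attendee list and prior meeting notes from the knowledge base.
2. Summarize open commitments and unresolved questions.
3. Draft three suggested talking points, flagging any compliance-sensitive topics.
```

Because the skill is just a file, the enterprise can version it, review it, and distribute the same approved copy to everyone.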

Personalization (Layer 3): Instruction files (like Claude's CLAUDE.md or Cursor's .cursorrules) let individuals customize how the AI interacts with them. These files adjust tone, priorities, analytical frameworks, and workflow preferences without changing the underlying enterprise systems. The individual shapes the interface to their own working style.
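The personal layer is often nothing more than a short markdown file the individual maintains. A sketch of what such an instruction file might contain (every line here is illustrative, not a prescribed format):

```markdown
# How to work with me

- Lead with the bottom line, then supporting analysis; keep it under one page.
- I prefer structured arguments: claim, evidence, counterargument, assessment.
- For regional analysis, load my country-notes folder before drafting.
- Flag low-confidence claims explicitly rather than smoothing them over.
```

Nothing in this file touches enterprise systems; it only shapes how the agent applies Layers 1 and 2 to this one person's work.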

I've built my own system along these lines. My setup uses MCP connections to my personal CRM, a task management platform, and a knowledge base (Layer 1). I maintain a shared library of reusable skills for common workflows like meeting preparation, research, and structured analysis (Layer 2), which I can share with others. And I use a master instruction file that customizes how the AI works with me: my analytical preferences, my communication style, the context it should load for different types of work (Layer 3). It isn't connected to an enterprise system yet, but it's deliberately architected so that it could be. The personal layer would plug into organizational data and shared skills without requiring a rebuild.

This architecture doesn't require anyone to build a custom platform. The components exist today. What's needed is organizational clarity about how the layers work together and what boundaries apply.

Evolutionary, Not Revolutionary

There is a natural resistance to giving individuals this much control over their AI tools. The concern is reasonable: what if someone misconfigures their agent and causes problems?

Consider what we already accept. Every knowledge worker customizes their computing environment: window layouts, email filters, file organization, notification settings. We don't centralize desktop configuration because the cost of forcing uniformity exceeds the cost of allowing customization. We set security policies at the enterprise level and let individuals arrange their workspace within those boundaries. Personalized AI applies that same principle to a more powerful interface.

The risks are real, and they are different from what we have today. When someone misconfigures their email filters, the blast radius is their own inbox. When someone misconfigures a personalized AI agent that queries enterprise systems and generates analysis, the blast radius can extend to decisions made by others who rely on that output. The failure mode is not always dramatic and visible, like accidentally deleting a shared drive. It can be subtle: an agent that systematically overweights certain data sources, or that applies a framework in contexts where it doesn't fit, producing plausible-looking work that embeds errors before anyone catches them.

There is a deeper concern I've been discussing with several experts. If every analyst has a personalized agent trained on their own reasoning patterns, does the organization lose its ability to converge on shared assessments? Could personalized AI reinforce individual blind spots at scale?

This is a real question, but it isn't a new one. Organizations have always had to balance individual perspective against collective coherence. Groupthink is a well-documented failure mode. So is unchecked individualism. The challenge of finding the right balance between independent analysis and shared frameworks is an analog problem that predates AI by decades. What personalized AI does is surface this tension more clearly, because the tool makes individual reasoning patterns visible in ways they weren't before. That's an opportunity for organizations willing to be deliberate about how their people think, analyze, and work together.

The answer to the risk question is not prohibition. It is the same approach organizations use for any powerful tool: training, monitoring, and graduated access. Start with low-stakes applications. Build literacy before expanding scope. Ensure that personalized agents operate within auditable boundaries. The three-layer model handles this well. Layers 1 and 2 define what the agent can access and what enterprise standards apply. Layer 3 customization operates within those constraints, not outside them.

Getting Started

Here is how to begin.

For enterprises, three things matter immediately:

  1. Data and information access policies. Before you connect AI agents to enterprise systems, define what those agents can access, for whom, and under what conditions. Role-based access controls aren't new, but applying them to AI agent connections requires deliberate design.
  2. MCP servers and connections. The technical infrastructure for connecting AI agents to your data sources exists today. The work is in deciding which systems to expose, how to authenticate, and what monitoring to put in place.
  3. Internal rules of the road. Acceptable use policies for personalized AI need to cover what employees can and cannot customize, how personalized agents interact with enterprise systems, and what review or audit processes apply. These don't need to be exhaustive on day one. Start with clear boundaries and iterate.
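The first item on that list, access policy enforced at the agent connection, can be sketched in a few lines. A minimal, default-deny illustration of a role-based gate that a connector could apply before an agent's query reaches a datastore (the roles, resources, and policy table are all hypothetical):

```python
# Illustrative sketch of Layer 1 governance: a role-based gate an
# MCP-style connector could apply before an agent query reaches a
# datastore. Roles, resources, and the policy table are hypothetical.
from dataclasses import dataclass

# Each role maps to the highest action it may take on a resource.
POLICY = {
    "analyst":    {"knowledge_base": "read", "case_files": "read"},
    "manager":    {"knowledge_base": "read", "case_files": "write"},
    "contractor": {"knowledge_base": "read"},
}

@dataclass
class AgentRequest:
    user_role: str
    resource: str
    action: str  # "read" or "write"

def authorize(req: AgentRequest) -> bool:
    """Allow the request only if the user's role grants the action.

    'write' permission implies 'read'; anything not explicitly
    granted is denied (default-deny, the safer posture for agents).
    """
    granted = POLICY.get(req.user_role, {}).get(req.resource)
    if granted is None:
        return False
    return granted == req.action or (granted == "write" and req.action == "read")

print(authorize(AgentRequest("analyst", "case_files", "read")))     # → True
print(authorize(AgentRequest("contractor", "case_files", "read")))  # → False
```

The point of the sketch is the default-deny posture: the agent connection enforces the same role-based boundaries the organization already applies to human access, rather than inventing a parallel permission system.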

For individuals, build your literacy now and start to build your own personalized context files:

You don't need to wait for the full three-layer architecture to start learning how personalized AI works. The tools are available today, and the learning curve is real but manageable. Experiment within your organization's existing policies. Use approved tools. Start with low-stakes tasks where you can evaluate the output yourself. The goal at this stage is fluency, not production deployment.

If you work in government, Getting Started with Claude Code walks through setting up a personalized AI system from scratch. The good news? Most contextual files are portable from Claude to Gemini and other systems. And if you're working with DoD's AI tools, Getting Started with GenAI.mil covers the basics on what is available and the limitations of GenAI.mil.

When the enterprise conversation catches up, the people who have already built this muscle will be the ones shaping the architecture, not waiting for it.

The Advantage Compounds

Organizations that implement this three-layer model will have people who are meaningfully more capable than those still waiting for IT to build the perfect centralized agent. The capability gap won't come from better technology. It will come from the fact that each person's AI is tuned to how they actually work, connected to the data they need, and equipped with skills that reflect both enterprise standards and individual judgment. That advantage compounds daily, because personalized agents get better with use: customizations accumulate, institutional knowledge builds in the personal layer, and the person's fluency with the tool deepens in ways a freshly deployed centralized system cannot replicate.

Identic AI represents a fundamental shift. But the shift isn't just about individual empowerment, and it isn't just about architecture. It's about building systems that treat the judgment, creativity, and expertise of individual workers as the organization's core asset, and designing technology to make those people better at what they already do well.