AI Is Powerful — But It Works Best With a Human Brain Switched On

  • Date 05 May 2026
  • Filed under Insights

Written by Gareth Edwards, General Manager | Salesforce

Artificial intelligence has officially moved from shiny new toy to serious business tool. In the enterprise, platforms like Salesforce Agentforce now deploy autonomous AI agents that can reason, plan, and act across real business systems. They qualify leads, resolve cases, and execute processes using customer data, workflows and business rules that already exist inside the organisation.

That’s a very different proposition to public‑facing AI tools trained on the open internet. And it’s an important distinction.

At NRI, we work with organisations that are moving beyond AI experimentation into scaled, enterprise deployment. What we see consistently is this: once AI is grounded in contextualised data and business logic, the risk profile changes dramatically. But the need for disciplined human thinking does not go away — it becomes more important.

The real promise of AI isn’t about replacing humans. It’s about amplifying them. And that amplification cuts both ways. AI can make good thinking faster and more consistent. It can also scale flawed assumptions with impressive efficiency — even inside well‑governed enterprise platforms.

 

Public AI vs Enterprise AI

Public AI tools like ChatGPT or consumer versions of Copilot operate without deep knowledge of your customers, your processes, or your operating constraints. When they’re wrong, they’re often wrong in obvious ways: hallucinated facts, invented references, confident nonsense. Those failures are visible — and usually harmless.

Enterprise AI behaves differently.

Agentforce, for example, is grounded in customer data, permissions, workflows, and business rules through Salesforce’s Einstein Trust Layer. That grounding makes it far less likely to fabricate answers in the way public models sometimes do. It knows what a customer is, which cases are open, what actions are allowed, and how success is defined.

This is a huge advantage. It’s also where the risk subtly shifts.

 

When errors become systemic

Once AI is operating inside real business context, errors are less likely to be random – and more likely to be systematic. If the underlying logic, data, or assumptions are flawed, the AI will execute those flaws faithfully and at scale.

Large language models and agentic systems are, by design, cooperative. They don’t challenge intent unless explicitly designed to do so. They optimise toward completing tasks, following rules and producing outputs that look sensible within the context they’re given.

Think of enterprise AI less like an unpredictable oracle and more like an extremely efficient employee who follows instructions precisely. If the instructions are wrong, incomplete, or poorly framed, they won’t argue. They’ll just execute beautifully.
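
To make that failure mode concrete, here is a minimal sketch in Python. The auto-close rule, the field names, and the agent itself are all hypothetical, not an Agentforce feature; the point is that a flawed assumption (that inactivity means resolution) gets executed faithfully, at scale.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Case:
    case_id: str
    last_customer_reply: date
    waiting_on_us: bool  # idle because the company owes the next reply

def auto_close(cases: list[Case], today: date) -> list[str]:
    """Hypothetical agent rule: close any case idle for 7+ days.

    The flawed assumption (inactivity means resolution) is never
    challenged; it is executed faithfully for every case, including
    ones that are idle because the company owes a reply.
    """
    stale = timedelta(days=7)
    return [c.case_id for c in cases
            if today - c.last_customer_reply >= stale]  # waiting_on_us is never checked

cases = [
    Case("C-1001", date(2026, 4, 20), waiting_on_us=False),  # genuinely resolved
    Case("C-1002", date(2026, 4, 21), waiting_on_us=True),   # idle because of us
]
print(auto_close(cases, today=date(2026, 5, 5)))  # ['C-1001', 'C-1002']
```

The error is not random: every run closes C-1002 for the same wrong reason, which is exactly what makes it systematic rather than occasional.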

 

Why guardrails are necessary — but not sufficient

Salesforce understands this dynamic, which is why Agentforce is built with grounding, permission models, audit trails, and governance controls. In enterprise environments, AI shouldn’t be freelancing with data or inventing business logic. Guardrails matter a lot.

But even the strongest guardrails don’t remove the need for someone to think clearly about what’s being encoded inside them.

Every AI system reflects a set of choices. Someone decides what data is authoritative. Someone defines the logic that determines next best actions. Someone chooses which exceptions matter and which can be ignored. AI doesn’t remove judgement from the system; it operationalises it.

In our work, the organisations getting real value from AI aren’t asking, “What else can this agent automate?” They’re asking harder questions:

  • What assumptions are embedded in our processes?
  • Where do we expect humans to review, override, or challenge AI decisions?
  • How will we detect when the AI is confidently doing the wrong thing — consistently?

These are not technical questions. They are leadership questions.
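
Answering them still requires some operational plumbing, though. As one rough illustration of the third question, here is a minimal sketch of an override-rate monitor. It assumes a hypothetical decision log in which human reviewers can mark agent decisions as overridden; none of the names or thresholds come from Salesforce.

```python
from collections import defaultdict

def override_rates(decision_log: list[dict]) -> dict[str, float]:
    """Share of agent decisions a human reviewer overrode, per decision type.

    Assumes log entries shaped like:
      {"decision_type": "refund_approval", "overridden": True}
    """
    totals: dict[str, int] = defaultdict(int)
    overrides: dict[str, int] = defaultdict(int)
    for entry in decision_log:
        totals[entry["decision_type"]] += 1
        overrides[entry["decision_type"]] += entry["overridden"]
    return {d: overrides[d] / totals[d] for d in totals}

def flag_systematic(rates: dict[str, float], threshold: float = 0.2) -> list[str]:
    # A sustained override rate above the threshold suggests the agent is
    # consistently wrong for that decision type, not occasionally unlucky.
    return [d for d, r in rates.items() if r >= threshold]

log = [
    {"decision_type": "refund_approval", "overridden": True},
    {"decision_type": "refund_approval", "overridden": True},
    {"decision_type": "refund_approval", "overridden": False},
    {"decision_type": "lead_scoring", "overridden": False},
]
print(flag_systematic(override_rates(log)))  # ['refund_approval']
```

A decision type whose override rate stays high is a signal that the AI is wrong in a consistent, structural way, which no amount of per-case review will surface on its own.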

 

The unexpected value of non-technical thinking

This is where certain skills, long considered non‑technical, suddenly become critical. Disciplines like history, law, philosophy, and the social sciences train people to examine sources, test reasoning, and uncover hidden assumptions. A history graduate doesn’t read a document and ask, “Does this sound right?” They ask, “Who produced this, under what conditions, and for what purpose?”

That mindset translates directly to working well with enterprise AI.

Because when AI is grounded in your data and logic, its outputs often sound reasonable. They align with policy. They follow process. They feel correct. And that’s precisely why weak assumptions are more dangerous in enterprise systems than in public tools.

Well‑prepared organisations don’t treat AI outputs as final answers. They treat them as starting points. They stress‑test them. They ask for alternatives. They design escalation paths. They expect humans to intervene – not as a failure mode, but as part of the system.
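
What “humans as part of the system” can look like in practice is an escalation path designed in from the start. Here is a minimal sketch, assuming a hypothetical agent proposal object with a self-reported confidence score; it is illustrative, not a specific Agentforce API.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class Proposal:
    action: str            # e.g. "issue_refund"
    confidence: float      # agent's self-reported confidence, 0.0 to 1.0
    impact: Literal["low", "high"]

def route(p: Proposal, confidence_floor: float = 0.85) -> str:
    """Treat the agent's output as a starting point, not a final answer.

    High-impact or low-confidence proposals go to a human review queue;
    escalation is designed in as part of the system, not a failure mode.
    """
    if p.impact == "high" or p.confidence < confidence_floor:
        return "human_review_queue"
    return "auto_execute"

print(route(Proposal("send_status_update", 0.95, "low")))  # auto_execute
print(route(Proposal("issue_refund", 0.97, "high")))       # human_review_queue
```

The design choice worth noticing is that routing to a human is a first-class outcome of the system, not an exception thrown when something breaks.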

 

Diverse perspectives as a defence

This is also why diversity of thought becomes a practical control, not a cultural aspiration. As AI systems standardise execution, human judgement becomes the main source of variation. Different perspectives catch different risks. Legal teams see exposure. Social scientists notice unintended impacts. Business leaders question incentives. Technologists ensure it all runs reliably.

Remove that diversity, and your AI becomes efficient, compliant — and quietly fragile.

So yes, AI is powerful. Agentforce and similar platforms represent a major step forward precisely because they are grounded in real data and real business logic. But the organisations that succeed won’t be the ones that assume grounding removes the need for thinking. They’ll be the ones that recognise it raises the stakes.

AI doesn’t replace human judgement. It industrialises it. At NRI, the real work now lies in helping organisations design AI that is not just intelligent, but thoughtfully governed and contextually grounded.