
AI, Attention, and Why More Compute Is Not Always the Answer

by Matt Wood, Head of Cyber Security Operations

People keep talking as if the next jump in AI is a hardware problem. Faster chips, bigger models, more data. The story goes that, once we have enough compute, the system will hand us cures for disease, perfect policy, and better everything.

It does not work like that. More compute gives you more pattern, more correlation, more speed. It does not give you meaning. Meaning lives in the question you ask and the frame you bring, not in the silicon.

I have lived that shift personally over the last few years. I stopped thinking in terms of time spent and started thinking in terms of attention and bandwidth. The change did not come from a course or a hack. It came from changing how I think. I stopped treating life as isolated stories and started treating it as data and patterns. I stopped asking what something is and started asking what it cannot be, given what I already know. That is the work. There is no shortcut for it.

We have known this in other domains for years. If you do not grasp the concept, the thing can be right in front of you, and your mind still will not land on it. You do not just need eyes; you need a map. It’s the same joke underlying The Hitchhiker’s Guide to the Galaxy. Forty-two is useless if you never understood the question in the first place.

We are now repeating that mistake with AI. From my side of the fence in cyber security, you can see it playing out across sectors. Public incidents, regulator reports, and industry stories all point in the same direction. People ask questions they do not fully understand, to systems whose inner workings they do not understand, and then treat whatever comes back as settled truth. Sometimes that is harmless. Sometimes it is useful by luck. Sometimes it is locally correct inside the model’s world and globally dangerous in the real one.

  • Ask a system to maximize engagement, and it will push people to the edges.
  • Ask it to optimize cost, and it will move risk to parts of the map you are not watching.
  • Ask for a cure, and you can get an answer that looks perfect in simulation but fails on the messy edge cases that never made it into the data.

If all we do is scale the models without upgrading the human side, we will just get faster and more confident mistakes.

Three pillars for safe AI

The real work is upgrading both halves of the loop: how we form the question, how we read the answer, and how we update our view of what can be true and what must not be allowed.

We cannot treat AI as an oracle that hands down truths. It is a pattern engine running inside human boundaries. If we are going to use it at scale, we must do three things at once:

  1. Be precise about what we are asking it to optimize.
  2. Be explicit about the lines it must not cross, however clever the output looks.
  3. Actually understand what any given answer is saying, why it landed there, and what assumptions and gaps sit underneath it.

The dangerous future is multitudes of frightened or greedy people throwing vague questions at systems they do not understand and treating the answer as a mandate. The sane future is building and trusting people who can sit between raw power and human consequence and keep the line. People who treat AI as pattern and possibility—not as a god—and who refuse to outsource responsibility for the question itself.

There is no Tower of Babel coming. There is no chip that will save us from having to think. There will be accelerators like AlphaFold, where a tight question, good data, and heavy compute jump us forward in a specific area. But even then, humans still have to frame the problem, interpret the output, and carry the consequences. If we get that part wrong, bigger models will not make us wiser. They will just let us be wrong at scale.

The hopeful part is that this is not mysterious. The fix is not a miracle chip. It is governance, literacy, and discipline.

Advancing AI through trust, education, and the right people

This is why I am proud of the direction ABBYY is already moving in. We have taken a stance that AI risk management has to be operational, evidenced, and auditable, not something you claim in a slide deck. The work with ForHumanity, including ABBYY’s AI Risk Management Policy aligned to the EU AI Act and the use of an AI risk register, is the right shape of thinking because it forces the right behavior upstream. You have to state what you are optimizing for. You have to draw the lines. You have to keep records that survive scrutiny. Trust stops being a brand word and becomes something you can demonstrate.

The other half is people. If AI is going to sit inside real organizations, then basic AI literacy cannot live only with specialists. More people need enough of a map to spot when the question is wrong, when the incentive is mis-set, when the output is overconfident, and when a “helpful” answer is quietly moving risk somewhere nobody is watching. That is why internal capability building matters, and why practical education, like the AI foundations material being published through ABBYY University, has a role alongside governance.

This is the frame behind The Alignment Trilogy, a collection of articles I’ve written on Medium: not to slow progress, and not to turn AI into compliance theatre, but to keep the human side of the loop strong enough to carry the power we are building. If we as a group get the question right, keep the line, and stay honest about what we do and do not know, then more compute stops being a risk multiplier and becomes what it should have been all along: an accelerator in service of human intent.
