I was recently participating in a panel on the risks and ethics of AI when an audience member asked whether we thought Artificial General Intelligence (AGI) was something we need to fear and, if so, on what time horizon. As I pondered this common question with fresh focus, I realized that something is nearly here that will have many of the same impacts – both good and bad.
Sure, AGI could cause big problems, with movie-style evil AI taking over the world. AGI could also usher in a new era of prosperity. However, it still seems reasonably far off. My epiphany was that we could experience almost all the negative and positive outcomes we associate with AGI well before AGI arrives. This blog will explain!
The “Good Enough” Principle
As technology advances, things that were once very expensive, difficult, and/or time-consuming become cheap, easy, and fast. Around 12 to 15 years ago, I started seeing what, at first glance, looked to be irrational technology decisions being made by companies. Those decisions, when examined more closely, were often quite rational!
Consider a company running a benchmark to compare the speed and efficiency of various data platforms for specific tasks. Historically, the company would buy whatever won the benchmark because the need for speed still outstripped the ability of platforms to provide it. Then something odd started happening, especially with smaller companies that didn’t have the highly scaled, sophisticated needs of larger companies.
In some cases, one platform would handily, objectively win a benchmark competition – and the company would acknowledge it. Yet a different platform that was less powerful (but also less expensive) would win the business. Why would the company accept a subpar performer? The reason was that the losing platform still performed well enough to meet the company’s needs. The company was OK with “good enough” at a cheaper price instead of “even better” at a higher price. Technology had evolved to make this tradeoff possible and to make a traditionally irrational decision quite rational.
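To make the tradeoff concrete, here is a minimal, purely illustrative sketch of that decision rule. The platform names, benchmark scores, and prices are all hypothetical; the point is simply that the buyer filters to whatever clears the “good enough” bar and then chooses on price, rather than picking the outright benchmark winner.

```python
# Hypothetical "good enough" selection rule: pick the cheapest platform
# that meets the required performance threshold, not the fastest overall.
# All names and numbers below are illustrative, not real benchmark data.

platforms = [
    {"name": "Platform A", "benchmark_score": 98, "annual_cost": 1_000_000},
    {"name": "Platform B", "benchmark_score": 82, "annual_cost": 350_000},
    {"name": "Platform C", "benchmark_score": 65, "annual_cost": 200_000},
]

GOOD_ENOUGH_SCORE = 75  # the performance level the company actually needs


def pick_platform(candidates, threshold):
    # Keep only platforms that clear the "good enough" bar...
    qualified = [p for p in candidates if p["benchmark_score"] >= threshold]
    # ...then choose the cheapest of them, ignoring the benchmark winner.
    return min(qualified, key=lambda p: p["annual_cost"])


print(pick_platform(platforms, GOOD_ENOUGH_SCORE)["name"])  # -> Platform B
```

Platform A wins the benchmark, but Platform B wins the business – exactly the kind of decision that looks irrational until you account for the threshold.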
Tying The “Good Enough” Principle To AGI
Let’s swing back to the discussion of AGI. While I personally think we’re fairly far off from AGI, I’m not sure that matters in terms of the disruptions we face. Sure, AGI would handily outperform today’s AI models. However, AI doesn’t need to be as good as a human at all things to start having massive impacts.
The latest reasoning models, such as OpenAI’s o1, xAI’s Grok 3, and DeepSeek-R1, have enabled AI to handle an entirely different level of problem solving and logic. Are they AGI? No! Are they quite impressive? Yes! It’s easy to see another few iterations of these models becoming “human level good” at a wide range of tasks.
In the end, the models won’t have to cross the AGI line to start having huge negative and positive impacts. Much like the platforms that crossed the “good enough” line, if AI can handle enough things, with enough speed and enough accuracy, then it will often win the day over the objectively smarter and more advanced human competition. At that point, it will be rational to turn processes over to AI instead of keeping them with humans, and we’ll see the impacts – both positive and negative. That’s Artificial Good Enough Intelligence, or AGEI!
In other words, AI does NOT have to be as capable or as smart as us. It just has to achieve AGEI status and perform “good enough” so that it no longer makes sense to give humans the time to do a task a little bit better!
The Implications Of “Good Enough” AI
I have not been able to stop thinking about AGEI since it entered my mind. Perhaps we’ve been outsmarted by our own assumptions. We feel certain that AGI is a long way off, so we feel secure that we’re safe from the disruption AGI is predicted to bring. However, while we’ve been watching our backs to make sure AGI isn’t creeping up on us, something else has gotten very close to us unnoticed – Artificial Good Enough Intelligence.
I genuinely believe that, for many tasks, we are only quarters to a few years away from AGEI. I’m not sure that governments, companies, or individual people appreciate how fast this is coming – or how to plan for it. What we can be sure of is that once something is good enough, available enough, and cheap enough, it will see widespread adoption.
AGEI adoption may radically change society’s productivity levels and provide immense benefits. Alongside those upsides, however, is a dark underbelly: the risk that humans become irrelevant to many activities, or are even turned upon, Terminator-style, by the same AI we created. I’m not suggesting we should assume a doomsday is coming, but circumstances where a doomsday is possible are rapidly approaching, and we aren’t ready. At the same time, some of the positive disruptions we anticipate could arrive much sooner than we think, and we aren’t ready for that either.
“Good enough” AI could bring us much of what we’ve hoped for and feared about AGI well before AGI exists. If we don’t wake up and start planning, it will be a very painful and sloppy transition.
Originally posted in the Analytics Matters newsletter on LinkedIn