Artificial general intelligence (AGI) refers to a hypothetical AI system that can understand, learn, and apply intelligence across any domain at or above human level — essentially, a machine that can think as flexibly as a person. Unlike today's AI, which excels at specific tasks, AGI would handle any intellectual challenge without task-specific training.

Current AI is "narrow": Today's best AI systems are remarkably good at specific things. ChatGPT excels at language tasks. AlphaFold predicts protein structures. Tesla's AI drives cars. But none of them can do what the others do. ChatGPT can't fold proteins, and AlphaFold can't write poetry. Each system is a specialist, not a generalist.

AGI would be different: An AGI could learn to code in the morning, compose a symphony in the afternoon, and develop a scientific theory by evening — transferring knowledge and reasoning skills across completely unrelated domains, just as humans do. It would understand context, form abstract concepts, reason about novel situations, and learn from minimal examples.

Where we stand today: There's significant debate about how close we are. Some researchers at leading AI labs suggest we could see AGI within 5-10 years. Others argue it's decades away or may require fundamental breakthroughs we haven't made yet. Current large language models show surprising generality — they can code, write, analyze, and reason — but many researchers argue they still lack robust understanding, persistent memory, reliable reasoning, and the ability to learn continuously from experience.

Key capabilities AGI would need:

  • Transfer learning across arbitrary domains
  • Common sense reasoning about the physical and social world
  • Ability to learn from very few examples (like humans)
  • Self-directed goal setting and planning
  • Understanding of causation, not just correlation
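The last bullet — causation versus correlation — can be made concrete with a short, self-contained Python sketch. The scenario (a hidden "temperature" confounder driving both ice cream sales and drowning incidents) and all variable names are illustrative assumptions, not from the text: two variables that share a common cause correlate strongly, but intervening on one of them breaks the apparent link, which is the distinction an AGI would need to grasp.

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)

# A hidden confounder (summer temperature) drives both observed variables.
temp = [random.gauss(25, 5) for _ in range(10_000)]
ice_cream = [t + random.gauss(0, 2) for t in temp]   # sales rise with heat
drownings = [t + random.gauss(0, 2) for t in temp]   # swimming rises with heat

# Observational data: the two look tightly linked.
r_obs = pearson(ice_cream, drownings)

# Intervention: force sales at random, independent of temperature.
# If ice cream caused drownings, the correlation would survive; it doesn't.
forced_sales = [random.gauss(25, 5) for _ in range(10_000)]
r_int = pearson(forced_sales, drownings)

print(f"observational correlation: {r_obs:+.2f}")  # high positive
print(f"after intervention:        {r_int:+.2f}")  # near zero
```

A pattern-matching system trained only on the observational data would predict drownings from ice cream sales; a system that models causation would know the intervention changes nothing downstream.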

Why it matters for business: Even if AGI is years away, the trajectory toward more general AI is already disrupting industries. Each generation of AI models becomes more versatile. Planning for increasingly capable AI — in your workforce strategy, competitive positioning, and technology investments — is practical regardless of when true AGI arrives.

The safety question: AGI raises profound safety concerns. A system smarter than humans across all domains could be transformative or dangerous depending on how it's built and controlled. This is why AI alignment research — ensuring AI systems pursue goals aligned with human values — is one of the most important fields in technology today.