From GPT-3.5 to GPT-7: The Arc of Hype, Reality, and What Comes Next
With GPT-5 here and whispers of GPT-7 on the horizon, the real question is how much change the next leap will truly bring.
When GPT-3.5 emerged in late 2022, the world was caught off guard. The “chatbot” that could banter, code, and write essays at near-human fluency felt like a magic trick. Social media feeds filled with screenshots of eerily coherent poems, legal arguments, and Python scripts dashed off in seconds. Productivity bloggers predicted a “death of Google Search.” Teachers braced for a plagiarism apocalypse. Startups pivoted mid-sentence to slap “powered by GPT” on their pitch decks.
The hype was intoxicating and, in some cases, overblown. GPT-3.5 had plenty of quirks: it hallucinated facts, it struggled with long reasoning chains, and it could be easily tripped up by carefully crafted nonsense. But it kicked off an arms race.
The Hype Curve: 3.5 → 4 → 5
GPT-4 in 2023 pushed the needle further—more reasoning ability, better safety alignment, multimodality with image inputs. It became less a “toy” and more a reliable cognitive co-pilot for professionals. Law firms quietly ran GPT-4 on case analysis. Engineers used it for code review. Doctors began experimenting with diagnostic support, carefully and under supervision.
GPT-5, freshly released in 2025, feels like a qualitative leap. It’s not just bigger: it’s more aware of context over long conversations (a 256k-token input window), it juggles cross-disciplinary reasoning better, and it integrates seamlessly into workflows. Instead of jumping between search engines, email clients, and design tools, GPT-5 is simply there, a persistent layer that quietly coordinates everything.
Still, with every leap, there’s a gap between what the model can do and what we can responsibly deploy. Regulators are playing catch-up. Society is still figuring out where the human ends and the AI begins. And all this discussion is the basis of our book Building Creative Machines.
Beyond Incrementalism: Projecting GPT-7
If GPT-6 (likely arriving in 2026 or 2027) is a refinement—more accurate, more interpretable, less “hallucinatory”—then GPT-7, possibly around 2027–2028, could be the first model that feels like a specialist-generalist hybrid, something closer to true AGI:
Persistent Personal Memory
Not just a better chatbot, but a lifelong collaborator that remembers years of interaction, with context filters and “memory views” you can audit. Think of it as a personal research assistant who knows your writing style, your preferences, and your historical decisions—and can justify why it suggests what it does.
Embedded Domain Modules
Instead of training one giant model on all data, GPT-7 could house modular expert cores: specialised AI “organs” for medicine, law, engineering, and creative arts that it can invoke and coordinate on demand (a toy routing sketch follows this list). The result: depth and breadth.
Self-Verification Loops
One of the most significant weaknesses of GPT-3.5–5 is the “illusion of truth.” GPT-7 might run internal simulation checks—multiple, independent reasoning passes—before giving you an answer, surfacing not just conclusions but confidence intervals and reasoning trees (see the second sketch after this list).
Multimodal Compositionality
Where GPT-4–5 can describe an image or draft a diagram, GPT-7 could reason across modalities in real time: edit a schematic while narrating trade-offs, compose a soundtrack while reading your novel draft, or simulate a business pitch with generated voices and avatars.
Normative Alignment Layers
Not just safety “filters,” but transparent ethical frameworks you can select or customise: open-source moral compasses that make explicit the trade-offs in how the AI prioritises fairness, privacy, or efficiency.
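To make the “embedded domain modules” idea a little more concrete, here is a minimal, purely illustrative Python sketch: a naive keyword router dispatches a query to stubbed-out expert cores, and a coordinator merges whatever they return. Every name in it (EXPERTS, route_query, the stub experts) is hypothetical; nothing here reflects an actual GPT API, architecture, or training setup.

```python
# Illustrative sketch only: specialised "expert cores" plus a router that
# decides which ones a query should be sent to. Real modules would be
# separately trained models, not hard-coded strings.

from typing import Callable, Dict, List

def medical_expert(query: str) -> str:
    return f"[medicine] cautious clinical take on: {query}"

def legal_expert(query: str) -> str:
    return f"[law] jurisdiction-dependent view on: {query}"

def engineering_expert(query: str) -> str:
    return f"[engineering] trade-off analysis of: {query}"

EXPERTS: Dict[str, Callable[[str], str]] = {
    "medicine": medical_expert,
    "law": legal_expert,
    "engineering": engineering_expert,
}

# Naive keyword routing; a real system would use a learned classifier.
KEYWORDS = {
    "medicine": ["symptom", "diagnosis", "dose"],
    "law": ["contract", "liability", "subpoena"],
    "engineering": ["load", "latency", "tolerance"],
}

def route_query(query: str) -> List[str]:
    """Return the names of expert cores relevant to the query."""
    lowered = query.lower()
    hits = [name for name, words in KEYWORDS.items()
            if any(word in lowered for word in words)]
    return hits or ["engineering"]  # fall back to a default core

def answer(query: str) -> str:
    """Invoke each relevant expert and merge their outputs."""
    return "\n".join(EXPERTS[name](query) for name in route_query(query))

if __name__ == "__main__":
    print(answer("What is the liability if a sensor tolerance fails?"))
```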
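And a toy version of a self-verification loop: several independent reasoning passes over the same question, a majority vote, and the agreement rate reported as a crude confidence score. Again, reasoning_pass and self_verify are invented stand-ins for real model calls, assumed only for illustration; production systems would do something far richer than counting answers.

```python
# Toy self-verification loop: run N independent passes, take the majority
# answer, and report how often the passes agreed as a rough confidence.

import random
from collections import Counter
from typing import Tuple

def reasoning_pass(question: str, seed: int) -> str:
    """Stub for one independent reasoning pass (e.g. one sampled model call)."""
    rng = random.Random(seed)
    # Pretend the model usually answers "42" but occasionally drifts.
    return "42" if rng.random() > 0.2 else "41"

def self_verify(question: str, passes: int = 5) -> Tuple[str, float]:
    """Aggregate multiple passes into a majority answer and agreement score."""
    answers = [reasoning_pass(question, seed) for seed in range(passes)]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / passes

if __name__ == "__main__":
    answer, confidence = self_verify("What is six times seven?")
    print(f"answer={answer}, agreement={confidence:.0%}")
```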
The Reality Check
By 2028 (my wild guess), the conversation won’t just be about “what GPT can do” but “how society reshapes itself to work with GPTs.” We’ll face hard questions: Who owns the intellectual property of an AI that co-authors? How do you prevent economic over-dependence on a single cognitive infrastructure? What happens when your lifelong AI assistant gets subpoenaed?
If GPT-3.5 was the spark, and GPT-5 is the scaffolding, GPT-7 could be the moment the building is inhabited, and we begin to see how living with a thinking, remembering, reasoning machine changes us.
And, just like with GPT-3.5, the hype will be deafening. The difference, hopefully, is that this time we’ll be ready.
P.S.
While the AI community will no doubt celebrate the arrival of GPT-6, its relevance may be more muted than that of its predecessors. Much like the evolutionary “S” curve in technology adoption, there comes a point where gains are less about dazzling new tricks and more about refining what’s already possible. GPT-6 is likely to focus on incremental upgrades: tighter reasoning, reduced hallucination rates, and better interpretability, rather than fundamentally altering how people use AI. In that sense, it may feel more like a stability patch than a cultural moment, a necessary bridge to the more transformative leap we might see in GPT-7.
Of course, this is only my personal view, shaped by my own experience, conversations in the field, and what I have been reading. I share it aware that the reality of technological change often surprises even those of us who follow it closely.