I'm obsessed with the idea that human progress could be accelerated - if only we realised how to properly combine existing technology.
I don't want to go "Ancient Aliens" here - but even a cursory reading of scientific history will show you where humanity's progress could have been dramatically faster if only knowledge had been more widely shared and recognised. The book "How To Invent Everything" makes a compelling case that the material conditions for scientific development often existed long before they actually came to fruition. Sure, you need economically viable sources of lightweight metal and rubber to invent the bicycle - but the basic design and engineering knowledge was available centuries before it became popular1. That knowledge was simply fragmented and hard to find.
Similarly, the concept of binary mathematics was invented by Thomas Harriot nearly a century before Leibniz independently rediscovered it. Leibniz had no way of knowing about Harriot's work. And, even if the Internet had been around in the 1600s2, the volume of scientific papers would have been more than any one person could have read in a lifetime - even if they were a polyglot.
This is where, I hope, AI will solve all our problems3.
The current crop of "stochastic parrots" are pretty good at analysing a given text and producing a reasonable summary of it. But ask them to compare one paper with another and they fall short. They can produce text that sounds like a comparison - based on having ingested a million student essays - but they're not really analysing anything.
I want you to imagine a Sudoku puzzle. A blank grid with the occasional number thrown in. By understanding the rules of the grid, we can determine what the rest of the numbers should be. Using some rather basic logic, it's possible to reconstruct incredibly complex information from just a few starting points.
Don't believe me? I think the "Miracle Sudoku" illustrates my analogy nicely.
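The Sudoku point is easy to demonstrate in code. Here's a minimal sketch of a backtracking solver: given only a handful of clues and the rules of the grid, it deduces every one of the 81 cells. The puzzle shown is a standard example I've chosen for illustration, not the "Miracle Sudoku" itself.

```python
def valid(grid, r, c, n):
    """True if placing n at (r, c) breaks no row, column, or 3x3 box rule."""
    if n in grid[r]:
        return False
    if any(grid[i][c] == n for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != n for i in range(3) for j in range(3))

def solve(grid):
    """Fill the zeros by depth-first search; returns True once complete."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for n in range(1, 10):
                    if valid(grid, r, c, n):
                        grid[r][c] = n
                        if solve(grid):
                            return True
                        grid[r][c] = 0
                return False
    return True

# 30 clues in, 81 cells out - the rules do the rest.
puzzle = [
    [5, 3, 0, 0, 7, 0, 0, 0, 0],
    [6, 0, 0, 1, 9, 5, 0, 0, 0],
    [0, 9, 8, 0, 0, 0, 0, 6, 0],
    [8, 0, 0, 0, 6, 0, 0, 0, 3],
    [4, 0, 0, 8, 0, 3, 0, 0, 1],
    [7, 0, 0, 0, 2, 0, 0, 0, 6],
    [0, 6, 0, 0, 0, 0, 2, 8, 0],
    [0, 0, 0, 4, 1, 9, 0, 0, 5],
    [0, 0, 0, 0, 8, 0, 0, 7, 9],
]
solve(puzzle)
```

The logic involved is trivially mechanical - which is exactly the point. Nothing clever happens in any single step; the complexity falls out of systematically applying a few constraints.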
Now, imagine that the grid isn't filled with numbers. It is filled with science. Oh, and the grid is a multidimensional representation of every scientific theorem known. And we only know a few of the rules. And they seem contradictory.
What happens if an AI reads every scientific paper, understands how they link together, and can figure out where our knowledge has gaps?
Could a future AI fill in the gaps in our knowledge from first principles?
Could it say "This forgotten paper from 1754 contains the answer to a question first raised in 1986"?
Could it retroactively replace the inaccurate facts it has learned with the truths it has discovered?
Could it structure its own scientific revolution by understanding which paradigms are false and having the supreme breadth of knowledge to shift them?
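In miniature, that gap-finding looks something like computing the transitive closure of a graph: if result A underpins B, and B underpins C, then there's an implied link from A to C - even if nobody has ever written it down. A toy sketch, with entirely invented node names standing in for papers and theorems:

```python
# Toy model: results as nodes, "builds on" relations as edges.
# The node names below are made up for illustration.
known = {
    ("harriot_1604", "binary_arithmetic"),
    ("binary_arithmetic", "boolean_logic"),
    ("boolean_logic", "digital_circuits"),
}

def closure(edges):
    """Repeatedly add implied edges (a->d when a->b and b->d) until stable."""
    result = set(edges)
    while True:
        extra = {(a, d) for (a, b) in result for (c, d) in result if b == c}
        if extra <= result:
            return result
        result |= extra

# Links that follow from what we know, but which nobody has stated directly.
gaps = closure(known) - known
```

Here `("harriot_1604", "digital_circuits")` ends up in `gaps`: a connection that's logically present in the literature but explicitly stated nowhere. Scale that up from three hand-written edges to every paper ever published and you have the shape of the idea - though whether real scientific knowledge is anywhere near this computable is exactly the open question.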
This isn't about "the singularity" - where the raw intelligence of a machine surpasses ours and enters a positive feedback loop - it's more subtle than that.
Right now, languishing in an archive is a paper by some long-forgotten cleric which solved a puzzle of no practical purpose to the experts of the day. It just so happens that paper contains the insight needed to bypass the Shannon Limit. Or there's the undergraduate thesis which received a failing grade - but happens to prove P ≠ NP. Or...
In retrospect, these "discoveries" will seem obvious. We'll curse ourselves for not having taken advantage of them hundreds of years ago.
Perhaps I'm wrong. Perhaps knowledge graphs aren't computable in that sense. And maybe there are no obvious holes in our understanding of the world. And maybe AI will fatally misunderstand "The Endochronic Properties of Resublimated Thiotimoline" and head down a path from which it can never recover.
But my hunch is that some successor to the current crop of LLMs will be able to not just regurgitate what it has read, but point out the "obvious" things we're missing.