Artificial Intelligence (AI) has been on fire ever since OpenAI released ChatGPT almost two years ago. Funded by a never-ending flow of money and powered by huge new data centers, new models are smashing benchmark after benchmark. Meanwhile, AlphaFold has predicted millions of intricate protein structures and AlphaChip is designing the next generation of AI chips. Just when everyone thought we might be hitting diminishing returns on AI investments, OpenAI released its impressive o1-preview reasoning model.
AI is getting scary good at many things, sometimes even better than the best humans in the world.
While some experts predict the technology could spin out of control fast unless we slow down until we truly understand how it works, others are less concerned and want to usher in a new era as soon as possible, one in which humans can kick back and live a life of leisure and luxury supported by universal basic income.
Alignment, the problem of ensuring that AI acts in accordance with human values and goals, may well be one of the most important unsolved tech challenges of our time. However, another problem, the Big Artificial Intelligence Idea Gap, could turn out to be equally consequential: responsible use of highly productive AI requires novel ideas, and humanity isn't nearly as good at generating them as we like to think.
The Idea-Execution Gap
For as long as we have been able to think, a relatively small group of scholars, inventors, visionaries and creatives of all kinds has come up with the big, transformative ideas, such as the printing press, the internet or the mobile phone. There are thousands of these groundbreaking ideas, but not as many as you might expect from several billion intelligent beings.
While we all like to think we're pretty good at original, System 2 thinking, we spend most of our time in execution mode: solving problems within the bounds of one of these ideas, relying on System 1 thinking and leveraging existing playbooks and knowledge, whether that's selling your company's flagship product, negotiating a new supplier contract or training a new AI model.
Most enterprises operate on an idea-to-execution ratio of 1:100 to 1:1,000. That is, statistically speaking, for every person dreaming up new things, there are 100 to 1,000 people whose job it is to execute those ideas. The New York Times produces a handful of publication formats with 6,000 employees. Nestlé owns 2,000 brands and employs 270,000 people. SpaceX developed four launch vehicles with 13,000 employees.
But most experts agree that the majority of these well-defined, well-documented execution tasks can soon be handled by AI. A McKinsey report estimates that half of today's worker tasks could easily be automated by AI. Future progress in AI may allow us to decrease the idea-to-execution ratio from 1:1,000 to 1:10 and eventually down to 1:1. OpenAI calls this level-five intelligence: AI that can perform the work of entire organizations.
Just imagine a world where everyone has the tools to bring their ideas to life: your personal movies and music, your custom-generated car, your one-person manufacturing business bringing in millions of dollars every month. While this may still take some time, SAP is working hard to bring a wide range of advanced AI innovations to our customers as soon as possible, including collaborative AI business agents capable of taking business productivity to a whole new level.
This may be great news for people who know what to do with the technology, but it requires huge amounts of creativity that we aren't used to deploying towards big, meaningful problems today. For most of us, it is scary to sit in front of a blank sheet of paper, or a powerful AI, and decide what to build next.
What Could Possibly Go Wrong?
It's tempting to think we could just ask the AI what to build next. Just one more problem for the AI to solve; what could possibly go wrong? Unfortunately, there is a big catch to this hands-off, AI-led approach.
First, there's a risk that AI will simply reproduce the patterns of the past it was trained on. While such a future might be relatively safe and fun for a while, it would get boring very quickly. The second risk is a lack of diversity. If you ask ChatGPT to tell you a joke ten times over, you will get ten very similar two-line puns. That's like tuning in to a music stream where all the songs sound the same. Ten different people, in contrast, will likely tell you ten very different jokes.
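To make the diversity point concrete, here is a minimal sketch of how one might measure it: sample the same prompt several times and compute how similar the responses are to one another. This is an illustrative assumption, not an established recipe; it presumes the openai Python package and an API key in the environment, uses "gpt-4o-mini" as a placeholder model name, and takes difflib's SequenceMatcher as a deliberately simple similarity metric.

```python
# Minimal sketch: ask a chat model for a joke several times and measure how
# similar the answers are to each other. Assumes the `openai` package is
# installed and OPENAI_API_KEY is set; the model name is a placeholder and
# the similarity metric is deliberately simple.
from difflib import SequenceMatcher
from itertools import combinations

from openai import OpenAI

client = OpenAI()


def sample_jokes(n: int = 10, model: str = "gpt-4o-mini") -> list[str]:
    """Send n independent requests for a joke and collect the answers."""
    jokes = []
    for _ in range(n):
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": "Tell me a joke."}],
        )
        jokes.append((response.choices[0].message.content or "").strip())
    return jokes


def average_pairwise_similarity(texts: list[str]) -> float:
    """Crude repetitiveness proxy: 1.0 = identical answers, 0.0 = no overlap."""
    scores = [SequenceMatcher(None, a, b).ratio() for a, b in combinations(texts, 2)]
    return sum(scores) / len(scores)


if __name__ == "__main__":
    jokes = sample_jokes()
    print(f"Average pairwise similarity: {average_pairwise_similarity(jokes):.2f}")
```

Per the argument above, one would expect repeated model samples to score noticeably higher on this similarity measure than jokes collected from ten different people.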
But suppose we can overcome these limitations. How will the AI know what we want if we don't tell it? What happens when we give up this control can already be seen on social media, where we have allowed advanced machine-learning systems to build virtual worlds for us based on our interactions, without much oversight. These worlds are full of fake news, hate speech, conspiracy theories and dopamine-driven scroll loops.
It's not too hard to picture a super-intelligent AI leading us down the wrong path, taking this dynamic to a whole new level. Instead of recommending the next reel, it could steer entire industries, shape politics and silently nudge us towards dystopian futures we might never want to live in.
How to Stay in Control
The good news is that we don’t have to sit back and passively watch this play out. We can still take action, even though the window of opportunity is getting smaller with each AI update.
AI Alignment Matters, But So Does Vision
Aligning AI with human values is absolutely critical. But alignment is pointless without vision and guidance. Every increase in AI capability requires more capacity to envision what to do with the added productivity, and that requires all of us to develop new ideas for a compelling future for our society. SAP develops Responsible AI with an emphasis on human oversight and agency to enable this human-led approach.
Rewire Education for Creativity
To enable more and more people to develop these ideas, however, we need to emphasize the humanistic aspects of education and professional development alongside technology and science. Today's curricula are overly focused on applying patterns and templates, the very things AI will do for us in the future. We need to build new curricula focused on independence, creativity and deep thinking.
Start Practicing Today
If more than a decade in innovation has taught me anything, it's that ideas don't magically appear out of nowhere and that there is no ready-made process for generating them. We need to start practicing these new thinking skills before it's too late. This is going to be difficult and frustrating at first, but it will be well worth the effort. Our BTP Innovation team is constantly exploring cutting-edge AI technologies together with customers to jointly identify groundbreaking new ideas.
Of course, all this can only be a starting point. AI will change our world in fundamental ways that we don't yet fully understand, but if we steer its incredible productivity gains with a strong vision, we can ultimately build a much better world full of abundance and prosperity for everyone.