Welcome to the Generative World Order
As the race to develop new, more powerful generative AI platforms speeds forward, global leaders are scrambling to define the future before it defines us. The stakes? Everything from economic supremacy to the very fabric of society hangs in the balance.
While the U.S. and China duke it out for AI dominance, the potential economic windfall from widespread adoption could touch companies and continents worldwide. Energy, data access, and cutting-edge model development will determine who reaps the biggest rewards.
In an eye-opening interview with IQ, Conor Grennan—NYU Stern’s Chief AI Architect and New York Times bestselling author—dives deep into AI’s transformative global impact. From Europe’s surprising AI prowess to strategies for non-superpower countries, Mr. Grennan offers a playbook for wielding AI’s power responsibly.
Drawing on his collaborations with heavyweights like OpenAI and NASA, Mr. Grennan advises CEOs worldwide on leveraging AI’s economic benefits while sidestepping ethical landmines. The future isn’t just coming, he says; it’s being shaped as we speak.
IQ: With the U.S. and China leading the AI race, what strategies should companies in other geographic markets adopt to harness AI for boosting their economies and maintaining a competitive edge?
Mr. Grennan: That’s a great question, and while there’s no definitive answer, I’ll share my thoughts. The U.S. and China are indeed at the forefront of AI, but Europe, especially Paris, is making significant strides as well. They have some impressive open-source players, like Mistral. The key players in AI tend to be private companies, which raises questions about who ultimately controls this power.
So far, there’s been a focus on safety, but we haven’t yet faced a major incident that brings AI’s potential dangers into public awareness, the way accidents have shaped perceptions of nuclear power. The analogy isn’t to suggest AI is as dangerous as nuclear technology, but rather to highlight how dramatically public perception can change after significant events.
AI’s potential includes reaching what’s called Artificial General Intelligence (AGI), where AI surpasses human capabilities and potentially becomes sentient. The challenge is that companies like OpenAI and Anthropic have stated that if AI becomes too powerful, it should be regulated by the government. However, historically, it’s rare for developers to voluntarily hand over control of such technologies to the government.
There’s also a tension between fostering innovation and ensuring safety. In China, the government and companies are closely linked, not necessarily in their interests but through regulation. This is different from the U.S. and Europe, where there’s more separation between government and private companies. Europe tends to have more regulations and a culture of safety, while the U.S. often adopts a “let’s try it and see what happens” approach.
To address the question, it’s difficult to predict, but I believe we’ll face significant tensions in the next year or two. AI is already capable of imitating human behavior, video, sound, and more. The critical issue will be determining when and how governments should intervene. There’s a concern that if regulations are too strict, they could hinder innovation relative to less-regulated competitors, whether that’s China or other actors. This balance between innovation and regulation is complex and challenging to navigate.
IQ: How is the rapid advancement of AI technology reshaping global power dynamics—and what implications does this have for international relations between leading AI nations like the U.S. and China?
Mr. Grennan: This is a complex question, but I can share some insights. Right now, certain companies may be more powerful than governments in some respects. Historically, we’ve seen economically powerful companies like Apple and Amazon, but AI companies like OpenAI and Anthropic hold a different kind of power because their technology is less transparent and more advanced.
AI development involves creating capabilities that are not immediately released to the public. Before release, these capabilities undergo rigorous testing, known as “red teaming,” to identify and mitigate potential issues. The relationship between governments and AI companies is challenging because many government officials lack a deep understanding of AI.
Unlike previous eras, where governments had experts in fields like railroads or macroeconomics, today’s AI expertise primarily resides within private companies. This forces governments to rely heavily on these companies to understand and regulate AI advancements.
I’m generally optimistic about AI, but the potential for companies to surpass governments in power is real. This is partly because there’s no clear regulatory framework yet. It’s similar to past uncertainties, like defining and regulating monopolies. AI technology evolves so rapidly that it outpaces the government’s ability to create effective regulations. Companies like OpenAI, Google, and Microsoft are trusted to some extent, but the lack of clear guidelines makes it hard to manage AI’s growth and impact.
This situation creates a “wild west” environment where smaller countries or companies could develop powerful AI models without global oversight. This lack of regulation could lead to unpredictable and potentially dangerous outcomes, making it crucial for international cooperation and comprehensive regulatory frameworks to manage AI’s impact on global power dynamics.
IQ: In what ways can AI be leveraged as a strategic tool in geopolitical maneuvering—and how should nations prepare for potential AI-driven conflicts or global power shifts?
Mr. Grennan: I believe the balance of power will remain somewhat similar to how it is now. Let’s call them the “good guys”—by which I mean North America, Europe, and other major global entities. They maintain a balance of power by preventing threats, such as terrorist attacks, through their strength. Currently, the most powerful AI models are in the hands of these “good guys.”
When companies like OpenAI, Google, Meta, and Anthropic release their powerful AI models, they put significant safety measures in place. The last thing these companies want is for something to go drastically wrong. I often train companies and speak to board-level executives and the C-suite about these issues. My advice is that the AI models you can trust generally have strong safety protocols.
On a micro level, companies need to protect their data and ensure their own security. However, on a macro level, the potential risks of AI are hard to predict. It’s a bit like the Y2K scare—if something major happens, there’s not much we can do except respond and adapt.
We have to rely on large companies to manage these risks because the government can only provide broad regulations. Overly specific regulations could hinder innovation, which would hand bad actors exactly the advantage they need. It’s crucial for governments to be thoughtful about how they regulate AI, managing potential risks effectively without stifling beneficial advancements.
IQ: You mentioned North America and Europe, which are two of the most heavily resourced geographies on the planet. How might AI adoption exacerbate or mitigate economic disparities between developed and developing nations, and what policies should be in place to ensure more equitable growth?
Mr. Grennan: Generally, technology and industrial evolution have widened economic gaps, and AI might do the same. However, great technologies often have the potential to lift all boats. For example, I’ve done work in developing countries like Nepal, and I see immense potential in basic AI tools.
Imagine remote areas without access to doctors being able to use a simple AI tool on their phones. They could take a photo of a rash or eye issue and receive expert advice. While disclaimers advise against using AI as a doctor, it can still provide very reliable assistance. The same goes for education. Previously, organizations like the Peace Corps sent volunteers to teach in remote villages. This was beneficial, but often the educational progress declined once the volunteers left. With AI tutors on phones, education can be more consistent and widespread, significantly helping to lift people out of poverty.
Regarding disparities, the top will continue to advance rapidly, but the bottom will also rise. The gap might widen, but overall progress will still be made. We’re likely to see more “solopreneurs”—individuals who can achieve a lot with minimal resources, thanks to AI. Small companies can now compete with larger ones because AI significantly augments productivity. The accessibility of AI means you don’t need specialized knowledge; you just interact with it like you would with a human.
This democratization of technology can lead to incredible startups emerging worldwide, even without significant capital investment. However, Western countries will still likely advance the fastest. While developing nations will improve, it’s unlikely to change the historical trend of industrial and technological evolution leading to some disparities.
To ensure more equitable growth, policies should focus on increasing access to AI technology in developing countries, investing in digital infrastructure, and providing education and training to use these technologies effectively. This way, we can maximize the potential of AI to benefit everyone, not just the most developed nations.
IQ: What role should international organizations play in the governance of AI technologies? Are there any specific guardrails to ensure an equitable, collaborative approach to AI development?
Mr. Grennan: Yes, guardrails are going to be critical. We need principled guardrails rather than strict, specific limits. Government officials and others should use AI to understand its capabilities fully. This is something I emphasize in my training framework, as it helps people get started with AI.
It’s important to avoid a model of regulation that pauses AI development at certain milestones. This approach is dangerous because those milestones are relative: what looks like a threshold of dangerous capability today can seem unremarkable a year later.
For example, in an interview about GPT-4, Sam Altman mentioned that even experts were amazed by its capabilities and thought it might be approaching AGI. This is similar to how people were astonished by the CGI in “Jurassic Park,” only to realize later how primitive it was.
Therefore, regulation needs to be flexible and based on ongoing observations rather than fixed limits. If we set a specific technology level as a danger point, we might find that it quickly becomes outdated. It’s crucial to keep regulations at a high level and principle-based, ensuring they don’t hinder innovation. This balance will help manage AI’s growth responsibly while encouraging continued technological advancement.
IQ: On that note, it sounds like when you get to a trigger point, it’s already too late. So, for countries with private enterprises developing these technologies, is there a good way to start harmonizing ethical standards for AI adoption before reaching the point of no return?
Mr. Grennan: Yes, I think so. It probably still ties in with regulation. If I were to regulate something, I would require every company to devote a percentage of their resources, say 15%, to safety and alignment. Initially, OpenAI said they would allocate 20% of their resources to safety and alignment, which is a good example. This approach integrates safety into the development process rather than waiting for a crisis to occur.
Anthropic is leading the way in this area. They have a model called Claude, which competes with ChatGPT and emphasizes safety. Researchers who left OpenAI to focus on safety have joined Anthropic, which shows the company’s commitment to this issue. Governments should favor and reward this approach. History shows that innovation can be both safe and appealing to people when approached correctly. We should encourage creative solutions from entrepreneurs rather than impose strict mandates like “everyone has to wear five seat belts.”
For example, safety features in cars, like headrests, were developed because they are both comfortable and safe. We need to encourage companies to invest in alignment, ensuring that AI aligns with human values. Whether it’s hiring more people or other methods, companies must prioritize this investment.
IQ: You’re very much an optimist when it comes to AI. From your personal point of view, what excites you most about what the future holds?
Mr. Grennan: When I see how AI empowers the average worker, it’s incredible. This is what I spend a lot of my time doing. Watching people go from not really understanding AI to using it to improve the quality and speed of their work is phenomenal. But it’s also a tool that can improve their personal lives. For instance, all your IT issues could be resolved by taking a photo of your screen and asking for help. If you need to repair your bike, you can take a photo and get step-by-step instructions. AI can also act as an instant translator, breaking down barriers for people to travel and communicate more effectively.
Another exciting aspect is the potential for AI to provide empathetic companionship. Loneliness is a chronic issue in the United States, leading to depression, suicide, and other mental health problems. For the elderly, severe introverts, and people with disabilities, having an AI companion that they can communicate with could significantly reduce these issues. Some might argue that this could isolate people further, but those individuals are often already isolated. This technology could offer them a huge lift in a safe and healthy way.
Those who dismiss this idea often don’t face these issues or know someone who does, but for those who do, it could be life-changing. That’s what really excites me about AI.