Generative AI: What You Need to Know
A January 2023 assessment of artificial intelligence (AI) investments across all sectors predicts that total spending on AI systems this year will reach nearly $100 billion (USD), up from $37.5 billion in 2019.
Within that prediction lies one of the most promising AI sub-segments to emerge recently: generative artificial intelligence.
More than merely a buzzword among technologists, investors, policymakers and executives worldwide, generative AI describes machine learning systems (such as ChatGPT) that can create new digital images, video, audio, text, code, simulations and much more.
Of particular interest to executives across a variety of sectors is the potential for generative networks to perform many economically useful tasks as well as, or better than, humans can.
In an assessment recently published by the World Economic Forum, authors Benjamin Larsen and Jayant Narayan, co-project leads for AI and Machine Learning at the Forum, write: “Optimists claim that generative AI will aid the creative process of artists and designers, as existing tasks will be augmented by AI systems, speeding up the ideation and, essentially, the creation phase.”
The authors also hint at what could be a stratospheric market within this sub-segment, noting that two generative AI companies, Stability AI and Jasper, recently raised over $200 million combined, and that blue-chip investors such as Sequoia Capital view generative AI as a technology that could “generate trillions of dollars in economic value.”
What are the benefits?
According to Murf.Ai, a highly lauded, early-stage generative AI start-up serving customers in over 100 countries, potential enterprise use cases extend far beyond content creation. They include:
- Reduced Financial and Reputational Risks: Generative AI could quickly detect malicious or suspicious activities using predefined algorithms and rules, thus preventing damage to businesses or individuals.
- Identity Protection: A potential resource for people who prefer not to disclose their identities while working online or remotely; generative AI can create realistic photo and video avatars that conceal a person’s true identity.
- Unbiased Modeling: Generative AI modeling could help machine learning applications comprehend abstract concepts without introducing bias, in both simulations and the real world.
Additionally, Jasper CEO Dave Rogenmoser says generative AI “is no flash in the pan.” “This is here to stay,” Rogenmoser told VentureBeat in an October 2022 interview, just one week after Jasper raised $125 million in funding. “It’s going to get radically better, even in the next six to twelve months, and it’s going to impact every tool out there.”
Given the sizable productivity gains on offer, Rogenmoser advises organizations to start planning now for how they will deploy generative AI applications across their operations.
What are the risks?
At their core, many generative AI systems are built on large language models, which are trained largely on human-written text drawn from across the Internet. Within such a nebulous environment, several issues can arise, such as:
- Difficult to Control: As with any AI application, generative AI models don’t always generate the intended results. Current models can sometimes be unstable, and it can be difficult to control their behavior.
- Security Concerns: Much as deepfake technology can be used maliciously, generative AI could be employed to create fake news or commit fraud, including schemes that target others’ medical or financial information.
- “Pseudo Imagination”: Generative AI algorithms aren’t simple, point-and-click systems to engineer, and they require vast amounts of input data to produce the desired outputs.
One such example, according to the World Economic Forum, is Meta’s Galactica, a model trained on nearly 50 million science articles to summarize academic reports and write scientific code. It was deactivated after three days, once the scientific community found that it produced incorrect results and misconstrued scientific facts.
According to the Forum’s report, “Despite substantial reductions in harmful and untruthful outputs achieved by the use of reinforcement learning from human feedback (RLHF), models can still generate toxic and biased outputs.”
Furthermore, in an interview with Fortune at Web Summit 2022 in Lisbon, Portugal, Noam Chomsky, MIT professor emeritus of linguistics, noted that today’s computer systems need “astronomical amounts of data, yet they still do not understand language at all.” Instead, Chomsky suggested, generative models merely predict the most statistically likely associations based on user inputs.
According to Fortune’s Jeremy Kahn, “[This] is an important reminder for business: software can still be very useful—and make you a lot of money—even if it doesn’t function at all like a human brain would.”
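Chomsky’s characterization can be made concrete with a deliberately tiny sketch. The Python snippet below is a toy illustration, not how production systems work: real large language models use neural networks with billions of parameters rather than word counts, and the miniature corpus here is invented. It “generates” text purely by replaying which word most often follows another:

```python
from collections import Counter, defaultdict

# Toy corpus (invented for illustration); a real model trains on
# billions of documents scraped from the Internet.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (bigram statistics).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word seen in training."""
    return follows[word].most_common(1)[0][0]

# Greedy generation: start from a prompt word and keep appending the
# most probable continuation. No meaning is involved, only frequency.
word = "the"
output = [word]
for _ in range(6):
    word = predict_next(word)
    output.append(word)

print(" ".join(output))  # prints: the cat sat on the cat sat
```

The output looks fluent but is circular: greedy decoding simply loops through the corpus’s most frequent pattern. Scaled up enormously, the same statistical principle underlies Kahn’s observation that such software can be very useful without working anything like a human brain.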
What’s next?
Current trends suggest that generative AI stands to revolutionize how executives and enterprises approach and execute tasks—from the mundane to the extraordinary.
However, as is often the case with AI, a moral grey area exists. Experts are calling for greater centralization and control, with firmly established ethical boundaries, for generative AI use cases.
Growing pains aside, as more companies rush to incorporate elements of generative models, Rogenmoser says the writing is on the wall: “I really think any companies that put their head in the sand are going to miss out.”
“As leaders, we have the responsibility to ensure that AI is designed, developed, and deployed to benefit everyone, not just a few,” said Lauren Woodman, CEO of DataKind, at the World Economic Forum 2023 in Davos. “That’s why we need to ensure that AI is developed in a way that is transparent, explainable, and ethical.”