The activity builds on recent advances. Last year, the field of generative artificial intelligence — where software creates text or images based on descriptions — became eerily human and entered the mainstream.
Chatbots seemed lifelike and brought worries of misuse. Image-makers churned out high-quality pictures that garnered a flurry of controversy and attention. The war in Ukraine spurred new and problematic uses of artificial intelligence on the battlefield.
The buzz has sparked a huge push to build the next product that captivates the public. In 2022, venture capitalists poured roughly $1.37 billion into generative AI start-ups, almost as much as they invested in the previous five years combined, according to PitchBook data.
Still, computer ethicists said, there’s no indication that AI’s proven racist and sexist tendencies will be solved in 2023 — with profitability probably trumping ethics. Robots may not replace humans on the assembly line or get any closer to being fully human, but it’ll be grueling for people left working alongside them. The courts and regulatory bodies could start establishing guardrails on how AI can be used.
“This is just like a gold rush,” said Russ Altman, associate director of Stanford’s Institute for Human-Centered AI. “We’re going to continue to see things that are really cool and clever, but they won’t be perfectly well-thought-out with respect both to the business model and to the potential long-term damages or impacts on society.”
Here are a few areas of artificial intelligence to watch this year.
The breakthroughs were the result of years of research in generative artificial intelligence and came thanks to advances in math, computing power and new ways of training software.
This year, multiple AI experts said, people will probably see more of these public-facing products come out. AI companies could also move on from mimicking human language through text into speech, trying to build bots that could be marketed as smarter helplines or virtual assistants, they said.
Altman said part of this could be because labs such as OpenAI, which Elon Musk helped found, can benefit from generating buzz that prompts large corporations to license their technology and build in-house AI products for their own customers. “That’s their business plan,” Altman said of labs such as OpenAI.
Still, any attempts to release products widely will run headlong into issues — namely that chatbots are still wrong, racist and sexist at times — and require new training methods.
But Altman said it will take years to move from how chatbots are currently trained, which is by ingesting large troves of text and using patterns to predict which word comes next, to an approach he considers better: teaching bots to discern whether those words are true, using higher-quality data sets from trusted sources. That would curb racist and sexist output and improve accuracy, he said.
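The pattern-based training described above can be illustrated with a deliberately simplified sketch. This is a toy bigram model, not any company's actual system: it counts which word follows which in a training text and predicts the most frequent successor, showing why such a model captures statistical plausibility rather than truth.

```python
from collections import Counter, defaultdict

# Toy training text (a stand-in for the "large troves of text" real systems ingest).
corpus = "the cat sat on the mat the cat ate the food".split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" more often than any other word
```

The model outputs whatever was statistically common in its training data, with no notion of whether the resulting sentence is true, which is the gap Altman says better training would have to close.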
“You have to teach it how to figure out truth and untruth,” Altman said. “And let me just say, that is profound because we know there are humans who don’t do this very well.”
In 2022, the battle between robots and humans reached a turning point. Companies like Amazon and FedEx built warehouse robots that were finally able to pick things up with humanlike finesse, a years-long challenge solved largely thanks to AI vision systems that could see and analyze objects better. (Amazon founder Jeff Bezos owns The Washington Post.)
Several AI experts said companies will try to build on those advances this year, creating vision systems that can better perceive not only static objects but also objects in motion, expanding what robots can do on the factory floor.
Sebastian Scherer, an associate research professor at Carnegie Mellon University’s Robotics Institute, said these advances will do more to push forward single-task robots than machines aimed at doing a variety of tasks, such as the universal or humanoid robots that the likes of Musk often tout as being close to fruition.
“This is maybe the starting point,” Scherer said. “[The] whole process will take five to 10 years.”
The war in Ukraine has spurred uses for AI that drew intrigue and controversy. Proponents said battlefield AI software helped Ukrainian soldiers make real-time decisions in war. Detractors noted the use of controversial, AI-fueled facial recognition technology to enable psychological warfare tactics.
Margarita Konaev, deputy director of analysis at Georgetown University’s Center for Security and Emerging Technology, said this year will bring more use of artificial intelligence software in war, particularly for software that helps soldiers recognize objects and their location. Leaders will probably use it more for decision-making in battlefield operations, equipment maintenance and supply chain management.
She added that these models could actually perform better, because they will have so much data generated from the war in Ukraine last year to help feed and train them.
Artificial intelligence may get some guardrails in 2023.
The European Union is working on creating the world’s first standards for regulating and banning uses of artificial intelligence. The rules could help define standards for the rest of the world, experts said, putting guardrails on how governments can use the software to deliver citizen services and requiring that people be notified when they are interacting with a computer rather than a person.
Domestically, the Copilot tool from Microsoft-owned GitHub, which translates plain-language instructions into functional computer code, is at the center of a lawsuit that could have broader implications for how AI models, such as ChatGPT or DALL-E 2, are trained.
But if other AI-related lawsuits pop up in courts across the country, it could create difficulties in regulation, experts said.
“There’ll be inconsistencies across the country,” Stanford’s Altman said. “It’s going to be very patchwork.”