This article is part of our latest Artificial Intelligence special report, which focuses on how the technology continues to evolve and affect our lives.
Steve Jobs once described personal computing as a “bicycle for the mind.”
His idea that computers can be used as “intelligence amplifiers” that offer an important boost for human creativity is now being given an immediate test in the face of the coronavirus.
In March, a group of artificial intelligence research groups and the National Library of Medicine announced that they had organized the world’s scientific research papers about the virus so the documents, more than 44,000 articles, could be explored in new ways using a machine-learning program designed to help scientists see patterns and find relationships to aid research.
“This is a chance for artificial intelligence,” said Oren Etzioni, the chief executive of the Allen Institute for Artificial Intelligence, a nonprofit research laboratory that was founded in 2014 by Paul Allen, the Microsoft co-founder.
“There has long been a dream of using A.I. to help with scientific discovery, and now the question is, can we do that?”
The new advances in software applications that process human language lie at the heart of a long-running debate over whether computer technologies such as artificial intelligence will enhance or even begin to substitute for human creativity.
The programs are in effect artificial intelligence Swiss Army knives that can be repurposed for a host of different practical applications, ranging from writing articles, books and poetry to composing music, language translation and scientific discovery.
In addition to raising questions about whether machines will be able to think creatively, the software has touched off a wave of experimentation and has also raised questions about new challenges to intellectual property laws and concerns about whether they might be misused for spam, disinformation and fraud.
The Allen Institute program, Semantic Scholar, began in 2015. It is an early example of this new class of software that uses machine-learning techniques to extract meaning from and identify connections between scientific papers, helping researchers more quickly gain in-depth understanding.
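The idea of identifying connections between papers can be illustrated with a toy sketch. This is not Semantic Scholar's actual method (which uses far more sophisticated machine-learning models); it simply ranks papers by cosine similarity of bag-of-words vectors built from their abstracts, with made-up paper names and text:

```python
from collections import Counter
from math import sqrt

def vectorize(text):
    """Turn a text into a bag-of-words frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical abstracts, for illustration only.
abstracts = {
    "paper_a": "coronavirus spike protein binds the ACE2 receptor",
    "paper_b": "the spike protein of the coronavirus targets ACE2",
    "paper_c": "deep learning methods for image classification",
}

query = vectorize(abstracts["paper_a"])
ranked = sorted(
    ((other, cosine(query, vectorize(text)))
     for other, text in abstracts.items() if other != "paper_a"),
    key=lambda pair: pair[1], reverse=True,
)
print(ranked[0][0])  # the paper most related to paper_a
```

Real systems replace word counts with learned semantic embeddings, but the ranking-by-similarity structure is the same.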
Since then, there has been a rapid series of advances in language-processing techniques, leading a variety of technology firms and research groups to introduce competing programs known as language models, each more powerful than the last.
What has been in effect an A.I. arms race reached a high point in February, when Microsoft introduced Turing-NLG (natural language generation), named after the British mathematician and computing pioneer Alan Turing. The machine-learning behemoth has 17 billion parameters, or “weights”: numbers that were arrived at as the program was trained on an immense library of human-written text drawn from across the internet.
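What it means for a parameter to be “arrived at” through training can be shown at the smallest possible scale. The sketch below tunes a single weight by gradient descent on invented data following y = 2x; a model like Turing-NLG adjusts 17 billion such numbers at once, but the principle is the same:

```python
# A "parameter" is just a number adjusted during training.
# Here one weight w is tuned so that w * x approximates y
# on tiny example data whose true relation is y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0                      # the model's one parameter, before training
learning_rate = 0.01
for _ in range(500):         # repeated small corrections
    for x, y in data:
        error = w * x - y
        w -= learning_rate * error * x   # nudge w to reduce the error

print(round(w, 2))  # close to 2.0 after training
```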
As a result, striking claims have been made for the capabilities of language models, including the ability to write plausible-sounding sentences and paragraphs, draw and paint, and hold a believable conversation with a human.
“Where we’ve seen the most interesting applications has really been in the creative space,” said Ashley Pilipiszyn, a technical director at OpenAI, an independent research group based in San Francisco that was founded as a nonprofit research organization to develop socially beneficial artificial intelligence-based technology and later established a for-profit corporation.
Early last year, the group announced a language model called GPT-2 (Generative Pre-trained Transformer), but initially did not release it publicly, saying it was concerned about potential misuse in creating disinformation. Near the end of the year, the program was made widely available.
“Everyone has innate creative capabilities,” she said, “and this is a tool that helps push those boundaries even further.”
Hector Postigo, an associate professor at the Klein College of Media and Communication at Temple University, began experimenting with GPT-2 shortly after it was released. His first idea was to train the program to automatically write a simple policy statement on the ethics of A.I. systems.
After “fine-tuning” GPT-2 with a large collection of human-written articles, position papers, and laws collected in 2019 on A.I., big data and algorithms, he seeded the program with a single sentence: “Algorithmic decision-making can pose dangers to human rights.”
The program created a short essay that began, “Decision systems that assume predictability about human behavior can be prone to error. These are the errors of a data-driven society.” It concluded, “Recognizing these issues will ensure that we are able to use the tools that humanity has entrusted to us to address the most pressing rights and security challenges of our time.”
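The fine-tune-then-seed workflow described above can be illustrated with a deliberately tiny stand-in. This is not GPT-2 (which learns from billions of words with a neural network); it is a toy bigram model that "trains" on a few invented sentences by counting which word follows which, then continues a seed word with the most common successor:

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """'Train' a toy model: count which word follows which."""
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(model, seed, length=5):
    """Continue a seed word by repeatedly picking the most common next word."""
    out = [seed]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

# Invented training text, for illustration only.
corpus = ("algorithmic decision making can pose dangers "
          "algorithmic systems can pose risks to human rights")
model = train_bigrams(corpus)
print(generate(model, "algorithmic", length=3))
```

GPT-2 replaces these word counts with a learned probability distribution conditioned on long stretches of context, which is why its continuations read as coherent essays rather than word chains.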
Mr. Postigo said the new generation of tools would transform the way people create as authors.
“We already use autocomplete all the time,” he said. “The cat is already out of the bag.”
Since his first experiment, he has trained GPT-2 to compose classical music and write poetry and rap lyrics.
That raises the question of whether the programs are genuinely creative. And if they can create works of art indistinguishable from human works, will they devalue those created by humans?
A.I. researchers who have worked in the field for decades said that it was important to realize that the programs were simply assistive and that they were not creating artistic works or making other intellectual achievements independently.
The early signs are that the new tools will be quickly embraced. The Semantic Scholar coronavirus webpage was viewed more than 100,000 times in the first three days it was available, Dr. Etzioni said. Researchers at Google Health, Johns Hopkins University, the Mayo Clinic, the University of Notre Dame, Hewlett Packard Labs and IBM Research are using the service, among others.
Jerry Kaplan, an artificial-intelligence researcher who was involved with two of Silicon Valley’s first A.I. companies, Symantec and Teknowledge, during the 1980s, pointed out that the new language-modeling software was actually just a new type of database retrieval technology, rather than an advance toward any kind of “thinking machine.”
“Creativity is still entirely on the human side,” he said. “All this particular tool is doing is making it possible to get insights that would otherwise take years of study.”
Although that may be true, philosophers have begun to wonder whether these new tools will permanently change human creativity.
Brian Smith, a philosopher and a professor of artificial intelligence at the University of Toronto, noted that although students are still taught how to do long division by hand, calculators are now universally used for the task.
Rooms full of human “computers” once did such calculations manually, he said, noting that nobody would want to return to that era.
In the future, however, it is possible that these new tools will begin to take over much of what we consider creative work, such as writing, composing and other artistic ventures.
“What we have to decide is, what is at the heart of our humanity that is worth preserving,” he said.