The artificial intelligence that now captures the world’s attention and consumes vast amounts of computing power and electricity is based on a technology called deep learning. In deep learning, linear algebra (especially matrix multiplication) and statistics are used to extract and learn patterns from large data sets during training. Large language models (LLMs) like Google’s Gemini or OpenAI’s GPT have been trained on huge amounts of text, images, and video, and have developed many capabilities, including “emergent” ones that they were not explicitly trained for (with promising implications, but also worrying ones). Today, more specialized, domain-specific versions of such models are being applied to images, music, robotics, genomics, medicine, climate, weather, software coding, and more.
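To make “matrix multiplication” concrete, here is a minimal sketch of a single neural-network layer. The shapes, values, and variable names are illustrative only, not taken from any real model; a trained model’s “knowledge” amounts, in essence, to the numbers stored in weight matrices like the W below.

```python
import numpy as np

# Toy example: all shapes and values are invented for illustration.
rng = np.random.default_rng(0)
x = rng.normal(size=4)        # an input: e.g. a tiny slice of an image or a text embedding
W = rng.normal(size=(3, 4))   # learned weights, adjusted during training to capture patterns
b = np.zeros(3)               # learned biases

# One layer of a deep network: a matrix multiplication followed by a non-linearity.
y = np.maximum(0.0, W @ x + b)  # ReLU activation
print(y)  # this output feeds the next layer; stacking many such layers is what makes it "deep"
```

Training consists of nudging the entries of W and b over many examples until the network’s outputs reflect the patterns in its data; models like Gemini or GPT stack many such layers holding billions of weights.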
Beyond human understanding
Rapid progress in the field has led to predictions that artificial intelligence is “taking over drug development,” that it will “transform every aspect of Hollywood storytelling,” and that it could “change science itself” (all claims this newspaper has made in the past). Artificial intelligence, it is said, will accelerate scientific discovery, automate white-collar jobs, and bring about wondrous innovations not yet imagined. It is expected to increase efficiency and drive economic growth. It may also displace workers and put jobs at risk. And what it does already outstrips human understanding of how it does it.
Researchers are still getting a handle on what artificial intelligence can and cannot do. So far, larger models trained on more data have turned out to be more capable, which leads many to believe that continuing to add data and computing power will keep producing better artificial intelligence. “Scaling laws” have been worked out that describe how model size and the amount of training data interact to improve an LLM’s performance. But what does “better” mean? A model that can answer questions correctly, or one that can come up with creative ideas?
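As an illustration of the form such scaling laws take, one widely cited fit (DeepMind’s “Chinchilla” law, given here as an example rather than as the specific study referred to above) expresses a model’s training loss L in terms of its parameter count N and the number of training tokens D:

\[
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
\]

Here E is the irreducible error and A, B, \(\alpha\), and \(\beta\) are constants fitted from experiments: making the model bigger (larger N) or feeding it more data (larger D) both push the loss down, but with diminishing returns, which is why the two are scaled up together.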
Predicting how well artificial intelligence will fit into existing systems and processes can also be tricky. So far its power is most evident in discrete, well-defined tasks. Give an AI model trained for the purpose an image of a rioting mob, and it can identify faces in the crowd for the authorities. Give an LLM a law exam, and it will do better than the average high-school student. But performance on open-ended tasks is harder to assess.
Today’s large AI models are very good at generating things, from poetry to realistic images, based on patterns in their training data. But they are not very good at deciding which of the things they produce make the most sense or are most appropriate in a given situation, and they are not very good at logic and reasoning. It is unclear whether more data will unlock consistent reasoning, or whether an entirely different type of model will be required. It is likely that, for a long time to come, the limits of artificial intelligence will mean human reasoning is needed to harness its power.
Figuring out these limitations is critical in fields such as healthcare. Used correctly, artificial intelligence can detect cancer earlier, expand services, improve diagnosis, and personalize treatment. According to a meta-analysis published in April in npj Digital Medicine, AI algorithms can outperform human clinicians at such tasks. But the way they are trained can also lead them astray, which demonstrates the value of human oversight.
AI models can, for example, easily entrench human biases, and they can be tripped up by “distribution shift”: if a diagnostic model is trained mainly on images of white skin and is then shown images of black skin, it may make mistakes. With help from an AI model, the share of cases in which doctors correctly diagnosed cancer rose from 81.1% to 86.1%, and the proportion of healthy patients correctly told that they did not have cancer rose as well. Such human-machine partnerships are reckoned to be better than either artificial intelligence or humans working alone.
Robotic approach
Humans may not even be needed to come up with new hypotheses in science. In 2009 Ross King of the University of Cambridge said his ultimate goal was to design a system that could act as an autonomous laboratory, or “robotic scientist.” One early result was Adam, a robot scientist built to generate and test hypotheses about the genetics of yeast. Unlike graduate students and postdocs, Adam does not need breaks to eat or sleep, but this type of AI system has, for now, produced only relatively modest results.
Artificial intelligence technology has been used in science for decades to classify, filter, and analyze data and to make predictions. Researchers on the CETI project, for example, collected large data sets of whale vocalizations and then trained AI models on them to work out which sounds might be meaningful. Or consider AlphaFold, a deep neural network developed by Google DeepMind. Trained on a large library of proteins, it can quickly and accurately predict the three-dimensional shape of a protein, a task that once required days of careful experimentation and measurement by humans. GNoME, another AI system developed by DeepMind, helps in the discovery of new materials with specific chemical properties.
AI can also help make sense of massive data streams that would otherwise overwhelm researchers, whether sifting through results from particle colliders in search of new subatomic particles or keeping up with the scientific literature. No one, however diligent a reader, can digest every scientific paper that might be relevant to their work. So-called literature-based discovery systems can analyze these mountains of text to uncover gaps in research, combine old ideas in novel ways, and even propose new hypotheses. How useful this kind of AI work will prove, though, is hard to judge. Artificial intelligence may be no better than humans at making unexpected deductive leaps; instead, it may simply gravitate toward well-trodden research paths that lead to nothing exciting.
In education, there are concerns that artificial intelligence, and in particular chatbots such as ChatGPT, may actually hinder original thinking. A 2023 study by Chegg, an education company, found that 40% of students around the world had used AI for their schoolwork, mainly for writing. That has led some teachers, professors, and school districts to ban AI chatbots. Many worry that using them interferes with the development of problem-solving and critical-thinking skills, because students are no longer required to work through problems or construct arguments themselves. Other teachers are taking a completely different tack, treating AI as a tool and incorporating it into assignments. Students might, for example, be asked to use ChatGPT to write an article on a topic and then critique what it gets wrong.
Wait, was this story written by a chatbot?
In addition to producing text at the click of a button, today’s generative AI can produce images, audio, and video in seconds. That has the potential to change the landscape of the media industry, from podcasts to video games to advertising. AI-driven tools simplify editing, save time, and lower barriers to entry. But AI-generated content could put some artists, such as illustrators and voice actors, at risk. Over time, it may become possible to create entire movies using AI-powered simulacra of human actors, or fully artificial ones.
Still, AI models cannot create or solve problems on their own, or at least not yet. They are just complex pieces of software with no sentience or autonomy; they rely on human users to invoke them, prompt them, and then apply or discard the results. The revolutionary capabilities of artificial intelligence, for better or worse, still depend on humans and human judgment.
© 2024, The Economist Newspapers Limited. All rights reserved. From The Economist, published with permission. Original content can be found at www.economist.com