How Close Are We to AGI?
Exploring Differing Views on Artificial General Intelligence and Its Timelines
Introduction
Artificial General Intelligence (AGI) has emerged as a captivating concept for both the scientific community and the general public. AGI represents the development of highly autonomous systems capable of replicating human-level intelligence across various domains and tasks. Unlike specialized AI systems, AGI aims to exhibit cognitive flexibility and adaptability, enabling it to handle complex problems without predefined limitations. In this article, we explore different perspectives on the timeline for achieving AGI, ranging from optimistic expectations to cautious skepticism, while also delving into the ethical considerations surrounding its development.
Overview of Different Perspectives
Experts hold varying views on how close AGI is. Some anticipate its imminent arrival, fueled by the exponential growth of technology, and envision AGI surpassing human intelligence within the next decade. Conversely, skeptics argue that AGI is still distant, possibly decades away, if attainable at all, and advocate focusing on narrower AI systems that solve specific problems. A more moderate standpoint acknowledges progress toward AGI but hesitates to predict its timeline precisely, while emphasizing that ethical concerns must be addressed regardless of when, or whether, AGI is achieved.
Points of View
AGI will be achieved within the next decade
AGI is a Distant and Uncertain Future
Uncertain AGI Timelines and Ethical Considerations
AGI will be achieved within the next decade
Experts who believe AGI is near foresee it being created by around 2030. They highlight the incremental capture of intelligence already achieved and anticipate even greater advances with comparatively little additional effort. Notably, transformer models such as OpenAI's GPT-4 and Google's Bard handle complex tasks, approaching human-level performance on many of them. As some models now cover multiple modalities, such as text, speech, and vision, they can take on more complex activities. Agent frameworks such as AutoGPT go further: the AI first divides a problem into a set of subtasks and then makes requests to specific models and APIs to carry them out. With the continued growth of available data, the amount of compute used for training, and improvements in model architectures, some experts believe AGI could arrive soon.
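The plan-then-execute pattern behind agent frameworks like AutoGPT can be sketched as follows. This is a minimal illustration, not AutoGPT's actual implementation: `plan` and `execute` are hypothetical stand-ins for what would, in a real agent, be calls to a language model (to decompose the goal) and to task-specific models or APIs (to carry out each subtask).

```python
# Minimal sketch of an agent-style planner/executor loop.
# `plan` and `execute` are hypothetical stand-ins for model/API calls.

def plan(goal: str) -> list[str]:
    # A real agent would ask a language model to break the goal
    # into steps; here we return a fixed decomposition for illustration.
    return [f"research: {goal}", f"draft: {goal}", f"review: {goal}"]

def execute(task: str) -> str:
    # A real agent would route each subtask to a specific model or API.
    return f"done({task})"

def run_agent(goal: str) -> list[str]:
    # The core loop: decompose the goal, then execute each subtask in order.
    return [execute(task) for task in plan(goal)]

results = run_agent("summarize AGI timelines")
print(results)
```

Real frameworks add a feedback step, feeding each subtask's result back into the planner, but the decompose-then-dispatch shape is the same.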
At the same time, some experts argue that current AI models have many limitations that make them poor candidates for AGI. Specifically, transformer models are trained only to predict the next word in a text and cannot reason. They are good at imitating human speech but do not understand it: when a model talks about something, it has no grasp of the underlying concept. Hence, these experts believe that new architectures will be required to achieve AGI. Many researchers are working on such architectures, approaching the problem from different angles. Some try to simulate the workings of the human brain. Others try to build a general multimodal model that first learns to make sense of information and then applies that knowledge to arbitrary tasks. Still others try to create a general evolutionary learning algorithm that can be sent into the real world to learn from its own experience. Finally, some combine several approaches into a hybrid model. It remains an open question which of these methods will prove most promising, and how long that will take.
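The "just predicting the next word" point can be made concrete with a toy model. Below, a bigram lookup table stands in for a trained transformer (a deliberate, extreme simplification): the model continues text purely from observed word-to-word statistics, and nothing in it represents what the words mean.

```python
# Toy next-word predictor: a bigram table stands in for a transformer.
# It continues text statistically, with no notion of meaning.
from collections import defaultdict

corpus = "the cat sat on the mat".split()

# "Training": record which word follows which in the corpus.
next_word = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    next_word[a].append(b)

def generate(start: str, steps: int) -> list[str]:
    out = [start]
    for _ in range(steps):
        candidates = next_word.get(out[-1])
        if not candidates:
            break  # no known continuation
        out.append(candidates[0])  # greedy: take the first seen continuation
    return out

print(generate("the", 3))  # ['the', 'cat', 'sat', 'on']
```

A transformer replaces the lookup table with a learned probability distribution over a huge vocabulary and context, but the generation loop is the same: condition on the text so far, emit the next token, repeat.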
AGI is a Distant and Uncertain Future
Experts who do not believe AGI will arrive any time soon highlight significant challenges on the way there. First, they emphasize the large gap between solving specific narrow problems and possessing general intelligence. We need to distinguish progress in weak AI, built to solve one specific problem, from AGI, which can solve a wide variety of problems. For example, the solutions we currently have for self-driving cars, a weak AI problem, cannot easily be transferred to other areas. Each new area requires training a model on a new, task-specific dataset, so covering the wide range of activities humans can perform would require creating many separate models.
Additionally, AGI requires a level of flexibility, autonomy, and reasoning that exceeds current AI technology. Returning to the self-driving example: the models are trained only for specific situations and break when they encounter something they have not seen before. Unlike humans, they cannot fall back on general experience to solve such problems. As explained earlier, current multitask algorithms cannot reason about or understand the objects and concepts behind the data. The distance between emulating human intelligence and actually reasoning, some experts say, is much greater than we expect. Hence, a truly general-purpose AI system remains elusive.
Another limitation is hardware. Some experts argue that the hardware required to process the amount of data needed to train an AGI far exceeds what we currently have.
Considering the limitations listed above, some experts suggest focusing on addressing the limitations of existing AI systems rather than pursuing AGI directly. As they see significant challenges in solving those problems, they do not expect AGI to arrive any time soon.
Uncertain AGI Timelines and Ethical Considerations
Another school of thought acknowledges the uncertainty surrounding the timeline for achieving AGI, given the limitations of current algorithms and the challenges of developing new ones. While progress is evident, the complexity of AGI and the need for advances across many AI disciplines make accurate predictions difficult; on this view, even approximate timelines cannot be given.
Nonetheless, they emphasize the importance of taking AGI seriously and proactively addressing its ethical implications. Concerns about job displacement, unintended consequences, and the broader impact on society require careful consideration to ensure AGI's safe and responsible development. Some prominent figures, including Andrew Yang, Steve Wozniak, and Elon Musk, signed an open letter calling for a pause on giant AI experiments to ensure that society is prepared for the changes AI brings. Others, including OpenAI CEO Sam Altman, call for AI regulation to ensure the ethical use of the technology and its underlying data. Some also anticipate a job displacement effect and advocate universal basic income to mitigate its negative impact. Furthermore, they foresee an AI arms race between countries and the risk of bad actors gaining access to the technology and using it for harm.
My view, as of now, is not that we are getting close to AGI, but that we are seeing a new platform shift, like the ones the internet and mobile brought before. Many activities can now be reshaped by the new AI, which can cause significant societal changes.
Personally, I think we are still quite far from AGI. I understand that the latest models like ChatGPT looked like a huge leap forward to many people, but we need to acknowledge that they are just good at imitating human speech. As of now, they cannot evolve further on their own.