
Towards Artificial General Intelligence

Approaches from functional contextualism may be key.

Key points

  • Recent approaches from functional contextualism may be key to solving the problem of artificial general intelligence (AGI).
  • Functional contextualism differs from traditional cognitive approaches.
  • It will be important to consider the ethical implications of AGI in the future.

References to artificial intelligence (AI) beings have appeared since antiquity [1]. Indeed, it was the study of formal reasoning by the philosophers and mathematicians of that era that began this line of inquiry. Much later, it was the study of mathematical logic that led the computer scientist Alan Turing to develop his theory of computation.

Alan Turing is perhaps best known for his role at Bletchley Park in developing the Bombe, the electromechanical machine that decrypted Nazi Enigma messages during World War II. However, it is perhaps the Church-Turing thesis (developed with Alonzo Church), which suggested that digital computers could simulate any process of formal reasoning, that is most influential in the field of AI today.

Such work led to much initial excitement. A workshop held at Dartmouth College in the summer of 1956 brought together many of the most influential computer scientists of the time, such as Marvin Minsky, John McCarthy, Herbert Simon, and Claude Shannon, and led to the founding of artificial intelligence as a field. The attendees were confident that the problem would be solved soon, with Herbert Simon saying, “machines will be capable, within twenty years, of doing any work a man can do.” Marvin Minsky agreed, suggesting, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved" [2]. However, this was not the case; the problem proved far more difficult than they had imagined, and when ideas ran out, enthusiasm faded, bringing about the so-called AI winter of the 1970s.

More recently, however, there has been a revival of interest in AI and its methods, such as the resurgence of deep learning in 2012, when George E. Dahl and colleagues won the "Merck Molecular Activity Challenge" using multi-task deep neural networks to predict the biomolecular target of a drug [3], and the development of deep reinforcement learning (Q-learning) algorithms in 2014 [4].

Some of the most impressive displays of AI today have exploited these new approaches, combining deep learning with reinforcement learning. One example is DeepMind's AlphaGo [5], which beat the leading player Lee Sedol at the game of Go, a feat previously thought impossible, since Go cannot be won by brute-force search given its complexity (roughly 10 followed by 360 zeros possible moves on a 19 x 19 board). There have also been impressive natural language emulations from recent natural language processing (NLP) systems, most notably OpenAI's GPT-3 [6], which uses an extremely large transformer-based neural network (with 175 billion parameters) trained to predict the next word in a sequence, allowing it to generate natural-sounding text.

However, though these approaches have shown some very impressive results, they still do not demonstrate the ability to capture general knowledge in the way anticipated at Dartmouth College in 1956. GPT-3, for example, is trained on huge amounts of text scraped from the internet (from sources such as Twitter and Wikipedia) and simply learns which word is most likely to come next in a sentence, given the corpus of text it has learned from. This is essentially pattern recognition, without any ability to organize semantic knowledge of the concepts it uses when it creates text. In other words, it can emulate text, but it cannot 'think' for itself.
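To make the 'most likely next word' idea concrete, here is a minimal sketch of a word-level bigram predictor in Python. The tiny corpus and function names are purely illustrative assumptions, and GPT-3's transformer architecture is vastly more sophisticated, but the objective is the same in spirit: predict continuations from observed patterns, with no organized semantic knowledge of what the words mean.

```python
from collections import Counter, defaultdict

# Illustrative sketch only (not OpenAI's method): tally which word follows
# each word in a toy corpus, then always emit the most frequent continuation.
corpus = "the cat sat on the mat the cat chased the mouse".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1          # count observed continuations

def predict_next(word):
    """Return the most frequent next word seen in the corpus, if any."""
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else None

print(predict_next("the"))  # 'cat' -- pure pattern matching, no semantics
```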

Alan Turing, in 1950, proposed what became known as the Turing Test, in which a computer uses written communication to try to fool a human interrogator into thinking that it is another person. If it succeeds, the computer is said to have passed the test and to possess human-level general intelligence.

AI has not yet passed this test. One potential problem is that the pattern-recognition approaches these systems employ are overly simplistic and do not capture the rich contextual environmental conditions in which concepts are grounded and understood. Simple semantic logic systems based on cognitive science have also proved a poor substitute for general knowledge and intelligence, because they have no means of capturing the complex relational patterns between concepts and the environment that, the evidence suggests, human learning uses and embeds within relational learning networks [7].

Of course, there is no way a machine can feel and experience thoughts like a human, but it can compute and relate concepts, and it can encode human-like experience (e.g., a snake is dangerous and scary, therefore it must be avoided). So, what might be the solution for developing such relational networks, bringing about a general form of AI, known as artificial general intelligence (AGI), that could 'think' like a human in the way proposed at Dartmouth College in 1956? Is it simply more parameters in a neural network?

Recent work conducted in my own lab with colleagues in Belgium [7] suggests that a new approach based on functional contextualism (which differs from current forms of cognitivism, e.g., accounts of memory, attention, and reasoning through logic) may be the way to progress AI toward the generalized form of AGI. In this approach, the system learns concepts and how they relate to other concepts (through something called relational frames), as well as how contextual cues in the environment influence the functions, meanings, or uses of those concepts. For example, the function of a chair is to sit on in the context of a classroom, but it may be very different in another context, such as an art exhibition, or when the chair is broken. That is, it is the environmental context that defines the function of a concept at any one point in time, not some predefined definition stored in memory.
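As a rough illustration of this point (a simple sketch for intuition, not the formal model described in our paper), one can think of a learned function as being indexed by the concept and the current context together, rather than by the concept alone; the dictionary and function names below are hypothetical.

```python
# Illustrative sketch: the function of a concept is selected by the
# (concept, context) pair, not by a fixed definition of the concept alone.
learned_functions = {
    ("chair", "classroom"): "sit on it",
    ("chair", "art exhibition"): "view and interpret it",
    ("chair", "broken"): "avoid it or repair it",
}

def function_of(concept, context):
    """Return the contextually selected function, if one has been learned."""
    return learned_functions.get((concept, context), "no learned function in this context")

print(function_of("chair", "classroom"))       # sit on it
print(function_of("chair", "art exhibition"))  # view and interpret it
```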

This functional contextual approach allows concepts to be understood through a relational network. For instance, an equivalence class can be established within the network whereby, for example, knife and fork are contained within the equivalence class (or category) of cutlery. The network therefore allows you to understand and form categories of concepts. Other concepts can be related through distinction, opposition, coordination, and so on, allowing you to infinitely increase your understanding of the world around you. This approach suggests that these arbitrary relations (as opposed to relations based solely on physical similarity of size, colour, etc.) are key to the knowledge formation that is central to developing AGI. This crucially differentiates the approach from many of the cognitive-mechanism approaches currently being explored through attention, memory, and so on.
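As a simple illustration of how such derived relations work (again, a sketch for intuition only, not the formal relational frame model in [7]; the names used here are hypothetical), a handful of directly trained "member of cutlery" relations can give rise to further relations that were never directly trained:

```python
from itertools import combinations

# Illustrative sketch of derived relational responding: a few directly
# trained sameness/membership relations yield an equivalence class, from
# which untrained relations (e.g., knife <-> fork) can be derived via
# symmetry and transitivity.
trained = {("knife", "cutlery"), ("fork", "cutlery"), ("spoon", "cutlery")}

# Everything linked to "cutlery" falls into one equivalence class.
equivalence_class = {"cutlery"} | {a for a, b in trained if b == "cutlery"}

# Derived relations: every pairing within the class not directly trained.
derived = [
    pair for pair in combinations(sorted(equivalence_class), 2)
    if pair not in trained and pair[::-1] not in trained
]

print("equivalence class:", sorted(equivalence_class))
print("derived (never directly trained):", derived)
```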

Crucially, this approach of functional contextualism is thought to provide a broader contextual explanation of how concepts emerge and relate to one another, and may provide the best possible means to develop AGI.

Finally, once AGI does emerge (and it eventually will), perhaps the biggest effort should then be to ensure that it is created ethically. This differs from what was imagined in Stanley Kubrick’s ‘2001: A Space Odyssey,’ in which an AI system called HAL attempts to kill the astronauts as they try to shut it off. Perhaps the greatest chance of producing ethical AI lies in an AGI that can derive relations of empathy towards others, and the functional contextual approach allows such relations (called perspective-taking relations) to emerge as one relationally frames oneself (‘I’) in the context of the perspective of the other ('YOU'). This functional contextual approach is therefore also likely to bring about more ethically orientated AI agents. These are both exciting and thought-provoking times.

References

[1] McCorduck, P. (2004). Machines Who Think (2nd ed.). Natick, MA: A. K. Peters, Ltd.

[2] Minsky, M. L. (1967). Computation. Englewood Cliffs, NJ: Prentice-Hall.

[4] Sutton, R. S., & Barto, A. G. (2018). Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press.

[5] Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., et al. (2017). Mastering the game of Go without human knowledge. Nature, 550, 354–359.

[6] Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., et al. (2020). Language models are few-shot learners. arXiv.

[7] Edwards, D. J., McEnteggart, C., & Barnes-Holmes, Y. (2022). A functional contextual account of background knowledge in categorization: Implications for artificial general intelligence and cognitive accounts of general knowledge. Frontiers in Psychology, 13.
