Leon Sterling | Professor in Software Engineering, Computing and Information Systems at The University of Melbourne

The launch of ChatGPT in November 2022 sparked a flurry of activity and discussion about artificial intelligence – its ethics, its practice, and what it means for humanity. Yet artificial intelligence is not well defined, and people often credit it with things it is not. As an artificial intelligence researcher of over 40 years, I have seen interest wax and wane. But the term is still something I cannot easily define. This blog post discusses definitions of artificial intelligence developed for teaching in the 1990s.


One of the early definitions of artificial intelligence is attributed to Marvin Minsky, one of the pioneers of the field. According to Minsky, Artificial Intelligence is programming computers to solve tasks that would require intelligence for people to solve.


Here are three observations that follow from the definition.

Observation 1: Artificial intelligence necessarily involves computers, and indeed the field is inherently part of computer science, though other fields have rushed to adopt it. Given that the problems the computer is intended to solve can come from any field, artificial intelligence is interdisciplinary.

Observation 2: Artificial intelligence necessarily involves building programs, which suggests there is a practical component to the field.

Observation 3: Human behaviour is the arbiter of what is appropriate for artificial intelligence to study. We believe that intelligence is a useful description of people, and distilling that experience may make computers more useful.



There is of course a potential difficulty in the definition. In order to understand what artificial intelligence is, one needs to understand what intelligence is. Defining intelligence is not an easy task. While a few people might say that intelligence is something objective, measured by IQ tests, that view is largely discredited.


A variant of the definition appeared in one of the standard artificial intelligence textbooks of the 1990s, by Rich and Knight: Artificial Intelligence is the study of how to make computers do things which, at the moment, people do better. That definition has effectively been followed, and it has led to champion game-playing programs, computer art, and the predictive text delivered by ChatGPT.


An unfortunate consequence of the definition is that the domain of artificial intelligence becomes a moving target. Once a problem has been solved, it is no longer in the domain of artificial intelligence. That has happened many times, starting with symbolic integration, familiar from calculus. Symbolic integration was a focus for artificial intelligence in the 1960s, but once an algorithm was developed, it stopped being an artificial intelligence topic.


The definition I adopted for teaching 25 years ago is the following: Artificial intelligence is an interdisciplinary attempt to build machines that mimic intelligent human behaviour. This definition gives artificial intelligence broad scope and makes explicit an interdisciplinary focus and the need for building and testing.



There are two features of this definition that are still valid today. The first is the emphasis on intelligent behaviour rather than intelligence. In my experience, "intelligent" works better as an adjective than "intelligence" does as a noun. An intelligence test measures how different people with similar backgrounds perform on a comparable test. It is comparative, not absolute.


The second feature is the stress on mimicking human behaviour. While some researchers use programs as models of human minds, the truth is that we are wired differently from computers – literally. Digital assistants such as Siri and Alexa have a language model that can generate an appropriate answer, but they do not understand in the way that a close friend or family member does. For most interactions, that difference does not matter. However, it is useful to keep the perspective that machines are not people.

About The Author

Leon Sterling

Professor in Software Engineering, Computing and Information Systems at The University of Melbourne

Professor Leon Sterling is a career academic with a distinguished record. After completing a PhD at the Australian National University, he worked for 15 years at universities in the UK, Israel and the United States.

He is an academic based in Melbourne, Australia with a 40+ year career working across the fields of artificial intelligence, ICT and design.

His current research focuses on incorporating emotions in technology development, where motivational models are an essential element.