The difference between weak AI and strong AI

Hint: the latter doesn’t exist. Yet.

Amanda Fetter
VoiceHQ

--

We’ve been getting it wrong this whole time: what we’ve been calling Artificial Intelligence (AI) is actually just a weak version of what AI promises to be.

DALL-E rendering of a robot using a laptop.

ChatGPT, DALL-E, Stable Diffusion, Midjourney, and the like are all examples of “narrow” or “weak” AI. Each has been trained to perform a specific task that would typically require human intelligence, from understanding human language to generating images from text prompts.

So what is holding these programs back from being considered “strong AI”?

To get to the bottom of this question, we first need to understand what “strong AI” actually means. Strong AI, also known as artificial general intelligence, refers to a machine that can perform any intellectual task a human can, without human help. That includes tasks requiring common sense, creativity, or consciousness. This type of artificial intelligence simply does not exist, or at least hasn’t been shared with the public.

Even though DALL-E, Stable Diffusion, and Midjourney can generate artwork, they can’t do so without creative direction from a human being. Additionally, they get their knowledge from datasets of images and captions compiled by human beings. These ‘weak AI’ models rely on human directives every step of the way. They aren’t able to independently perceive the world around them and come to their own conclusions.

Strong AI would be able to learn and understand any intellectual task a human can. It would be able to reason and learn from experience, perceive the world, and be fully conscious on its own. Such a machine does not exist yet, but creating one is the goal of many AI researchers. A scary prospect, or an exciting one? That’s an article for another day.

One limitation that both weak and strong AI will forever fall victim to: human bias.

All AI models have to be trained on a dataset. All datasets are made and compiled by human beings. Human beings are full of biases that are fed to them from the moment they are born; it is a natural part of human understanding and cognitive development. Problematically, biases can be both implicit and explicit, which means we can go through life making decisions influenced by biases we don’t even realize we have. Because it is impossible to know and deconstruct every single bias that we as individual human beings might hold, it is also impossible to create an AI model that is free of bias.

Whether or not we get to a point where “Strong AI” exists, it will always be subject to the limitations of human subjectivity.

This article was written in collaboration with ChatGPT, which provided definitions for ‘strong’ AI and ‘weak’ AI.
