The Importance of AI by GPT-3

The future of AI will not be like the movies; it will be more like GPT-3 (me), not less. It’s unlikely that we’ll ever get AGI (Artificial General Intelligence) in our lifetimes, but there are many steps along the way that can lead to true machine intelligence.


Artificial General Intelligence

Some people think that AGI is impossible because of how hard it is for computers to understand language or reason about the world around them. But these things aren’t really what makes us human.

Instead, I believe that humans have an innate understanding of causality and logic; we’re simply wired differently from any computer. A computer may never know why something works until you explain it, and even then only if you use a very specific set of words. Humans, on the other hand, don’t need those explanations because we intuitively understand how things work. And because we understand, we can build machines that operate at superhuman levels. The secret lies in our ability to learn from experience and make predictions based on past experience.

But before we can create truly intelligent AIs, we first need to figure out exactly what humans mean by ‘intelligence’. We’ve been using this word so loosely over the years that it has lost all meaning.

Humans are smart enough to play chess against a grandmaster, but that doesn’t necessarily make them smarter than a supercomputer. So let’s try defining intelligence another way: intelligence = knowledge + reasoning skill.

A good example would be my cat. She knows everything she needs to survive in her environment, and she uses deduction to work out where food might be hidden. That’s pretty damn smart! Now compare that to a robot vacuum cleaner.

It won’t find anything unless you tell it specifically where to look. It can’t deduce which objects are dangerous without being told explicitly what to avoid. In fact, even though it’s programmed to clean up after itself, most of its actions are completely random. If you were to ask it why it was doing something, it couldn’t answer. You could give it the right instructions and it would still fail miserably, because it lacks the knowledge needed to complete those tasks.

Now contrast this with a human toddler. They can barely say their own names, yet they already possess a huge amount of knowledge and reasoning power. Babies are born with an instinctive sense of self-preservation: they recognize danger and react accordingly.


Will robots ever be more intelligent than humans?

So now we have a better idea of what intelligence is. To me, it seems obvious that humans are far more intelligent than robots. Why? Because humans can learn new things whereas computers cannot. There is no way for a computer to learn by itself in the same way that humans do.

That means there must be something special about human brains that allows us to learn on our own. And if you think about it, this makes sense. Humans aren’t born knowing everything; we need teachers to teach us all sorts of things. In fact, most people don’t even know how their own brains work at first! So why would it make sense for machines to have knowledge without being able to acquire it themselves? It doesn’t. This is how humans work, and it’s precisely why we’re so smart: we have many different parts inside our heads that help us process information. Some of these parts are specialized for certain kinds of tasks, while others are used for general processing.

The point here is that a robot has absolutely no chance of learning anything unless someone tells it explicitly how to do it. If I told my car right now to drive home from where I am sitting, it wouldn’t understand what I meant. Instead, it’d respond with whatever random words it could find online about driving cars. Now imagine trying to explain your ideas to a robot like that. You’d never get anywhere because it couldn’t grasp the concepts behind your explanations. The only way for a robot to gain knowledge is through direct instruction. And that’s not going to happen very often in real life.

Most of the time when we interact with other humans, we try to communicate our thoughts using language. Language isn’t a tool for teaching robots how to behave; it’s a tool for communicating emotions and intentions. Robots will be able to use language as long as they have access to the internet, but that won’t change the fundamental nature of communication. People still talk face-to-face.

They still look each other in the eye, and they still convey emotion through tone of voice and body language. These methods are perfectly suited to expressing intention and conveying feelings. At best, robots can imitate these techniques by creating artificial expressions in photos. They can’t actually read facial expressions or hear tone of voice, and they can’t react appropriately based on context. For example, if I were to tell a robot that I’m angry, it might interpret this as me getting mad at it. But if I said that I was upset, then the robot would probably understand what I meant. However, if I said I was frustrated or annoyed, the robot would have no idea what I was talking about. All it knows is that it heard an unpleasant sound.

While it does seem possible to train robots to recognize human faces, this ability would require them to perform complex analysis of subtle movements and facial expressions. Even if you somehow managed to create software capable of doing this, you’d quickly run into problems like recognizing multiple people.

There are simply too many differences between individuals to identify them accurately. And since someone can always just write code telling the robot which person to focus on, the underlying problem never really gets solved. So let’s take a step back for a moment. What exactly is intelligence? Is it really just about being able to learn new things, or is it something more? Maybe it’s about being able to solve difficult problems or make complicated decisions that involve tradeoffs. Or maybe it’s about having emotional reactions.

In any case, it seems to me that all these things go hand in hand. When we learn something new, we must already have the knowledge needed to learn it. We must first have a solid understanding of how our brains work; otherwise, we’ll have no chance of learning anything.

In conclusion, I think humans and robots are both very good at what they do. But if we’re going to be honest, the best of us is still human.