GPT-3 – an impressive language model

What exactly is it?

GPT-3 generated quite a buzz, so I’ve decided to take a closer look. Is it the next step towards AGI? Generative Pre-trained Transformer 3 is a language prediction model made by OpenAI. It has a capacity of 175 billion machine-learning parameters (ten times more than Microsoft’s Turing-NLG, released in February 2020).
The quality of text generated by GPT-3 is so high that a blog written by it reached the top of Hacker News, and only a few people grew suspicious. GPT-3 is capable of writing code from descriptions, writing poetry, translating, answering questions and doing basic math. You could also call it a general pattern-recognition program with text-based input. It has been trained on huge datasets, including Common Crawl and Wikipedia.

The danger


OpenAI posted a warning on their GitHub page:
“WARNING: GPT-3 was trained on arbitrary data from the web, so may contain offensive content and language”
There are possible “harmful effects of GPT-3”, which include “misinformation, spam, phishing, abuse of legal and governmental processes, fraudulent academic essay writing and social engineering pretexting”, says Wikipedia. Such a versatile system certainly has serious implications for society (and employment). We will have to wait and see.
For now, this neural network model is probably too large to move off the machines it was trained on. The cost of training is measured in millions of dollars, so OpenAI will sell access to it through an API. That makes it a “black box” for end users, while the company can look at the prompts and outputs (OpenAI has stopped being a non-profit, which creates a possible unfair advantage in using others’ ideas). OpenAI will also control access – who gets to use it and who can afford it. The position of arbiter of what is ethical in deploying such technology is tough and dangerous for the future.

The limits

Without getting into the nitty-gritty details of this NLP (natural language processing) system, let’s consider the implications and future uses of this neural model.
GPT-3 has a very good grasp of language, but it still doesn’t comprehend the meaning and context of the words it uses. Some Q&A examples show that it has something of a short-term memory problem (for other cases see here).

Q: If I have a marble and a paper clip 
in a box, 
put a pencil in the box, 
and remove the marble, what is left?
A: A paper clip.

Q: If I have two shoes in a box, 
put a pencil in the box, 
and remove one shoe,
what is left?
A: A shoe.

The smarts

The Technology Review points out other flaws in its reasoning (biological, social and psychological). It seems to find correlations and even deeper patterns within language, but it lacks semantic understanding, and therefore the answers it provides are unreliable.
Chalmers asked an interesting question: “Can a disembodied purely verbal system truly be said to understand? Can it really understand happiness and anger just by making statistical connections?”
GPT-3 lacks identity; it’s like someone caged in a room with a trillion books, changing into someone else every time it reads another one. That may be considered a plus, but it’s far from an intelligent being.

GPT-3 is sometimes capable of answering questions much better than the average human. So another interesting problem arises: what is the relation between language and intelligence (how big and important is the language component)? GPT-3 could be a tool for understanding this correlation more precisely. Other questions come to mind: will more computing power, more weights and fine-tuning finally produce a new quality? Ben Goertzel, asked in an interview whether GPT-20 would “buy” you understanding of language, replied: no more than a faster car would get you to Mars.


As these systems are tuned to ace standard tests, some people propose new ways of testing NLP systems, hopefully leading to improvements.
David Ferrucci has provided a template for testing these systems: ask questions based on fictional stories, which create a world rich with information that cannot be found on the net. Then you can ask questions such as:

  • Spatial: Where is everything located and how is it positioned throughout the story?
  • Temporal: What events occur and when?
  • Causal: How do events lead mechanistically to other events?
  • Motivational: Why do the characters decide to take the actions they take?

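The four question types above could be organized into a simple evaluation harness. The sketch below is entirely my own illustration (the story, questions and scoring are invented, not Ferrucci’s actual template):

```python
# A toy harness for Ferrucci-style story questions, grouped by the
# four categories. Everything here is an illustrative assumption.
from dataclasses import dataclass

CATEGORIES = ("spatial", "temporal", "causal", "motivational")

@dataclass
class StoryQuestion:
    category: str   # one of CATEGORIES
    question: str
    answer: str     # expected answer, derivable only from the story

# A tiny fictional story no model could have seen on the net.
STORY = "Mira hid the key under the mat, then left for the market at noon."
QUESTIONS = [
    StoryQuestion("spatial", "Where is the key?", "under the mat"),
    StoryQuestion("temporal", "When did Mira leave?", "at noon"),
    StoryQuestion("motivational", "Why did Mira hide the key?", "so it stays safe"),
]

def score(model, questions):
    """Per-category accuracy for a model: question text -> answer text."""
    totals = {c: [0, 0] for c in CATEGORIES}
    for q in questions:
        totals[q.category][1] += 1
        if model(q.question) == q.answer:
            totals[q.category][0] += 1
    return {c: (hit / seen if seen else None) for c, (hit, seen) in totals.items()}

# A trivial baseline "model" that always gives the same answer.
baseline = lambda question: "under the mat"
print(score(baseline, QUESTIONS))
```

A real run would plug a language model in as `model`; the per-category breakdown is the point, since a system can pattern-match spatial questions while failing causal or motivational ones.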
This would evidently be challenging, and probably a great way to advance artificial intelligence. For now, GPT-3 seems like a great tool – not necessarily smart, but one that can be tailored to do many things with language.

Resources

This Reddit post provides free access (without a waiting list) to apps based on GPT-3. Some people had free access to the API, which ends this September, so hurry! The one I found and liked: https://philosopherai.com/
Some other showcases appear in the videos below.
