
You Don’t Need a Fancy Degree to Understand Modern AI

With everything going on in the world of AI lately, and so many people panicking about it, it’s important to understand the basics.

(Note: I will likely update this post from time to time depending on questions I get.)

We fear what we don’t understand. The best way to overcome that is with a little knowledge.

And no, it doesn’t mean we should willy-nilly accept this (or any) technology into our lives. Like everything, it has its caveats. It has things it’s good at, and things it’s bad at.

A car is a mode of transportation. That doesn’t mean it works well in all situations. In fact, if you use it in the wrong situation, it could get you killed. e.g., try driving it into the ocean to get to another continent…

So too with AI: we have to understand it and its limitations.

Start here, with some common terminology that’s popping up all over the place:

Artificial Intelligence
First, let’s talk about the term “AI.” AI stands for “artificial intelligence.” IMHO, that’s 80% of the problem right there. It’s a misnomer. We gave the domain the name “intelligence” before we even knew how to prove that something is, in fact, intelligent. And what kind of intelligence, anyway? We all know now that human intelligence comes in different forms… so why this one blob of a term to describe anything that happens in computing that reeks of mimicking the human brain? But I digress. I’m not going to change that, but I would if I could. (If I could, I think I’d call it something like ‘computational mimicry.’ Spoiler alert: one of the characters in The Robot Galaxy Series also takes issue with the term ‘artificial intelligence,’ but you have to read through to Book 3 to get there.)

LLM or Large Language Model
This is a type of AI model. To create one, you start with a lot of text. (And when I say a lot, I mean A LOT, like the entirety of Wikipedia a lot.) Then you “train” it with techniques that are part of the “deep learning” suite of methods in AI. Once the model exists, you can do a variety of things with it, like translate text, answer questions, summarize text, and more.
The simplest way to describe the training is that it’s looking for connections and patterns of words and phrases.

“GPT” (Generative Pre-trained Transformer) is one kind of LLM.
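
If you’re curious what using an LLM actually looks like in code, here’s a minimal Python sketch. It uses the free, open-source Hugging Face transformers library and the small GPT-2 model; those are just my picks for illustration, not the only options:

```python
# A minimal sketch of running a small LLM locally.
# Assumes the Hugging Face libraries are installed:
#   pip install transformers torch
from transformers import pipeline

# Load a small, freely available GPT-style model (GPT-2).
generator = pipeline("text-generation", model="gpt2")

# Ask the model to continue a piece of text.
result = generator("Artificial intelligence is", max_new_tokens=20)
print(result[0]["generated_text"])
```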

Generative AI
This is a very broad term. It’s used to describe any system or tool that can create new text, code, images, video, etc. ChatGPT is a generative AI system. Many tools that offer to autocomplete or rephrase your text are also generative AI systems.

Natural Language Processing (NLP)
NLP is one of the subfields of AI in Computer Science that specifically deals with how computer programs process and analyze human language. Even though a lot of people are hearing this term for the first time lately, it’s been around for decades.
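
To give you a taste of what “processing human language” can mean at its most basic, here’s a toy Python example (my own illustration, not any particular library’s method) that counts how often each word appears:

```python
# A toy example of processing human language:
# count how often each word appears in a sentence.
from collections import Counter

text = "the cat sat on the mat because the mat was warm"
words = text.lower().split()  # a very crude "tokenization"

counts = Counter(words)
print(counts.most_common(3))  # [('the', 3), ('mat', 2), ('cat', 1)]
```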

Token
I’ve included this because many articles on LLMs inevitably mention things like “training on X million tokens.” A token is a piece of text. It can be a word, a subword, a character, or even a punctuation mark. One of the first things people learn when they’re getting started with NLP is how to “tokenize” a body of text.
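
Here’s a toy Python sketch of tokenization. Real LLM tokenizers are more sophisticated (they often break words into subwords), but this shows the basic idea:

```python
# A toy tokenizer: split text into words and punctuation marks.
import re

text = "AI isn't magic, it's math!"
tokens = re.findall(r"\w+|[^\w\s]", text)
print(tokens)
# ['AI', 'isn', "'", 't', 'magic', ',', 'it', "'", 's', 'math', '!']
```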

Transformer
Here, we don’t mean the electrical type. In NLP, you’ll see the terms “transformer” and “transformer model” used interchangeably. It’s based on neural networks (explained below), and it’s essentially the algorithm that transforms your input into your output, which is where the name comes from.
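
For the mathematically curious, here’s a toy Python sketch of “attention,” the core computation inside a transformer. The numbers are random and purely for illustration:

```python
# A toy sketch of the "attention" step inside a transformer:
# each token looks at every other token and decides how much
# to "pay attention" to it.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# 3 tokens, each represented by a vector of 4 numbers.
Q = np.random.rand(3, 4)  # queries: what each token is looking for
K = np.random.rand(3, 4)  # keys: what each token offers
V = np.random.rand(3, 4)  # values: the information itself

scores = Q @ K.T / np.sqrt(4)  # how well each query matches each key
weights = softmax(scores)      # turn scores into proportions
output = weights @ V           # blend the values accordingly
print(output.shape)            # (3, 4): one new vector per token
```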

Neural network (NN)
Sometimes you’ll see the term “ANN” which stands for “Artificial Neural Network.” It’s another algorithm type, one that is inspired by biological neural networks in our brains. It’s a way to teach computers to process information, and usually falls under the heading of “machine learning” along with many other ML algorithms.
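
To make that concrete, here’s a single artificial “neuron” in Python. The weights here are made up; in a real network, training is what finds them:

```python
# A single artificial "neuron": multiply each input by a weight,
# add everything up, and squash the result to a value between 0 and 1.
import math

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # the "sigmoid" activation

# Made-up numbers, just for illustration.
print(neuron([0.5, 0.8], [0.4, -0.6], 0.1))  # about 0.455
```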

And for those of you who are true computer novices, here are a few others:

GPU: Graphical Processing Unit
Hopefully, you’ve at least heard of “CPU” (Central Processing Unit). A GPU is conceptually similar, but it’s designed to perform many calculations at the same time, in parallel. The “G” stands for “Graphics,” and GPUs were initially developed to speed up graphics rendering. But the key is that, for the kind of math AI requires, a GPU can process data much faster than a CPU.

  • Why it comes up frequently when we talk about AI: Generative AI systems, including those LLMs we talked about above, are computationally intensive. Many will not run on your typical desktop PC, and require not just a single GPU, but stacks of them! (See the sketch below for the kind of math involved.)
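
Here’s a rough illustration of that kind of bulk math. This toy Python example runs on the CPU, but a big matrix multiplication like this is exactly the sort of operation GPUs are built to accelerate:

```python
# GPUs shine at doing the same math on huge blocks of numbers at once.
# This runs on the CPU, but it's the kind of bulk operation
# (a large matrix multiply) that GPUs speed up dramatically.
import numpy as np

a = np.random.rand(1000, 1000)  # a million numbers
b = np.random.rand(1000, 1000)  # another million
c = a @ b                       # one operation over all of them
print(c.shape)                  # (1000, 1000)
```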

API: Application Programming Interface
Let’s simplify for a second and just call it an “interface,” and mention that an “interface” is exactly what it sounds like: how you use a computer or computer program. Add in the “Application Programming” part, and now we’re talking about a specific way to interact with a computer program. The term usually describes how one computer program interacts with another: the API specifies what data Program A supplies to Program B, and how, and vice versa.

  • Why it comes up frequently when we talk about AI: Many of the web-based tools that are popping up rely on an API to an LLM system, or even to ChatGPT. It’s important to know, because a lot of these tools will be sending your data to another tool via that other tool’s API. (See the sketch below.)
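
Here’s a sketch of what that looks like in code. The URL, key, and field names below are hypothetical, but the shape is typical of the LLM APIs these tools call behind the scenes:

```python
# A sketch of one program calling another program's API.
# The endpoint, key, and field names are hypothetical.
import requests

response = requests.post(
    "https://api.example.com/v1/generate",  # hypothetical endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={"prompt": "Summarize this article...", "max_tokens": 100},
)
print(response.json())  # the other program's reply, typically JSON
```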

You might also want to read the other blog posts I’ve written that relate to AI: https://adeenamignogna.com/tag/ai/
