If you can’t differentiate between AGI and RAG, don’t fret! We’re here to assist you.
Artificial intelligence is the current buzz in the tech world, with companies frequently boasting about their advancements in AI. However, the field is laden with jargon, making it challenging to grasp the specifics of each new development.
To help you navigate this complex terrain, we’ve compiled a list of common AI terms and their meanings, explaining why they’re significant.
What is AI?
Artificial intelligence (AI) is a branch of computer science focused on building systems that can perform tasks normally requiring human intelligence. In practice, companies use "AI" to describe both a technology and a product, and the term often doubles as marketing, making it somewhat ambiguous.
For instance, Google has long invested in AI, enhancing its products with AI tools like Gemini, which appear intelligent. AI models, such as OpenAI’s GPT, underlie many AI tools. Meta’s CEO, Mark Zuckerberg, even refers to individual chatbots as AI.
As more companies market AI as the next big innovation, the terminology can become even more confusing. To clarify, here’s an overview of key AI terms:
Basic AI Concepts
Machine Learning: Systems trained on data to make predictions about new information, enabling them to “learn.” It’s a subset of AI crucial to many AI technologies.
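To make "learning from data" concrete, here's a minimal sketch: an ordinary least-squares line fit to invented example data, followed by a prediction on a new input. Real machine learning systems use far richer models, but the pattern is the same: fit to examples, then predict.

```python
# A minimal sketch of machine learning: fit a straight line to example
# data points, then use the learned line to predict a new value.
# The data here is invented purely for illustration.

def fit_line(xs, ys):
    """Learn slope and intercept via ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Training data": hours studied vs. test score (invented numbers).
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 70]

slope, intercept = fit_line(hours, scores)

# "Prediction" on a new, unseen input: 6 hours of study.
predicted = slope * 6 + intercept
```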
Artificial General Intelligence (AGI): AI as intelligent as, or more intelligent than, humans. OpenAI is heavily investing in AGI, which could be immensely powerful yet potentially frightening, reminiscent of movies about superintelligent machines.
Generative AI: AI technology that can create new text, images, code, etc. For example, ChatGPT or Google’s Gemini generate interesting, though sometimes problematic, responses and images. These tools rely on AI models trained on vast datasets.
Hallucinations: Generative AI tools "hallucinate" when they produce confident-sounding responses that contain factual errors or outright nonsense, because they predict plausible text rather than verify facts. There's ongoing debate about whether hallucinations can ever be fully eliminated.
Bias: AI tools can exhibit biases based on their training data. For example, research by Joy Buolamwini and Timnit Gebru showed that facial recognition software had higher error rates in identifying darker-skinned women’s gender.
AI Models
AI Model: A program trained on data to recognize patterns, letting it perform tasks or make decisions without step-by-step instructions.
Large Language Models (LLMs): AI models that process and generate natural language text. Anthropic’s Claude is an example, designed to be a helpful, honest, and harmless assistant.
Diffusion Models: Models that generate images (and in some cases audio and video) from text prompts. During training, noise is gradually added to images and the model learns to reverse the process; at generation time, it starts from noise and "denoises" its way to a new image.
Foundation Models: Trained on extensive data and serve as a foundation for various applications without specific training. Examples include OpenAI’s GPT, Google’s Gemini, Meta’s Llama, and Anthropic’s Claude. These models are often marketed as multimodal, processing text, images, and video.
Frontier Models: A marketing term for the most capable AI models in development or not yet released. Companies warn these models could pose significant risks precisely because of their advanced capabilities.
Training AI Models
AI models are trained by analyzing datasets to recognize patterns and make predictions. For example, LLMs are trained by reading large amounts of text, enabling them to understand queries and generate human-like responses. Training requires significant resources and computing power, often using powerful GPUs.
Parameters: Variables an AI model learns during training, determining how inputs are converted to outputs. Companies may highlight the number of parameters to showcase a model’s complexity.
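As a toy illustration of what a parameter is (with invented numbers, not any real model's training procedure), here is a single weight nudged by gradient descent until it fits the data; frontier models adjust billions of such weights the same basic way.

```python
# A toy illustration of "parameters": a single weight w that training
# nudges toward values that reduce prediction error. Real models have
# billions of such weights; the numbers here are invented.

def train_one_parameter(data, lr=0.01, steps=200):
    w = 0.0  # the parameter, before training
    for _ in range(steps):
        # Gradient of mean squared error for the model y = w * x
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad  # gradient descent update
    return w

# Training examples where the underlying relationship is y = 3x.
data = [(1, 3), (2, 6), (3, 9)]
w = train_one_parameter(data)  # w converges toward 3
```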
Additional AI Terms
Natural Language Processing (NLP): Enables machines to understand human language via machine learning. OpenAI’s ChatGPT and Whisper speech recognition technology are examples.
Inference: The generation of responses or content by a generative AI application, such as ChatGPT providing a cookie recipe.
Tokens: Chunks of text (words, parts of words, or individual characters) that LLMs break input into for analysis and generation. A model's context window is the maximum number of tokens it can consider at once; a larger window lets the model draw on more context, which generally produces better results.
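Here's a deliberately simplified sketch of tokenization and a context window. Real tokenizers (such as byte-pair encoding) split text into subword pieces rather than whole words, but the limit works the same way: once input exceeds the window, something has to be dropped.

```python
# A toy tokenizer: real LLM tokenizers (e.g. byte-pair encoding) split
# text into subword pieces; splitting on spaces is a simplification.

def tokenize(text):
    return text.lower().split()

def fit_context_window(tokens, window=8):
    """Keep only the most recent tokens that fit in the window."""
    return tokens[-window:]

prompt = "Please summarize the long report I pasted above in two sentences"
tokens = tokenize(prompt)  # 11 toy "tokens"

# If the model's context window were 8 tokens, the oldest are dropped.
trimmed = fit_context_window(tokens, window=8)
```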
Neural Network: Computer architecture mimicking the human brain’s neurons to process data. Critical for generative AI systems, neural networks can learn complex patterns without explicit programming.
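A single artificial "neuron," the unit such networks are built from, can be sketched in a few lines. The weights below are hand-picked for illustration; in a real network they are learned from data, which is what lets it pick up complex patterns without explicit programming.

```python
import math

# One artificial neuron: weighted inputs plus a bias, passed through a
# nonlinear activation. Weights here are hand-picked for illustration;
# a real network learns them during training.

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

# Two inputs, weights chosen so the output is high when both inputs fire.
output = neuron([1.0, 1.0], weights=[4.0, 4.0], bias=-6.0)
```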
Transformer: A neural network type using an “attention” mechanism to understand relationships within a sequence. Transformers are powerful and can be trained faster than other neural networks. The T in ChatGPT stands for transformer.
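The attention step itself can be sketched as scaled dot-product attention: each position scores its relevance against every other position, and the scores (after softmax) weight how much information flows between them. The tiny vectors below are invented purely for illustration.

```python
import math

# A bare-bones sketch of scaled dot-product attention, the core
# operation of a transformer. Inputs are tiny hand-picked vectors.

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    d = len(keys[0])
    out = []
    for q in queries:
        # Relevance score of this query against every key.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Output is a weighted average of the value vectors.
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# The query matches the first key, so the first value dominates.
q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[10.0, 0.0], [0.0, 10.0]]
result = attention(q, k, v)
```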
RAG (Retrieval-Augmented Generation): Enhances AI models by incorporating external data to improve the accuracy of generated content. If an AI chatbot doesn’t know an answer, RAG allows it to check external sources and inform its response.
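A minimal sketch of the RAG pattern, assuming a naive word-overlap retriever and invented documents: fetch the most relevant text first, then put it in front of the model. Real systems typically retrieve with vector embeddings and pass the assembled prompt to an actual LLM.

```python
# A minimal RAG sketch: retrieve the most relevant document (here via
# naive word overlap) and prepend it to the prompt so the model can
# ground its answer. Documents and scoring are invented for illustration.

documents = [
    "The company holiday policy allows 20 days of paid leave per year.",
    "Support tickets are answered within 24 hours on business days.",
]

def retrieve(question, docs):
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question):
    context = retrieve(question, documents)
    return f"Context: {context}\nQuestion: {question}\nAnswer:"

prompt = build_prompt("How many days of paid leave do employees get?")
```

A real pipeline would now send this prompt to a language model; the retrieved context is what lets the response cite facts the model wasn't trained on.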
AI Hardware
Nvidia’s H100 Chip: A popular GPU for AI training, highly sought after for its AI workload capabilities. While Nvidia leads the market, other companies are developing their own AI chips, potentially challenging Nvidia’s dominance.
Neural Processing Units (NPUs): Specialized processors in devices that perform AI inference more efficiently than CPUs or GPUs.
TOPS (Trillion Operations Per Second): A metric used by tech vendors to showcase their chips’ AI inference capabilities.
Leading AI Companies and Tools
OpenAI / ChatGPT: ChatGPT’s launch in late 2022 spurred widespread AI interest, prompting other tech companies to enhance their AI offerings.
Microsoft / Copilot: Microsoft integrates Copilot, powered by OpenAI's GPT models, into various products; its multibillion-dollar investment reportedly entitles it to 49% of the profits of OpenAI's for-profit arm.
Google / Gemini: Google is enhancing its products with Gemini, encompassing its AI assistant and various AI models.
Meta / Llama: Meta’s AI efforts center around its open-source Llama model.
Apple / Apple Intelligence: Apple introduces AI-focused features under Apple Intelligence, including ChatGPT integration with Siri.
Anthropic / Claude: Founded by former OpenAI employees, Anthropic develops the Claude AI models, with significant investments from Amazon and Google.
xAI / Grok: Elon Musk’s AI company, developing the Grok LLM, recently raised $6 billion.
Perplexity: Known for its AI-powered search engine, Perplexity faces scrutiny for its data-scraping practices.
Hugging Face: A platform providing a directory of AI models and datasets.