# AI Terms You Must Know for AWS Certified AI Practitioner (AIF-C01): From “Model” to “LLM” in Plain English

> If AI vocabulary feels like a wall of buzzwords, this guide breaks down the exact terms AIF-C01 expects—so you can recognize them in questions and use them correctly.

- Author: Jamie Wright (Founder at Upcert.io)
- Date: 2026-01-15
- Reading time: 10 min
- Tags: Terminology, AWS, AIF-C01
- Category: Domain 1: Fundamentals of AI and ML

---

# AI Terms You Must Know for AWS Certified AI Practitioner (AIF-C01): From “Model” to “LLM” in Plain English

*If AI vocabulary feels like a wall of buzzwords, this guide breaks down the exact terms AIF-C01 expects—so you can recognize them in questions and use them correctly in real AWS scenarios.*

## Why These AI Terms Matter (For the AIF-C01 Exam and On-the-Job Decisions)

Ever notice how AWS exam questions rarely ask, “Define AI,” and instead ask something like: “Which step is happening here—training or inference?”

That’s not an accident. AIF-C01 is big on vocabulary-in-context. The tricky part isn’t memorizing a glossary—it’s recognizing what a scenario is *actually doing* so you can pick the right answer (and, later, the right AWS service).

For example: “We collected labeled historical data and ran a job overnight to update predictions.” That’s training. “A customer uploads an image and we return labels in 300ms.” That’s inference.

Same deal with “algorithm” vs “model.” An algorithm is the learning method you choose (like the cooking technique). A model is what you get *after* training (the finished dish you can serve).

And then there’s the classic umbrella confusion: AI vs ML vs deep learning. In real life people say “AI” for everything. On the exam, those words can narrow down what’s happening under the hood—and which approach is appropriate.

Why you should care beyond the test: these terms show up in architecture decisions, risk reviews, and even cost conversations.
If you can say, “We’re doing inference on a pretrained model” versus “We need to train a custom model,” you’ll sound like someone who knows what they’re building (because you do).

## Core Definitions in Plain Language: AI vs ML vs Deep Learning (and Where Neural Networks Fit)

AI vocabulary gets way easier when you picture a set of nesting dolls.

**Artificial Intelligence (AI)** is the biggest doll. It means “getting computers to do things that feel smart”—like understanding language, spotting patterns, planning actions, or generating content. It’s the goal, not one single technique.

**Machine Learning (ML)** is a smaller doll inside AI. Instead of hand-writing rules (“if the email contains ‘free money,’ mark spam”), you feed the computer examples and let it learn patterns. Think of it like training a new teammate: you don’t give them a 400-page rulebook—you show them real cases until they start making good calls on their own. A nice plain-English definition from AWS documentation puts it like this: computers learn from examples rather than following strict rules. [Voice conversation prompts - Amazon Nova (ML definition)](https://docs.aws.amazon.com/nova/latest/nova2-userguide/sonic-system-prompts.html)

**Deep learning** is a smaller doll inside ML. Deep learning uses **neural networks** with many layers (the “deep” part) to learn especially complex patterns—like recognizing objects in images, understanding speech, or generating fluent text.

So where do **neural networks** fit? A neural network is a model structure inspired (loosely) by how brains pass signals. It’s made of “neurons” (math functions) connected by “weights” (numbers the training process adjusts). The network starts clueless, makes guesses, gets corrected, and gradually tunes those weights.

A practical way to map this to the exam:

- If a question says “rule-based system” or “expert system,” that can be AI without ML.
- If it says “trained on historical examples,” that’s ML.
- If it hints at image/audio/text generation or “multiple layers,” you’re often in deep learning / neural network territory.

One more helpful mental model: **AI is the destination, ML is a route, and deep learning is a fast highway you take when the problem is messy and pattern-heavy.** Not always required—but when it fits, it’s powerful.

## Model, Algorithm, Training, Inference, and Fit: The Words Behind Every ML Workflow

If AI were a restaurant, most people only see the plated meal. The exam wants you to understand the kitchen.

### Algorithm vs. model (recipe vs. trained result)

An **algorithm** is the learning approach—the “recipe.” Examples (conceptually) include things like decision trees or neural networks.

A **model** is the trained artifact you save and use. It’s the output of applying an algorithm to data. In AWS terms, it’s the thing you deploy to generate predictions.

### Training vs. inference (learning vs. using)

**Training** is when the model learns patterns from data. This is typically heavier: more compute, more time, and lots of iteration.

**Inference** is when you use the trained model to make a prediction on new data. This is the “serve it to customers” phase—often optimized for speed and cost.

A real scenario: you train a churn prediction model monthly using your historical customer data. But you run inference every time a support ticket comes in, because you want a fast “churn risk” score right now.

### Fit (and the overfitting/underfitting trap)

**Fit** is basically: “How well does the model match the real pattern in the data?”

- **Underfitting** is like a model that’s too simple. It misses obvious patterns (low performance even on training data).
- **Overfitting** is like memorizing practice questions instead of learning the concept. It looks great on training data but falls apart on new data.

On AIF-C01, “fit” shows up indirectly.
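To make these workflow words concrete, here’s a minimal sketch in plain Python—a tiny least-squares regression, purely illustrative (not how you’d build a real AWS workload). The least-squares formula is the *algorithm* (the recipe), the returned dictionary is the *model* (the trained artifact), `train_linear` is *training*, and `predict` is *inference*. The lookup-table version at the end shows what overfitting-by-memorization looks like:

```python
def train_linear(xs, ys):
    """'Training': a least-squares algorithm (the recipe) turns
    labeled examples into a model (the artifact you deploy)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    return {"slope": slope, "intercept": my - slope * mx}

def predict(model, x):
    """'Inference': apply the trained model to new input — fast, no learning."""
    return model["slope"] * x + model["intercept"]

# Labeled training data: xs are the features, ys are the labels (here y = 2x + 1).
xs, ys = [1, 2, 3, 4], [3, 5, 7, 9]

model = train_linear(xs, ys)   # the heavy step, done offline (e.g., overnight)
print(predict(model, 5))       # 11.0 — serving a prediction for a brand-new input

# An "overfit" model: memorize the training answers instead of the pattern.
memorized = dict(zip(xs, ys))
print(memorized.get(3))        # 7 — perfect on training data...
print(memorized.get(5))        # None — ...useless on data it has never seen
```

The linear model generalizes because it learned the pattern; the memorizer scores perfectly on its training set and fails everywhere else—exactly the “great in training, falls apart in production” symptom.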
Watch for phrasing like “performs well in training but poorly in production” (overfitting) or “performs poorly everywhere” (underfitting). ### Bonus terms that show up everywhere - **Features**: the input signals you feed the model (age, clicks, temperature, etc.). - **Labels**: the correct answers in supervised learning (“spam” vs “not spam”). If you can keep these workflow words straight, a lot of exam questions suddenly feel… fair. ## LLMs, Foundation Models, and Prompt Engineering (What Generative AI Terms Actually Mean) Generative AI terms can feel mystical—like the model is “thinking.” It’s not. It’s pattern completion at scale, and the vocab is just describing *how you use it*. ### Large Language Model (LLM) An **LLM** is a type of model trained on huge amounts of text so it can generate and transform language. It predicts “what text should come next” so well that it can write emails, summarize documents, answer questions, and draft code. ### Foundation model A **foundation model** is a broad, general-purpose model trained on lots of diverse data (often text, and sometimes images or other modalities). You don’t start from scratch—you start from a strong “base model” and then adapt it. Here’s the exam-relevant idea: foundation models are meant to be reused across many tasks. That’s why AWS questions often frame the decision as “choose the right model and the right way to customize it,” not “train a brand-new model from zero.” ### Prompt engineering (steering without retraining) **Prompt engineering** is how you shape the input to get better outputs—without changing the model’s weights. Think of the model like a super capable intern. If you say, “Write something about security,” you’ll get a generic answer. If you say, “Write a 6-bullet risk assessment for an S3 data lake, in the voice of a cautious auditor, and include mitigations,” you’ll get something you can actually use. 
Practical prompt tricks you’ll see in AIF-C01-style scenarios:

- Give the model a role (“You are a compliance analyst…”).
- Provide context (paste the policy snippet it must follow).
- Specify format (JSON, bullets, table, max length).
- Add examples (a good output template beats vague instructions).

The big gotcha: *prompting is not training.* If the question says “update the model’s behavior permanently,” prompting alone isn’t the right tool.

## Computer Vision and NLP: Recognizing the Task (and the AWS Service That Matches It)

A lot of exam questions become easy once you ask one simple thing: “Is the input mostly images/video, or mostly text?” That one decision often points straight to the right family of services.

### Computer vision (meaning from pixels)

**Computer vision** is the practice of extracting meaning from images or video—identifying objects, scenes, activities, or faces.

On AWS, **Amazon Rekognition** is the go-to example: it’s a computer vision service built on scalable deep learning tech. In exam terms, think “I have images/videos and I want labels, face detection, moderation, or similar visual analysis.” [What is Amazon Rekognition?](https://docs.aws.amazon.com/rekognition/latest/dg/what-is.html)

Real-world scenario: you’re building a photo upload feature and want to automatically tag “dog,” “beach,” or “car,” or detect whether a face is present before letting someone set a profile picture.

### Natural Language Processing (NLP) (meaning from words)

**NLP** is the practice of extracting meaning from text—sentiment, key phrases, entities (people/places/orgs), language detection, summarization, and more.

On AWS, **Amazon Comprehend** is a classic NLP service: it uses NLP to extract insights from documents.
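To make “extract meaning from text” concrete, here’s a deliberately naive, rule-based sentiment toy. This is *not* how Comprehend works—Comprehend uses trained ML models, which is exactly the rules-vs-learning distinction from earlier—and the word lists are made up for this example. It just shows the kind of signal an NLP workflow pulls out of raw text:

```python
# Hand-written word lists — the "400-page rulebook" approach, shown only
# for contrast. A real NLP service learns these patterns from examples.
POSITIVE = {"great", "helpful", "fast", "love"}
NEGATIVE = {"broken", "slow", "refund", "angry"}

def toy_sentiment(text):
    """Score a sentence by counting positive vs negative words."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "POSITIVE"
    if score < 0:
        return "NEGATIVE"
    return "NEUTRAL"

print(toy_sentiment("Support was great and the fix was fast!"))  # POSITIVE
print(toy_sentiment("The app is broken and slow."))              # NEGATIVE
```

Notice how brittle the rulebook is: any wording outside the lists scores NEUTRAL. Trained models generalize past the exact words they were shown—which is why the ML approach wins for language.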
In exam scenarios, think “I have text and I want to understand what it says at scale.” [What is Amazon Comprehend?](https://docs.aws.amazon.com/comprehend/latest/dg/what-is.html)

Real-world scenario: you pipe thousands of support tickets into an NLP workflow to detect common complaint themes, route urgent issues, and measure sentiment trends week over week.

### The quick exam heuristic

If the question talks about frames, cameras, labels, or faces → computer vision. If it talks about emails, chats, tickets, documents, or sentiment → NLP. Start there, then pick the service and workflow that matches.

## What You Need to Know: Bias, Fairness, and Exam Tips (Common Mistakes to Avoid)

AI questions aren’t only about accuracy. The exam (and real projects) also cares about whether the system behaves responsibly.

### Bias (skewed outcomes) and fairness (your goal)

**Bias** is when a model produces systematically skewed results—often because of the data it learned from, the way the problem was framed, or how performance was measured. If your training data under-represents a group, the model can underperform for that group.

**Fairness** is the broader goal of avoiding unjust outcomes across groups and contexts. It’s not always one single metric; it’s a design and evaluation choice.

A practical example: you build a resume screener based on historical hiring decisions. If the historical decisions were biased, the model can “learn” that bias and repeat it—at scale.

### Fit meets ethics (where people get surprised)

A model can be “well-fit” on average and still be unfair for specific populations. That’s why you’ll sometimes evaluate performance by segment (by region, language, age range, device type, etc.), not just one overall score.

### Common AIF-C01 traps to avoid

- Confusing **training** (“learning from labeled data”) with **inference** (“making a prediction right now”).
- Treating “AI” and “ML” as interchangeable when the question is clearly describing a specific technique.
- Assuming generative AI/LLMs are always the best answer (sometimes you just need classification, extraction, or search).
- Forgetting that changing a **prompt** changes output *this time*, not the underlying model.

### Quick recap (terms to keep straight)

- AI: the broad goal of smart behavior
- ML: learning from examples
- Deep learning: ML with neural networks (many layers)
- Algorithm: the learning method
- Model: the trained artifact you deploy
- Training vs inference: learning vs using
- Fit: how well the model matches reality without over/underfitting
- Bias & fairness: risk + responsibility, not just accuracy
- LLM: a language-focused generative model

You don’t need to become a researcher for AIF-C01—you just need to speak the language clearly and spot what’s happening in a scenario.

---

## About Upcert

Upcert.io provides industry-leading, high-quality practice exams for cloud certifications. Its platform lets users study more efficiently by focusing on the content they still need to learn and skipping the content they already know. It also provides highly customized exam and certification readiness checks.

Not sure if you're ready for your AWS exam? Create a free account to get access to 100 practice questions and 3 mock exams to help you find out. No credit card required.

Sign up for free: https://upcert.io/signup