SOLID STATE PRESS
Coming soon to Amazon
This title is in our publishing queue.
Artificial Intelligence

Prompt Engineering

A High School & College Primer on How to Talk to AI Models Effectively

You type a question into ChatGPT and get back something vague, wrong, or weirdly formatted. You try again, get something different, and still aren't sure why. If that loop sounds familiar, this book is for you.

**TLDR: Prompt Engineering** is a concise, practical primer that explains how large language models like ChatGPT, Claude, and Gemini actually work — and how to write prompts that get accurate, useful responses the first time. In roughly 15 pages, you'll learn why these models predict text one token at a time (and why that matters for how you phrase requests), what separates a weak prompt from a strong one, and which named techniques — few-shot examples, chain-of-thought reasoning, task decomposition — reliably improve results on hard problems.

The book also covers the failure modes every user needs to know: why models fabricate facts, how vague instructions produce vague answers, and what prompt injection attacks look like. A final section treats prompt-writing as an engineering skill — showing you how to test, compare, and refine prompts systematically, including using the model to critique its own output.

This guide is written for high school and early college students who want a working understanding of AI prompt writing for beginners — no math background required, no jargon left undefined. It's short by design: tight enough to read in one sitting, deep enough to actually change how you use these tools.

Pick it up, read it once, and write better prompts today.

What you'll learn
  • Understand what a large language model actually is and why prompts shape its output
  • Write clear, specific prompts using role, context, task, and format conventions
  • Apply core techniques like few-shot examples, chain-of-thought, and decomposition
  • Recognize and avoid hallucinations, ambiguity, and prompt injection pitfalls
  • Iterate on prompts systematically rather than guessing
What's inside
  1. What a Language Model Is Actually Doing
    Explains tokens, next-token prediction, and why this mental model dictates how to prompt effectively.
  2. Anatomy of a Good Prompt
    Breaks down the core ingredients — role, context, task, constraints, format — with side-by-side weak and strong examples.
  3. Core Techniques: Few-Shot, Chain-of-Thought, and Decomposition
    Teaches the named techniques that reliably improve answers on reasoning, formatting, and complex tasks.
  4. When Models Lie: Hallucinations, Ambiguity, and Injection
    Covers the main failure modes — fabrication, vague answers, prompt injection — and concrete ways to guard against each.
  5. Iterating Like an Engineer
    How to refine prompts systematically: testing, comparing versions, and using the model to improve its own prompts.
Published by Solid State Press
TLDR STUDY GUIDES

Prompt Engineering

A High School & College Primer on How to Talk to AI Models Effectively
Solid State Press

Who This Book Is For

If you're a high school or early college student who has wondered how to write better ChatGPT prompts, a teen who keeps getting vague or wrong answers from AI tools, or a teacher building a unit on artificial intelligence literacy, this book is for you. It also works for parents helping their kids navigate AI-assisted homework and tutors who want a concise prompt engineering guide for beginners.

This book covers the large language model basics students need — what these models actually do when you send a message, how to structure a prompt for clarity and precision, core techniques like few-shot examples and chain-of-thought reasoning, and how to avoid AI hallucinations in answers. It is roughly 15 pages, with no filler.

Read straight through once to build the mental model. Then work the examples inline — each one is designed to show you concretely how to get better answers from AI chatbots. Think of it as a ChatGPT study guide for teens and college students who want practical skills, not theory.

Contents

  1. What a Language Model Is Actually Doing
  2. Anatomy of a Good Prompt
  3. Core Techniques: Few-Shot, Chain-of-Thought, and Decomposition
  4. When Models Lie: Hallucinations, Ambiguity, and Injection
  5. Iterating Like an Engineer
Chapter 1

What a Language Model Is Actually Doing

Every time you send a message to ChatGPT, Claude, or Gemini, the model does not "think" the way a person does, look up a database of facts, or follow a set of rules someone typed in. It does one thing, repeated hundreds or thousands of times in rapid succession: it predicts the next most likely piece of text given everything that came before. Understanding that single fact will change how you write prompts.

Large language models (LLMs) are software systems trained on massive amounts of text — books, websites, code, articles — to become very good at that prediction task. They learned, statistically, which words tend to follow which other words across an enormous range of human writing. When you ask a question, the model is not retrieving a stored answer. It is generating text, one piece at a time, that looks like what a good answer would look like based on patterns in its training data.

Tokens, Not Words

The unit the model operates on is not a word — it's a token. A token is a chunk of text, often about four characters of English, that the model's vocabulary recognizes as a unit. A common word like "running" might be a single token, while "unbelievable" might be split into two or three tokens, such as "un", "believ", "able". Punctuation marks and the digits of long numbers often count as separate tokens, while a leading space is usually folded into the token that follows it.
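The splitting itself can be pictured as a greedy longest-match over a learned vocabulary. The tiny vocabulary below is hypothetical — real models learn tens of thousands of subword pieces from data, using algorithms like byte-pair encoding — but the mechanics look roughly like this:

```python
# Toy greedy subword tokenizer. VOCAB is hand-picked for illustration;
# real tokenizers learn their vocabulary from training data.
VOCAB = {"un", "believ", "able", "running", "!"}

def tokenize(text, vocab):
    """Split text into the longest vocabulary pieces, left to right."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest match first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character: its own token
            i += 1
    return tokens

print(tokenize("unbelievable", VOCAB))  # ['un', 'believ', 'able']
print(tokenize("running", VOCAB))       # ['running']
```

The point of the sketch is the shape of the output: one familiar word stays whole, another is cut into pieces the model has seen often enough to memorize.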

Why does this matter to you as a prompt writer? First, the model processes your entire prompt as a sequence of tokens, so unusual spellings, odd spacing, or rare words can be split in ways that subtly change how the model interprets them. More practically, every model has a context window: a hard cap on the total number of tokens it can hold in a single conversation, counting both your input and its output. GPT-4 Turbo, for instance, supports around 128,000 tokens (roughly 96,000 words of English). Older or smaller models may cap out at 4,000 to 8,000 tokens. Once you hit that limit, the model can no longer "see" the earliest parts of the conversation. Long conversations can therefore cause the model to forget instructions you gave at the start, which is a concrete reason to put critical instructions early and repeat them when needed.
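A quick way to stay under that cap is to estimate token counts before you send. The four-characters-per-token ratio used below is only a rule of thumb — actual counts depend on the specific model's tokenizer — but a back-of-the-envelope check might look like:

```python
# Rough token estimate: about four characters of English per token.
# This is a rule of thumb, not an exact count.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

CONTEXT_WINDOW = 8_000  # example cap for a smaller model

prompt = "Summarize the attached report in three bullet points."
print(estimate_tokens(prompt))                    # rough estimate only
print(estimate_tokens(prompt) < CONTEXT_WINDOW)   # does it fit?
```

For anything serious, use the tokenizer published for your specific model; the estimate here is just for building intuition about budget.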

Next-Token Prediction

Here is the core loop. Given a sequence of tokens (your prompt plus whatever the model has already written), the model assigns a probability to every token in its vocabulary for what should come next. It then picks one (often the highest-probability token, but not always), appends it, and repeats. A response that is 200 words long requires the model to make roughly 250 to 300 of these individual predictions, one per token.
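That loop is simple enough to sketch. The probability table below is hand-written and purely illustrative — a real model computes a fresh distribution over its whole vocabulary at every step, conditioned on every previous token — but the pick-one-token-and-repeat structure is the same:

```python
import random

# Toy next-token predictor: maps the most recent token to a probability
# distribution over possible next tokens. Hand-written for illustration;
# a real model conditions on the entire sequence so far.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
    "cat": {"sat": 0.6, "ran": 0.2, "<end>": 0.2},
    "dog": {"ran": 0.7, "<end>": 0.3},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(start, max_tokens=10, seed=0):
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(max_tokens):
        probs = NEXT_TOKEN_PROBS[tokens[-1]]
        # Sample one token according to its probability, append it,
        # and repeat -- the core loop described above.
        choices, weights = zip(*probs.items())
        nxt = rng.choices(choices, weights=weights)[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))
```

Notice that sampling (rather than always taking the highest-probability token) is why the same prompt can produce different responses on different runs.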

Keep reading

You've read the first half of Chapter 1. The complete book covers 5 chapters in roughly fifteen pages — readable in one sitting.

Coming soon to Amazon