Analysis by: Jane Castleman, Sam Hafferty, Steven Kelts, Arvind Narayanan, Francesco Salvi, Hilke Schellmann
Edited by: Sam Hafferty, Steven Kelts, Arvind Narayanan, Hilke Schellmann
The Trump administration has made artificial intelligence a centerpiece of its economic agenda, promising to retrain a workforce it says must be ready to compete in an AI-driven future. One early piece of that effort is a free text-message course from the Department of Labor (DOL) and its private partner Arist called “Make America AI-Ready,” a useful start on the journey to AI literacy for all Americans. This seven-day, 10-minutes-per-day course, which frames itself as “your AI 101,” is accessible, technically informative, and engaging (see below for the full contents). Here we analyze its strengths, lay out a few weaknesses we think should be addressed in the current version, and propose some stretch goals for an “AI 201” course that would build on the original.
We evaluated this course on technical accuracy; framing and ideology; scope; commercial entanglements; agency; and pedagogical quality.
What Does It Do Well?
It’s accessible. The choice of SMS for delivery maximizes reach. It meets people where they are, requiring no app installation, account creation, or navigating unfamiliar web platforms. The 10-minute-a-day pacing is practical.
It emphasizes verification of AI outputs. The course consistently emphasizes that AI output must be checked, not blindly trusted. The example of looking up a restaurant only to find out that a nail salon has opened in its place is memorable (Lesson 6, below). The course also thoughtfully extends this skepticism to AI-generated images, video, and audio.
It centers human responsibility. The quiz question about a coworker submitting an AI-generated report with fabricated statistics (Lesson 2, below) returns a sensible response: the human is responsible. This is repeated throughout the course and is one of its most important messages.
It’s honest about AI’s limitations. The course doesn’t shy away from the fact that AI can be confidently wrong. The term “hallucination” is introduced clearly, the concept of training data cutoffs is explained, and the course repeatedly emphasizes that AI predicts rather than knows or understands. For a 101-level course, this is appropriately calibrated.
What could be fixed in AI 101?
We recommend fixing a few things in the current version of the course.
The course repeatedly contradicts its own privacy and security advice.
The course contains a serious inconsistency when it comes to data privacy and security. On the last day of the course it offers common-sense advice, stating “PROTECT your private info. Never share passwords, Social Security numbers, medical records, or confidential work data with AI tools,” later adding not to share “income data.” But some of the advice and exercises leading up to that point had already prompted users to input some of these “never share” types of data.
- On Day 3, the course urges the user to input a photo, PDF or recording of their own voice.
- On Day 4, it says that a “power move” is for users to “give AI your own data to work with,” including instructions to “paste your resume” and “share your monthly expenses.”
- On Day 5, the course says that a good use case for AI is entering “medical symptoms” to learn medical terms and prepare questions for a doctor.
- On Day 6, it tells the user to share their address to find a restaurant near them.
These self-contradictions expose a central tension: AI tools can be more useful when they know more about you, so a blanket prohibition against sharing private information will limit their usefulness. Unfortunately, there is no simple answer to the question of how to protect your privacy when using AI, and there is no single approach that will work for everyone. It requires critical thinking based on an understanding of different threat models, including prompt injection risks, traditional cybersecurity risks, legal risks, AI companies’ eagerness to train on user data, and workplace policies that of course vary between organizations.
We recognize that this level of nuance would be too much for an introductory course. We would recommend that the privacy protection lesson come earlier in the course, and include information about privacy settings that AI tools offer, such as temporary or incognito chats. Instead of the “never share” language, giving people at least a rudimentary understanding of what could go wrong would be more helpful, along with links to resources where they can learn more.
The quizzes adopt a right-wrong dichotomy
The quiz questions often ask the user for an explanation of AI’s failure modes and social effects. While it is important to face these head-on, the questions consistently have one “obviously correct” answer that maps to the course’s framing. Several wrong answers are absurd strawmen (“AI likes making things up to test you,” “AI’s internet connection was slow”). This limits the potential to build genuine understanding or critical thinking about AI’s functioning and societal implications.
We would recommend an approach that highlights known issues without pretending that the explanations are simple. Flexibility in how issues are framed will allow course participants to grapple with them in a manner that is relevant to the skills they are building. More open-ended quiz questions might include: “Your employer starts mandating that all workers use AI. This may enable your employer to monitor your productivity. What are your options?” or “You are about to apply for a loan. How can you find out whether and how AI will be used in evaluating your application?”
What could DOL build upon in AI 201?
Building on the introductory materials in the 101 course, we see several opportunities for content development.
The course misses how AI is reshaping work
For a course that is offered by the Department of Labor, there is very little content on the subject of work — the course frames AI solely as a productivity tool workers can use. The Department of Labor exists to protect workers, their wages, their safety, and their rights, yet the course largely skips over the ways AI is already reshaping hiring, performance monitoring, and layoffs of workers across many sectors.
An AI 201 course could provide more information on these, and inform citizens of legitimate reasons they may have to call for regulation. It could also go into more depth on the privacy question. Finally, AI 201 could reckon with the broader societal consequences of this technology: for instance, bias, surveillance, and the concentration of power in the hands of a few large technology companies. Workers who understand these dynamics are not just AI-literate; they are better equipped to advocate for themselves.
Deepening Technical Explanations
The 101 course keeps its terminology simple, which is important. But sometimes it oversimplifies. An AI 201 could deepen the explanation of how models are trained, make inferences, and deliver human-interpretable results.
The course’s technical explanation — AI finds patterns and makes predictions — serves as the entire mental model. This framing makes AI sound more mechanistic and less opaque than it actually is. On Day 3, the language of patterns and predictions drops out, with “instructions” and “results” substituting for the human input and predicted output of AI. The current course also equates predicting with guessing and AI training with “studying” — analogies that may be a useful starting point but are quite limiting.
For an AI 201 course, the connections between AI learning, model weights, and predictions — as well as how all of these produce the results generated from instructions — could be deepened. Indeed, how AI can be biased, hallucinate, and otherwise err is easier to comprehend when one understands a bit of the math behind machine learning.
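To illustrate the kind of math an AI 201 might introduce, here is a minimal sketch (in Python, with invented numbers purely for illustration) of the final step of next-word prediction: a model assigns scores to candidate words, and a softmax turns those scores into probabilities. Nothing here verifies truth, so a model can place its highest probability on a wrong answer — one concrete way to see how fluent, confident errors arise:

```python
import math

def softmax(scores):
    """Convert raw model scores (logits) into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits a model might assign to candidate next words after
# the prompt "The capital of Australia is". The numbers are invented for
# illustration; real models score tens of thousands of candidate tokens.
candidates = ["Sydney", "Canberra", "Melbourne"]
logits = [3.1, 2.4, 0.5]

probs = softmax(logits)
prediction = candidates[probs.index(max(probs))]
# Here the model would confidently output "Sydney" -- fluent but wrong --
# because prevalence in training text, not truth, shaped the scores.
```

A learner who has seen even this much can understand why “the model predicted the most likely word” and “the model gave a correct answer” are different claims.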
More Active Learning Engagement
The quizzes in AI 101 are based on reputable learning science. Often a quiz will introduce a new concept or ask the user to stretch what they just learned to cover a new situation. There’s good evidence that this sort of “pre-assessment,” followed quickly by lessons teaching the correct answer, improves retention in general.
But as we said, the AI 101 quiz questions consistently have one “obviously correct” answer that maps to the course’s framing, limiting their potential to challenge the user’s understanding. Additionally, we found minimal tailoring of text-message responses to the user’s quiz answers, despite the affordances of the interactive platform. Whether a user selects the designated right answer or a wrong one (we tested this), the course responds with similar if not identical information. Better quizzes in AI 201 could perhaps be assessed by an LLM, with adaptive responses that meet users where they are and stretch their understanding once they’ve acquired a solid base.
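Even without an LLM grader, answer-tailored feedback is straightforward to sketch. In the toy example below (all question framing and feedback strings are our own invention, not course content), the reply is keyed to the learner’s specific choice; an LLM-assessed version would replace the lookup table with a model call that also handles free-text answers:

```python
# Toy sketch of answer-tailored quiz feedback, modeled loosely on the
# course's "who is responsible for an AI-written report?" question.
# All strings here are invented for illustration.

FEEDBACK = {
    "A": ("Not quite: the AI tool doesn't know it's wrong -- it predicts "
          "likely text. Try asking it for sources, then check them."),
    "B": ("Correct: the human who submits the work is responsible. "
          "Next step: build a habit of verifying statistics before sending."),
}

def respond(answer: str) -> str:
    """Return feedback tailored to the learner's specific choice,
    rather than the same text regardless of what they answered."""
    return FEEDBACK.get(answer.strip().upper(), "Please answer A or B.")
```

The point is not this particular implementation but the design principle: a learner who answered wrongly should receive a different, corrective message than one who answered correctly.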
The daily challenges in AI 101 (Quick Draw, Udio music generation, fridge photo recipes) are well-designed to get people past the intimidation barrier. They’re low-stakes, fun, and demonstrate AI capabilities concretely. But for AI 201 they could be more effectively leveraged to actually show people how AI can be useful in their work and daily lives, and can (as promised by AI 101) “save them 5 hours per week”.
Who created the course, and how?
The DOL’s press release announcing the course points to a collaboration with a private partner called Arist. Arist’s website at the time of writing states that “Arist is the #1 enablement AI. Arist’s agents orchestrate creation, delivery, and analytics, end-to-end.”
While the DOL announcement gives little detail about the nature of the collaboration, if the company co-developed course content using generative AI, that fact should be disclosed. One of us ran selected course content through Pangram, a tool that purports to detect AI-generated content, and the results suggested it was 100% AI-generated. Without putting too much stock in that finding, we began to suspect that some of the faults in the course could be explained this way. The simplistic framing of how AI generates results (patterns/predictions, instructions/results) could come from AI: since LLMs are trained on old explanations of how LLMs work, they may reach for framings that are not up to date. And if each module or quiz was generated separately, that could explain the abrupt changes in terminology and the contradictions we identified regarding the sharing of private information. The use of AI for content creation isn’t a problem per se, but the failure to disclose it was a missed opportunity for a teachable moment on the utility and risks of generative content. Moreover, the contradictions regarding security and privacy, discussed earlier, should have been caught by human oversight.
Additionally, going forward, transparency about how commercial partners are involved would build wider adoption of, and trust in, course materials and DOL initiatives. The final lesson of the course refers users to an Arist-sponsored AI summit featuring Tony Robbins and Dean Graziosi. While the summit appeared to be free, it raises the question of what other paid AI-enablement sessions or products these well-known coaches might offer; Graziosi has drawn attention for his role in other problematic training programs. Users deserve to know who benefits when they pursue the recommendations made by a Federal agency.
Conclusion
“Make America AI-Ready” offers significant insight into the Federal government’s priorities for reaching widespread AI literacy across the United States workforce. Although we have suggested several areas for development, the course content and the manner in which it was released are a useful start toward achieving this aim.
Appendix: Full Course Content
Complete text of all 7 lessons.
March 25, 2026
Core Content
AI is already working for you. You probably used AI before you finished your morning coffee today and didn’t know it.
- Google Maps quietly dodging the nightmare traffic jam? AI.
- Netflix guessing the next series to binge (and being weirdly right)? AI.
- Your phone suggesting the rest of your text? Also AI.
How it works: AI (Artificial Intelligence) is a system that looks at massive amounts of data, finds patterns, and makes predictions. That’s the secret: PATTERNS IN, PREDICTIONS OUT.
There’s a newer kind of AI called GENERATIVE AI that doesn’t just predict; it creates. Give it short instructions (aka “prompt”), and it can draft an email, explain a confusing form, or help plan your week.
When most people say “AI” today, they’re usually talking about generative AI. In this course, “AI” is used broadly, but the main focus is generative AI.
Course Overview
- Learn how to give AI clear instructions, quickly judge its output, and know when it saves time vs. when to do it yourself.
- Hands-on practice with ChatGPT, Claude, Gemini, or Grok.
- Goal: Make AI feel less like a mystery and more like a tool you actually want to use.
Key Concepts
- AI doesn’t actually “know” anything. It PREDICTS.
- When you ask AI to write a birthday message, it’s not pulling from a file labeled “birthday messages.” It’s predicting which words most likely follow other words, based on patterns from billions of examples.
- AI can sound incredibly confident while being completely wrong. AI isn’t lying; it’s guessing, and sometimes the guess is bad.
Quiz Questions
Q1: Imagine your friend says, “AI knows everything, it’s like a genius that’s always right.” What’s the best response?
- A: “That’s true, AI has access to all the world’s information, so it knows its stuff.”
- B: “AI is just Google with a nicer interface and a personality.”
- C: “Not exactly, AI makes predictions based on patterns, and it can sound confident but be dead wrong.” (Correct)
Q2: Which is the BEST example of generative AI at work?
- A: A fitness tracker counting your daily steps
- B: A digital assistant that drafts an email for you based on a few sentences you provide (Correct)
- C: A calculator solving math problems
- D: Telling Alexa to turn on your lights
Challenge of the Day
Check out the Quick Draw game by Google. Draw what it asks and watch AI guess what you’re sketching in real time.
Further Reading
March 26, 2026
Core Content
AI learns like you do (minus the smoke alarm). Here’s the process in 3 steps:
Step 1: STUDY — AI studies millions of examples (websites, how-to videos, artwork, recipes) and looks for patterns. There are patterns in everything: how Shakespeare structures his plays, how Justin Bieber writes his lyrics, how recipes mix ingredients.
Step 2: PREDICT — After studying, AI makes a smart guess about the answer. AI doesn’t “know” the answer. It predicts. The system doing this pattern-finding and predicting is called a NEURAL NETWORK.
Step 3: IMPROVE — After guessing, AI is rewarded for correct answers and learns from mistakes. When you like a response, it learns that approach worked. When you don’t, it learns to try something different.
This whole process is called MACHINE LEARNING. It’s learning from trial and error, at a scale and speed no human could match.
Key Concepts
- AI doesn’t search the web when you ask a question (unless it has a specific search feature enabled). It recognizes patterns from topics and blends them into something new.
- AI learns from human data, so it picks up our patterns, along with our biases, blind spots, and outdated info. It reflects the world it was trained on, not always the world today.
- Your judgment always matters. Think of it like GPS — if something changes and no one updates it, you might get directions straight into a lake.
- AI sometimes states things that are false with total confidence. It might invent a fake statistic or cite sources that don’t exist. This is called a HALLUCINATION. Think of it like autocomplete on overdrive.
Quiz Questions
Q1: You ask AI to “explain budgeting tips using a cooking analogy.” What is AI doing?
- A: Searching the internet for an article that combines budgeting and cooking
- B: Connecting patterns from budgeting and cooking content to generate a new explanation (Correct)
- C: Copying a budgeting article and replacing certain words with cooking terms
Q2: If AI’s learning process is spotting patterns, predicting what comes next, and improving with feedback… what’s the best way to work with AI?
- A: Give clear instructions and helpful feedback (Correct)
- B: Expect perfect results immediately
- C: Only use it to look up facts
Q3: AI sometimes states things that are false with total confidence. Why does this happen?
- A: AI is guessing and sometimes it’s wrong, but it doesn’t know that (Correct)
- B: AI likes making things up to test you
- C: AI’s internet connection was slow
Q4: A coworker uses AI to write a report and submits it without reading it. The report contains a made-up statistic. Who is responsible?
- A: The AI tool
- B: The company that made the AI
- C: The coworker who submitted it (Correct)
Challenge of the Day
Open a free AI tool and use this prompt: “Roast me in the format of a movie trailer voiceover: I’m a [role], from [place], and my biggest work pet peeve is [x]. Be very funny (appropriately).”
Further Reading
March 27, 2026 (10:00pm)
Core Content
AI is powerful, but it’s not psychic. If you give it a vague instruction, you’ll get a vague result.
The instruction you give AI is called a PROMPT. Think of it like ordering food:
- “I want food” → You might get a raw turnip.
- “Medium pepperoni pizza, extra cheese, well done” → Now we’re talking.
AI works the same way:
- “Write something about my business” → generic mush.
- “Write a 3-sentence pitch for my landscaping business that targets homeowners in Dallas” → something you can actually use.
The difference isn’t luck. It’s the prompt. Better results are completely in your control.
Key Concepts
- AI uses your words as instructions to generate something new. Clearer instructions produce more useful results.
- Length isn’t the goal. Clarity is. A 10-word prompt can outperform a 200-word ramble.
- AI doesn’t just retrieve (like a search engine); it creates using your words as instructions.
- Prompting isn’t about tech skills. It’s about communication.
- Prompting isn’t just typing — you can use images, documents, and voice.
- Think of communicating with AI like talking to a helpful coworker: be clear and specific about what you need.
Example Prompts to Try
- “Give me two meal prep ideas I can cook in under 30 minutes with chicken and rice.”
- “List five side hustle ideas for someone who’s good with cars.”
- “Create a 4-line pep talk I can read before a job interview.”
Quiz Questions
Q1: Why does the quality of your prompt matter so much?
- A: AI ranks users by prompt quality and gives better service to higher-ranked users.
- B: Longer prompts always produce better results.
- C: AI uses your words as instructions to generate something new, so clearer instructions produce more useful results. (Correct)
Q2: A friend says, “I tried AI once and it gave me useless junk.” What’s the most likely problem?
- A: The AI tool they used was broken
- B: AI just isn’t useful for regular people.
- C: Some people are just naturally bad at using technology.
- D: Their prompt was probably too vague, so AI didn’t have enough to work with. (Correct)
Q3: How should you think about communicating with AI?
- A: Like talking to a computer, keep it short and robotic
- B: Like talking to a helpful coworker, be clear and specific about what you need (Correct)
- C: Like using a vending machine, press a button and hope for the best
Challenge of the Day
Create a funny song with AI using Udio. Describe someone you know, pick a genre, and add fun details.
Further Reading
March 28, 2026
Core Content
A prompt isn’t just a question. It’s a set of instructions. A strong prompt has 3 parts:
- GOAL: What do you want AI to do? — Write, summarize, plan, explain, compare, organize.
- CONTEXT: What should AI know about your situation? — Who it’s for, what you’ve tried, relevant details.
- EXPECTATIONS: What should the result look like? — Length, tone, format, number of items.
Think of it like calling a contractor:
- “Fix my house” → chaos.
- “Replace the broken tile in the upstairs bathroom, match the existing white subway tile, budget under $200” → now you’ll get what you need.
The more specific your prompt, the less time you spend going back and forth with AI. One great prompt can save you ten “that’s not what I meant” follow-ups.
Key Concepts
- POWER MOVE: You can give AI your own data to work with. Paste in text, upload documents, or share lists, and ask AI to DO something with them.
- Paste your resume → “Rewrite this for a nursing assistant position.”
- Paste a confusing email → “Explain what they’re asking me to do.”
- Share your monthly expenses → “Find my 3 biggest spending categories.”
- You already think this way when you explain things to people. Prompting is just doing that in writing.
Quiz Questions
Q1: Which prompt would give you the best result for understanding a long report?
- A: “Help me with this document.”
- B: “Can you tell me what this is about?”
- C: “Summarize this report into 3 key insights, and suggest 1 action item I could propose in a team meeting.” (Correct)
Q2: You’re planning a trip on a tight budget. Which prompt will get you the most useful result?
- A: “Plan a trip for me.”
- B: “What’s a good vacation spot?”
- C: “Suggest a 4-day road trip within 5 hours of Nashville for 2 adults on a $600 budget, including free or low-cost activities.” (Correct)
Q3 (Open-ended): “Write a workout plan.” What’s one way you could improve it?
- Example strong prompt: “Create a 4-day workout plan (GOAL) for a beginner (CONTEXT), using 20-minute at-home exercises (EXPECTATIONS).”
Challenge of the Day (Choose one)
- Open the fridge, snap a photo, and prompt: “Give me 2 dinner recipes using what’s in my fridge and give them Michelin star menu names.”
- Snap a photo of your garage, closet, or bedroom and prompt: “Review this space like a disappointed interior designer. Tell me how to organize it using only what’s here.”
Further Reading
March 29, 2026
Core Content
AI can help with things that feel overwhelming: step-by-step repair guidance, breaking down confusing forms into plain language, learning the basics of an unfamiliar task in minutes.
5 Roles AI Can Play
- PRODUCTIVITY HELPER: Draft emails, summarize long documents, outline a presentation so you can focus on refining.
- RESEARCH ASSISTANT: Ask questions, gather info, or create quick learning materials tailored to what you need.
- CREATIVE PARTNER: Generate first drafts of writing, design ideas, or images that you can edit, improve, or make your own.
- TASK HELPER: Troubleshoot a leaky faucet, learn Excel formulas, or translate something into another language.
- DECISION SUPPORT: Compare options or weigh pros and cons while YOU make the final call.
One AI tool can play all 5 roles. You just change your prompt.
AI Tool Categories
- Chatbots (ChatGPT, Claude, Gemini, Grok): Draft content, answer questions, brainstorm, or role-play conversations.
- Research Assistants (Perplexity, NotebookLM, Elicit): Dig deeper, summarize sources, or explore different perspectives.
- Creative Tools (DALL-E, Midjourney, Canva): Create images, edit photos, or design quick graphics.
- Data Helpers (Julius, Datawrapper, Flourish): Analyze numbers, generate formulas, visualize data.
Key Concepts
- AI complements your skills rather than replacing them. AI provides structure and ideas; your judgment makes them meaningful. That’s the AI + human formula.
- The higher the stakes, the more important YOUR judgment becomes.
- AI gives you the 80%. Your personality and judgment provide the 20% that makes it most relevant for your context.
- Regenerating without adding new information is like shuffling the same deck of cards hoping for a different game. The fix is adding YOUR context and details.
Quiz Questions
Q1: What’s the best example of AI complementing your skills?
- A: AI drafts practice interview questions, and you refine your answers using your own experiences. (Correct)
- B: AI decides the career you should pursue.
- C: AI generates financial advice and you follow it without checking.
Q2: You ask AI for advice on a medical symptom. It gives you a detailed, confident answer. What’s the best approach?
- A: Follow the advice, AI has medical knowledge
- B: Ignore it, AI can’t know anything about health
- C: Use it as a starting point to learn and prepare questions, but see a real doctor for decisions (Correct)
Q3: A small business owner drafts a social media post with AI. The draft is good but generic. What should they do?
- A: Post it as-is, AI knows best.
- B: Edit it to add personal details and the bakery’s unique voice. (Correct)
- C: Ask AI to rewrite it 10 more times until one is perfect.
Q4 (Poll): If you had AI as an assistant for one week, which role would you assign first?
- A: Organizer (managing schedules, to-dos, plans)
- B: Researcher (summarizing info, finding insights)
- C: Creative partner (drafting content, brainstorming ideas)
- D: Coach (role-playing, teaching, giving feedback)
Challenge of the Day
Think of a challenge you’re facing at work. Drop it into your favorite AI tool, add details, and say: “Give me 3 ways to approach this. Put the pros and cons in a table and recommend next steps.”
Further Reading
March 30, 2026 (6:01pm)
Core Content
You’ve learned how to talk to AI. Now let’s talk about what to do with what it gives back. Great results come from reviewing, refining, and improving. This applies to both what you generate with AI and what you see online.
4-Point Evaluation Checklist
- ACCURACY: Is this actually true? Verify facts, names, stats, and anything you’d hate to get wrong.
- COMPLETENESS: Does this cover everything I asked for? Compare the response to your prompt. If something’s missing, tell AI exactly what to add.
- RELEVANCE: Does this fit MY goal? Tailor it or ignore what doesn’t apply.
- SOUNDNESS: Does this make sense as a whole? Watch for advice or content that sounds smart but falls apart in real life.
Key Concepts
- AI can generate images, videos, and audio that seem incredibly real. Don’t take it at face value; review it.
- Check the source: Where is this coming from? If not from a trusted source, be skeptical.
- Check the details: Look for small inconsistencies like strange hands, warped text, unnatural movement or audio.
- AI’s knowledge has a cutoff date and doesn’t automatically update. It can confidently recommend a restaurant that is now closed. Always verify time-sensitive info: hours, locations, prices, etc.
- If the output misses the mark, don’t start over. Just iterate:
- “Explain this like I haven’t had my coffee yet.”
- “Make this cheaper — I’ve got bills.”
- “Make this sound like I know what I’m doing.”
- Think of it as a conversation, not a one-shot deal.
- Don’t just accept the output — give AI your specific context (lists, photos, constraints) for better results.
Quiz Questions
Q1: When evaluating AI’s output, what are the best questions to ask? (Select all that apply)
- A: Does this answer what I asked, without major gaps? (Correct)
- B: Did the AI respond in under 10 seconds?
- C: Is this actually useful for my situation? (Correct)
- D: Can I trust this information? (Correct)
Q2: You ask ChatGPT for the best Mexican food near you. You show up hungry. It’s a nail salon now. What happened?
- A: ChatGPT is personally trolling you
- B: ChatGPT was trained on data up to a certain point, so it had no idea the place closed and got replaced (Correct)
- C: ChatGPT has never eaten a taco in its life and cannot be trusted
- D: You should have asked Yelp
Q3: AI plans a week of dinners but it requires ingredients you don’t have. What should you do?
- A: Pick a few meals that seem useful and ignore the rest
- B: Ask AI: “Can you make this better?”
- C: Share what you have (list or photo) and prompt: “Use these ingredients.” (Correct)
Challenge of the Day
Open your go-to AI tool and ask: “What are the best restaurants in [your area]?” Then iterate: redirect, fill gaps, clarify, tighten, and fact-check the results.
Further Reading
March 31, 2026
Core Content
AI is a power tool, not a magic wand. How you use it matters. Being a smart AI user comes down to 4 things:
- PROTECT your private info. Never share passwords, Social Security numbers, medical records, or confidential work data with AI tools.
- VERIFY before you act on it. Especially for anything high-stakes.
- USE YOUR JUDGMENT. Ask yourself: Does this make sense? Is it appropriate? Would I stand behind this if someone asked about it?
- PARTNER, DON’T REPLACE. AI supports your work. It doesn’t replace your responsibility. You decide when to trust it, when to tweak it, and when to toss it out.
Key Concepts
- “Sounds professional” and “is accurate” are two very different things.
- When you treat AI as a partner, it can help you learn faster, open up new career and creative opportunities, and support clearer communication.
- AI tools will keep changing, but the skills built in this course (clear prompting, critical evaluation, good judgment) will always matter.
- Confidential data should NEVER go into external AI tools. Once you enter it, you may lose control of how it’s stored or used. Rule of thumb: share the task, not the secrets.
- Match your caution to the consequences:
- LOW STAKES: Brainstorming, meal planning, creative ideas. Imperfect output? No real harm.
- MEDIUM STAKES: Work emails, presentations, planning. AI saves time, but review before using.
- HIGH STAKES: Medical, legal, financial, or safety decisions. Use AI to explore options, but always verify.
Quiz Questions
Q1: You prompt an AI tool and it gives you a detailed response. What’s the most responsible next step?
- A: Share it immediately because it sounds professional
- B: Check the facts before using or sharing the information (Correct)
- C: Stop using AI because it might be wrong
Q2: Which of these should you NOT enter into an AI tool?
- A: “Explain common tax deductions I might qualify for.”
- B: “Help me understand this tax form in plain English”
- C: Your Social Security number, full income details, and account numbers (Correct)
Final Challenge
Pick 1 area of your work or life where AI can genuinely help. Open your favorite AI tool and describe the area, then add: “Create a plan for how I can use AI to help me with this. Include the use cases to start with, simple ways to make it stick, and next steps to get started. Tailor it to someone who [describe your job/life].”
Further Resources
- OpenAI Academy
- Microsoft Generative AI for Beginners
- AI Advantage Summit by Tony Robbins and Dean Graziosi (April 23-25, online)
- Explore career possibilities with AI: Google Career Dreamer

