2.2 Learn How to Prompt Using Google's Say What You See
Introduction:
One of the most challenging aspects of teaching someone how to prompt a Large Language Model (LLM) is conveying that prompting is a highly iterative process. A user needs practice with several kinds of knowledge to get a competent output. This is where Google's Say What You See shines. It teaches users the basics of prompting an LLM for image generation in progressively challenging rounds that demand more focus and accuracy.
What educators have noticed about generative tools, and have tried to convey to learners, is that users will not get an impressive response without first knowing what they want the output to be. To do that, they need:
Rhetorical knowledge: knowing how to frame a question, query, or prompt.
Content knowledge: understanding the subject they are prompting an LLM about.
Context knowledge: being able to use natural language to describe the medium and subject in detail, and to place them in context, to get a solid response.
Directions:
Go to Say What You See and play for ten minutes. Note how far you get in the game before time runs out.
Questions For Discussion:
How did the AI's responses change when you altered your descriptions? Were there any patterns or surprises in how the AI understood your prompts?
What types of knowledge did you have to call upon to move forward effectively in the game? Why do these matter?
Discuss the balance between being overly descriptive and too vague in your prompts. How does this affect the AI's response?
How do you think the skills of effective prompting translate to other areas of working with AI or technology in general?
Put It To Use:
You can log in to your Microsoft Copilot account and apply your image-prompting skills to generate more precise and detailed images.