A letter from Steve Trust, Director of Technology
While Artificial Intelligence (AI) tools are a relatively new development in education, particularly with the widespread adoption of large language models (LLMs), the use of technology by teachers and students at CRS has a long history of thoughtful, intentional integration. There will always be new technologies for us to investigate and prepare our students to use (coding, makerspaces, multimedia, internet publication, and social media), but our school's values of academic excellence and the joy of childhood ground our pedagogical approach.
For AI in particular, the landscape is changing rapidly as new models are introduced, bringing with them new capabilities and modes of work. Most recently, a group of CRS educators and administrators met over the summer to develop responsible AI guidance for faculty and students, create an outline for AI literacy across different age groups, and study best practices from other institutions. For CRS, AI literacy encompasses the following:
- Understanding how AI works
- Using AI responsibly
- Recognizing its social and ethical implications
- Understanding AI’s potential benefits and risks, and how to mitigate those risks
Based on research and our own experience as educators, we developed age-appropriate guidelines for the types of tools and interactions we want to see used with students.
- For grades PreK-3, we do not want students directly interacting with LLM chatbots. While there is some early promise in AI tutoring tools, the guardrails are not yet fully in place for young children interacting with AI. At this age, it is more appropriate for students to understand that AI is simply a digital tool or "assistant" (Alexa, Siri) and, importantly, that while it can sound human, it is still a computer and can give incorrect information.
- In grades 4-5, we shift the focus to learning about AI in the same context as other digital media literacy topics, such as social media and the internet. We take a "walled garden" approach: students use the tools with direct instructor supervision, on particular assignments designed to introduce the tool, and, in the case of AI, with a model specifically trained on data appropriate for student interaction. Examples might include asking the AI to interact as a historical figure or a character in a book, as a way for students to gain greater context and a deeper understanding of that figure's motivations.
- In grades 6-8, the curriculum shifts to directly using AI tools, both to understand how they are used in the real world of work and to learn how to use them responsibly. In middle school, the recommendation is for teachers to provide clear, specific AI guidance for each assignment or unit of study. In education, many types of student work are explicitly designed to develop an underlying conceptual framework for long-term understanding, and bypassing that learning for a quick solution can lead to long-term misconceptions and a lack of critical thinking. To prevent this, we are adopting a "stoplight" framework for AI guidance: a "red light" means no AI use is permitted because the assignment is important for learning on its own; a "yellow light" means AI may be used with specific permission from the teacher; and a "green light" means students are encouraged or expected to use AI tools.
The early units of our 6th-grade technology curriculum offer a window into our pedagogical approach to AI at CRS. We kick off the year in tech class by giving sixth graders some contextual background in the history of machine learning and how large datasets are used to train AI models for specific tasks. We start by training an image recognition model to play a game of "rock, paper, scissors," using photos of the students' own hands as training data. Students quickly learn that the images a model is trained on make a huge difference in its accuracy: the lighting in the room, the backgrounds of the photos, and whose hands appear in them all make the models more or less accurate.
Sixth graders then shift to facial expressions, trying to get the model to recognize a variety of emotions. This is where we begin to teach students not only how AI works, but also where bias can be introduced into a model's training data. For example, students discover very quickly that a model trained only on students with long hair regularly fails when presented with students with short hair. As a class, we discuss the ethical questions of using AI tools, such as issues of bias and discrimination, data privacy and security, and the creation and spread of misinformation. Later, we will learn to use AI tools in multimedia, coding, and other curricular areas while also investigating the ethical implications for art, authorship, and work.
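For readers curious about the mechanics, the hair-length discovery can be sketched in a few lines of Python. This is a toy illustration only, not the classroom tool: two synthetic numbers stand in for each "photo" (a weak real expression cue and a strong but irrelevant hair-length trait), and the biased pairing of hair length with emotion in the training set mirrors the students' experience.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_faces(n, emotion, hair):
    """Toy stand-in for face photos: two numbers per 'photo'.
    Feature 0 carries a weak real cue for the emotion label;
    feature 1 encodes an unrelated trait (hair length)."""
    expr = rng.normal(loc=0.5 * emotion, scale=1.0, size=n)   # weak emotion cue
    hair_len = rng.normal(loc=4.0 * hair, scale=0.5, size=n)  # strong, irrelevant trait
    return np.column_stack([expr, hair_len])

# Biased training set: every "happy" photo happens to show long hair
# and every "sad" photo short hair, so hair correlates with emotion.
X_train = np.vstack([make_faces(200, emotion=0, hair=0),
                     make_faces(200, emotion=1, hair=1)])
y_train = np.array([0] * 200 + [1] * 200)

# Fairer test set: the hair/emotion pairing is reversed.
X_test = np.vstack([make_faces(200, emotion=0, hair=1),
                    make_faces(200, emotion=1, hair=0)])
y_test = np.array([0] * 200 + [1] * 200)

model = LogisticRegression().fit(X_train, y_train)
acc_train = model.score(X_train, y_train)
acc_test = model.score(X_test, y_test)

print(f"accuracy on the biased training photos: {acc_train:.2f}")
print(f"accuracy when hair length is flipped:   {acc_test:.2f}")
```

Because the model latches onto the easy but spurious hair-length feature, it scores near-perfectly on the biased data and collapses when the pairing flips, which is exactly the failure the students observe in class.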
As we continue to explore the possibilities of AI and other emerging technologies, our goal at CRS remains the same: to help students grow as curious, capable, and conscientious learners who can navigate a rapidly changing world with confidence and creativity. By grounding our approach in both innovation and our enduring values, we ensure that technology serves as a tool for deeper understanding rather than a shortcut around it. Our goal is to prepare our students not just to use AI, but to think critically about its impact and to shape its future responsibly.
In partnership,
Steve
