The Traffic Light Model - Pros and Cons
The Traffic Light Model has become one of the most widely used academic integrity tools adopted in response to AI use in schools. But the devil is in the details...
What are the advantages of the Traffic Light Model (TLM)?
The primary reason the TLM has taken off is that it is simple. It works with a mental model we already have. In surveys, focus groups, and interviews that our consultancy has conducted, students say they worry about navigating policies that vary from class to class. That’s a heavy cognitive load on a young person, and the consequences of accidentally breaking the rules can be life-altering.
Another reason this framework has exploded in popularity is that it allows school leaders to satisfy two competing interests: thoughtfully integrating AI and protecting some forms of traditional assessment. Given the emotionally charged nature of AI and the high variance in faculty attitudes, this is a model that doesn’t rock the boat too much. As anyone who has worked in schools can attest, they are “small c” conservative institutions, and not rocking the boat is usually a high priority.
What are the challenges of the Traffic Light Model (TLM)?
As with any school policy, success is a matter of training, implementation, and monitoring. Here are a few stumbling blocks to consider when adopting a TLM for your school, as any of them can impede successful implementation.
The TLM comes with no clear expectation that teachers routinely craft authentic assessments in the green or yellow light categories. For many teachers, it can be hard to imagine ways that AI could be a tool for their students rather than a crutch. Others may worry that they don’t have enough familiarity with AI to experiment with AI-assisted assessments.
The TLM should not be used as an excuse to protect the status quo. Essays and traditional assessments are not dead, but we have to start thinking of them as parts of a formative learning process and not simply as stand-alone products.
When assessments are created for the green or yellow light categories, what is the broader goal? Having students gain familiarity with LLMs and AI image generators may be a net positive, but what are the core competencies we want to develop? How do we do this in a way that is safe and ethical? Do students know how to document their process and create artifacts of their journey? And when we talk about citing AI, aren’t we essentially just citing a data set that comprises something like 10 terabytes of data? What are the implications of that?
The green light category in the graphic above includes the idea that students be able to use AI “to the degree they feel comfortable.” This is an important concept to grapple with. On one hand, there is real wisdom in AI resistance. LLMs raise serious concerns about carbon emissions, data privacy, media literacy, and a host of other ethical issues. Students may feel that they’ve spent years finding their voice and be unnerved at the prospect of diluting their authentic expression. At the same time, part of the mission of a school is to prepare its students for the workplace, or for the “real world.” How might we balance those valid yet competing interests?
The yellow light category is all about documentation, artifacts of learning, and process over product. Teachers who have engaged in PBL and design thinking will be better suited to craft these sorts of assessments. But many teachers are still tied to traditional assessment, in which learning is demonstrated by a single product. It is therefore necessary to pair the TLM with training on the core pedagogical principles that support learning as process: metacognitive reflection, productive failure, formative feedback, and transfer.
For more resources, please visit pathosgroup.ai or email me at evan@pathosgroup.ai.


