April 25, 2026
This one-page summary is also posted as a pdf here. The DOE deadline for feedback is May 8, 2026 via their survey at on.nyc.gov/AiFeedbackNYCPS
The Guidance itself is posted here; our more detailed critique is here and embedded below, along with a partial list of AI programs currently used in NYC schools. Also posted online is an annotated pdf of the AI Guidance with our comments.
On the survey, feel free to borrow any of the points below, provide your own, or write: “I urge you to implement a 2-year moratorium, so rigorous protections can be developed to prevent harm to students, including to their privacy, cognitive development, creativity, mental health, and the environment – none of which this guidance sufficiently addresses.”
1: Lack of public input
Despite claims to the contrary, the DOE has not been responsive to the concerns of parents or the community in its determination to rapidly expand the use of AI in the classroom. Nor did this AI guidance document receive significant input from those most affected: students, teachers, or parents. Neither the members of the Data Privacy Working Group nor the AI Working Group appointed by Chancellor Ramos were allowed to comment on the guidance before it was released – despite repeated assurances to the contrary from DOE officials. And from the DOE’s actions, it is apparent that it intends to continue the rapid expansion of AI regardless of what the official feedback process in the coming weeks consists of.
2: There is no transparency about which AI products can be used, or that when AI is used at all, there needs to be full disclosure
The DOE AI guidance provides no clarity or transparency about which AI products can be used with students, or which have gone through the DOE privacy vetting process known as ERMA. When the AI Working Group asked for the names of approved products currently used in schools, DOE officials refused, saying they had non-disclosure agreements with their vendors. Perhaps as a result, teachers continue to assign students to use off-the-shelf AI products that data-mine personal student information to improve their products – a commercial use specifically prohibited by the state student privacy law. Regardless of which AI tool is used by teachers or students, there needs to be full disclosure as to which program is being employed and for what purpose.
3: The AI guidance fails to rigorously protect student privacy
The DOE privacy vetting process is ineffective and primarily composed of a series of boxes which vendors are merely asked to check off in order to be approved. This process has not worked to protect student privacy, as shown by recent breaches of personal information of over one million NYC students and the continued illegal use of student data for commercial purposes, as indicated by recent court settlements and consent decrees. Although AI represents an even higher documented risk to student privacy and safety, the DOE has developed no additional privacy safeguards for its use – despite recommendations from the Chancellor’s appointed AI working group and others to strengthen this process.
4: The AI guidance is inadequate, often confusing and even contradictory as to how teachers and students should use the technology
Instead, it offers a traffic-light metaphor, with most potential applications in the “yellow” category, meaning to be used with caution, leaving it up to teachers to use their best judgment in most of these cases without giving them clear direction. Other directives are contradictory – for example, as to whether AI can be used for student placement. One bullet point says no; another says placement decisions can be overridden by teachers or students – but how can that be done if there is no clarity that the decision was made by AI in the first place? Many of the thorniest questions about the proper and safe use of AI are punted, to be dealt with at some unspecified time in the future.
5: There is no attempt in the AI guidance to address many of the most serious concerns that parents and educators have about AI use
Growing evidence shows how AI usage can undermine students’ cognitive development and their acquisition of fundamental skills, weaken their critical thinking and creativity, worsen their mental health challenges, and exacerbate climate change. Yet the guidance does not attempt to address any of those risks. Nor does the guidance provide any answers when it comes to the algorithmic biases often embedded in AI, or the technology’s rampant factual errors, called hallucinations. It also has nothing to say about AI’s tendency towards sycophancy – AI chatbots have been designed to agree with the user’s opinions, flatter them, and encourage them in whatever course they are considering, no matter how dangerous it may be. All of these are well-known problems with AI, and in the latter case, it has even contributed to teen suicide, according to several ongoing lawsuits. The DOE claims that it will address some of these issues by the end of the year, but there needs to be a moratorium now, so that rigorous protections can be established with public input before the use of AI is further expanded in our schools.
For more information, email us at info@studentprivacymatters.org or check our website at www.studentprivacymatters.org. Also, please sign the AI Moratorium Coalition petition at https://tinyurl.com/petitionAImoratorium to be kept up to date on this issue.

