
Empowering Educators in the Age of AI
How to use this toolkit
Whether you are just beginning to explore AI or shaping school- or district-wide implementation, you’ll find what you need to lead with insight and impact.
1: NEA Task Force Report and Policy Statement on AI in Education
Discover NEA’s vision for the future of AI in education. In the Task Force Report and Policy Statement, you will find our educator-informed principles, priorities, and recommendations for the ethical and equitable integration of AI in schools.
2: Overview of AI in Education
Build foundational knowledge of artificial intelligence and its role in education. This section features a glossary of key terms and trusted partner resources that explain how AI works and its broader implications for teaching and learning.
2.1: Glossary of Terms
As artificial intelligence (AI) becomes increasingly integrated into classrooms, curricula, and administrative tools, understanding the language around it is more important than ever.
The intersection of AI and education—from personalized learning algorithms to data privacy regulations—brings both exciting opportunities and complex challenges. This glossary is designed to help educators, students, administrators, and families navigate the evolving landscape of AI in education. Whether you're new to the topic or looking to deepen your understanding, this glossary includes key concepts, technologies, and terms that are shaping the future of teaching and learning.
Algorithmic Bias: “Systematic, unwanted unfairness in how a computer detects patterns or automates decisions,” often based on characteristics and identities such as age, class, culture, disability experience, ethnicity, gender, location, nationality, political affiliation, race, religious background and practices, and/or sexuality.
Artificial Intelligence (AI): Machine-based systems designed around human-defined objectives to perform tasks that would otherwise require human or animal intelligence.
AI Literacy: Understanding what it means to learn with and about AI while gaining specific knowledge about how artificial intelligence works, the skills necessary to master AI tools, and how to critically navigate the benefits and risks of this technology.
Data Governance: A set of practices and policies to formally manage and safeguard data assets throughout a system/enterprise; roles, responsibilities, and processes are defined therein to ensure accountability for and ownership of data assets.
Deepfake: An AI-generated image, video, or audio file that convincingly replaces one person’s likeness and/or voice with another person’s.
Educators: People employed by an institution dedicated to pre-K–12 or higher education.
Generative AI: Artificial intelligence tools that generate text, images, videos, or other content based on existing data patterns and structures.
Machine Learning: A branch of artificial intelligence that uses algorithms to enable computers to learn and make predictions by identifying patterns in data without being explicitly programmed.
Natural Language: Language that has developed through human or animal interaction rather than being constructed, such as with computer code; AI systems that use natural language processing are able to understand this type of language.
Ransomware: When cybercriminals block access to an institution’s computer system until a ransom is paid.
Transparency: Open disclosure of how AI systems work, including how they reach decisions and the data used to do so.
2.2: Partner resources
Code.org Courses: AI 101 for Teachers and How AI Works
Are you curious about AI but not sure where to start? Code.org’s AI 101 for Teachers is a free, self-paced series designed to help you understand what AI is and how you can incorporate it into your teaching practice—no prior experience needed!
Once you're ready to introduce AI to your students, use Code.org’s How AI Works, a seven-lesson curriculum that explores AI fundamentals, ethics, and real-world applications in an accessible format.
AI Report from Education International
As educators, it’s important to understand the potential benefits and risks of AI in education and how you can ensure that teaching with and about AI aligns with the principles of social justice and human rights. Check out Education International’s report “The Unintended Consequences of AI and Education” for an analysis of the current state of artificial intelligence and education.
3: AI Professional Learning Opportunities
Enhance your understanding of AI in education through NEA-led learning opportunities. From webinars and micro-credentials to an independent learning course, these experiences are designed to build educator capacity for thoughtful and informed AI integration in schools and classrooms.
AI in Education Webinars
The NEA and the International Society for Technology in Education (ISTE) hosted a webinar series on issues around AI in education.
- Using AI as a Thought Partner to Spark Creativity
- Beyond Algorithms: A Critical Look at AI’s Role in Special Education
- Enhancing and Redefining Teaching and Learning with AI
Independent Study: Learning About and Teaching with Artificial Intelligence in Education
If you’re interested in learning how to leverage AI tools in your practice, this course explores the foundational concepts of AI and its potential educational applications. You will learn to integrate AI tools effectively and ethically into teaching, learning, and administrative processes while fostering AI literacy among your students.
Micro-Credentials
Coming Soon!
4: AI Guidance for Schools and Educators
Access essential tools and recommendations to support the responsible use of AI in your learning environment. This collection includes guidance on equitable implementation, sample communications, policy templates, and vetted partner resources tailored for educators and school leaders.
4.1: Vetting AI Resources
Artificial intelligence (AI) has the potential to significantly enhance teaching and learning. However, integrating AI tools requires careful consideration to ensure their use is ethical, equitable, effective, and secure. Here is a practical guide to help educators effectively vet AI resources:
1. Human-Centered Approach
- Prioritize AI tools that keep educators and students at the center, emphasizing the enhancement rather than the replacement of human interactions.
- Look for tools that support educators in personalized instruction, formative assessment, lesson planning, and administrative efficiency without diminishing human judgment or responsibility.
2. Evidence-Based Effectiveness
- Select AI tools that are supported by independent research demonstrating their educational value. Avoid adopting technology purely on promotional claims or unverified outcomes.
- Evaluate the AI tool’s track record in similar educational settings and its proven effectiveness in enhancing student engagement, personalization, and learning outcomes.
3. Ethical and Transparent Practices
- Ensure AI tools have transparent practices around data collection, storage, and usage. Prioritize products that adhere to established guidelines, like the OECD Recommendation on Artificial Intelligence.
- Be mindful of algorithmic biases that could negatively impact marginalized groups. Demand clear information from vendors on how biases are identified and mitigated.
4. Accessibility and Equity
- Choose AI tools that offer equitable access, catering to diverse learning needs, abilities, and backgrounds, including students with disabilities and emergent multilingual learners.
- Scrutinize the accessibility features of tools to avoid “techno-ableism”—assuming technology alone can address all learning barriers.
5. Professional Development and Support
- Confirm that comprehensive training and ongoing professional support for educators accompany AI tools.
- Opt for resources that enhance educator AI literacy and digital citizenship, empowering educators to confidently implement and manage AI tools.
6. Privacy, Security, and Accountability
- Prioritize AI tools that comply strictly with student privacy laws (e.g., FERPA in the United States) and have robust security measures to protect sensitive data.
- Clarify accountability mechanisms to ensure ethical AI deployment and responsiveness to any arising concerns or errors.
Thoughtful integration of AI can significantly enhance the educational experience. By adhering to these guidelines—prioritizing human-centered, evidence-based, ethical, equitable, supportive, and secure tools—educators can effectively harness AI's transformative potential for their classrooms.
4.2: Dos and Don'ts of AI in the Classroom
By integrating artificial intelligence (AI) in the classroom, educators can offer students powerful educational opportunities. Here’s a quick guide highlighting essential best practices (“Dos”) and common pitfalls to avoid (“Don’ts”), ensuring a safe, effective, and equitable AI-enhanced learning environment.
Do…
- Do Keep Humans Central: Always prioritize tools that complement human interactions and enhance educator-student relationships.
- Do Ensure Evidence-Based Use: Choose AI tools proven effective through independent research and aligned with educational objectives.
- Do Provide Professional Development: Offer thorough training for educators on AI literacy, enabling confident and informed use of AI tools.
- Do Maintain Transparency: Select tools with clear information on data practices, ensuring understanding of data collection, usage, and storage.
- Do Promote Accessibility: Opt for AI solutions accessible to diverse learners, including emergent multilingual students and individuals with disabilities.
- Do Adhere to Privacy Standards: Ensure strict compliance with privacy laws (e.g., FERPA) and robust data security practices.
- Do Evaluate for Bias: Regularly assess AI tools for algorithmic biases to safeguard against unfair outcomes.
- Do Set High Standards for Surveillance: Only adopt AI-powered surveillance tools with clear educational justification, community transparency, and safeguards to prevent misuse.
Don't…
- Don’t Rely Solely on AI: Avoid replacing essential human interactions or educator judgment entirely with technology.
- Don’t Ignore Equity: Refrain from adopting AI tools that disadvantage or fail to accommodate diverse student populations.
- Don’t Overlook Ethics: Never compromise ethical considerations, such as data privacy, student autonomy, or transparency, for convenience.
- Don’t Skip Training: Avoid implementing AI tools without providing adequate training and resources for educators.
- Don’t Trust Unverified Claims: Beware of AI tools lacking independent research backing or credible evaluations.
- Don’t Underestimate Risks: Avoid neglecting potential issues, such as biases, data security, and privacy violations.
- Don’t Forget Accountability: Never implement AI without clear accountability measures to address any errors, biases, or negative outcomes swiftly.
- Don’t Use Surveillance Tools for High-Stakes Decisions: Do not use AI monitoring tools to make disciplinary, evaluative, or grading decisions. These tools should not substitute for due process or human oversight.
Successfully integrating AI into the classroom hinges on mindful, ethical, and informed practices. By embracing these dos and steering clear of the don'ts, educators can set their classroom up for success in an increasingly AI-driven world.
4.3: Questions to Ask
Are you thinking about the use of AI in your role and/or in your district? Here are some possible questions that students, educators, and families can ask as schools and districts develop local policies and review, adopt, and reevaluate tools.
Questions for School Leaders
- How will AI be incorporated into the classroom in a way that promotes critical thinking?
- How is [school name] ensuring that all AI-generated content is subject to human verification?
- Will AI be used to assign grades in any way? What are the policies around human verification of grades? Is there a process through which students may dispute a grade generated by AI?
- Are there any processes—such as subjective grading, IEP writing, or hiring decisions—in which the use of AI will be prohibited?
- Will AI detectors be used in any way to determine the presence of academic dishonesty? What training will teachers receive around the functionality and limitations of these tools? What options will students have to dispute false positive results?
- How will digital literacy instruction for students evolve?
- How are teachers being supported to vary assignment types, encourage original thought, and discourage reliance on AI-generated content?
- What is the school’s policy on using AI-enabled surveillance tools? What safeguards are in place to ensure that these tools are not used for high-stakes decisions, like student discipline or educator evaluations?
Questions for School Board Members
- What are the short- and long-term priorities of the school board when it comes to integrating AI safely and equitably into schools?
- What policies is the school board considering to ensure that the benefits of AI tools don’t accrue to the most advantaged students, knowing that suburban, majority-white, low-poverty districts are most likely to provide AI training for educators?
- How will the school board ensure that educators are adequately trained to implement AI tools in their educational environments in a safe and equitable way?
- How will the school board ensure that student privacy and data protection remain a priority? Who is responsible for developing policies that guide data stewardship?
- How will the school board ensure that responsible use guidelines align with best practices for use of AI in K–12 schools?
- How will the school board ensure that students' civil rights are protected when AI is used in education settings?
- What policies are in place to govern the use of AI-powered surveillance tools in schools? How will the school board ensure that these tools are only used when necessary and with clear prohibitions on using surveillance data for educator evaluations or student discipline?
Questions for District Leaders
- What training, guidance, or support will the district provide or recommend for educators? Is there a plan for continued professional development as the field develops?
- What guidance are you offering schools to evaluate the data privacy and security measures of education products they may purchase or contract from external vendors?
- Who provides input into what AI systems [LEA Name] adopts? How can educational technology personnel, school leaders, educators, and families make their voices heard in that process?
- How will parents and families be notified about the collection, processing, or utilization of student data by AI systems?
- Who oversees data privacy and security at the district level? Has the district created a chief privacy officer position?
- How will educational technology providers be held accountable for ensuring that AI tools are accessible to all students and staff?
- What is the district’s plan for reviewing and reevaluating AI tools and best practices as the field develops? Who leads this effort?
- What district policies exist regarding AI-driven surveillance? What protections are in place to prevent these tools from influencing hiring, promotion, or student discipline decisions?
Questions for Union Leaders
- How will union representatives advocate to ensure that AI tools do not increase educator workload, allowing educators to keep their focus on students?
- What are the union’s plans for negotiating clauses in collective bargaining agreements that address the use of AI in the classroom?
- How will the union ensure that AI tools are used equitably in a way that supports learning and differentiation for diverse learners?
- What steps is the union taking to make sure that AI tools are used to support, not supplant, human-directed teaching in the classroom?
- How is the union addressing the use of AI-powered surveillance technologies? What steps is the union taking to ensure that such tools are prohibited from use in educator evaluations or promotion decisions?
4.4: AI for Students with Disabilities & Procurement
Artificial Intelligence and Accessibility
AI tools can support learning, but only if they are accessible to all students. Many digital platforms still contain barriers that exclude students with disabilities. By applying the Web Content Accessibility Guidelines (WCAG) and Universal Design for Learning (UDL), educators can better evaluate and select AI tools that aim to be inclusive, equitable, and effective for every learner.
Web Content Accessibility Guidelines (WCAG)
The Web Content Accessibility Guidelines (WCAG) are a set of rules that help make websites and digital tools easier for everyone to see, hear, understand, and use—especially people with disabilities.
Perceivable
Every learner should be able to use at least one of their senses—like seeing, hearing, or touching—to access the content in a way that works for them.
- Can all visual content (images, graphs, videos, etc.) be understood without seeing it?
- Can a screen reader access and interpret the tool’s content accurately?
- Is audio or video content accessible to users who are deaf or hard of hearing?
- Does the tool rely on color, sound, or animation alone to convey meaning?
Operable
Every learner must be able to navigate and interact with the tool using various input methods.
- Can all features be accessed and used with just a keyboard (not a mouse)?
- Does the tool provide clear and consistent navigation mechanisms?
- Does the tool give users sufficient time to read and respond?
- Are interactive elements labeled clearly?
- Does the tool support error prevention and recovery?
Understandable
The tool should look and work the same way throughout, be easy to follow, and use language that makes sense for your audience. This helps every learner understand and use it more easily.
- Is the language of the content and interface clearly defined and appropriate for the intended users?
- Are instructions, labels, and error messages clear and unambiguous?
- Does the tool operate in a predictable way?
- Are users notified before any context changes automatically?
Robust
The tool should work well with different tools and devices—like screen readers, browsers, or phones—so every learner can use whatever technology works best for them.
- Is the tool compatible with a wide range of assistive technologies?
- Does the tool function properly across different devices and browsers?
- Does the tool continue to be accessible after updates or changes?
- Has the tool been tested or reviewed for accessibility by people with disabilities or experts?
Universal Design for Learning
Universal Design for Learning is a framework that helps educators create lessons and tools that work for all students by offering different ways to learn, show what they know, and stay engaged.
Multiple Means of Engagement
- The tool allows for personalization based on student interests, strengths, or learning needs.
- The tool offers meaningful choices in how students engage with tasks and content.
- Feedback is timely, constructive, and supports student persistence.
- Students can set goals, monitor their progress, and reflect on their learning within the tool.
- The tool minimizes distractions and supports focus through a calming, clutter-free interface.
Multiple Means of Representation
- Content is available in varied formats, including text, audio, visuals, and/or video.
- The appearance of content (e.g., font size, color contrast, layout) can be adjusted for readability.
- Language supports are built in, such as translation, simplified text, or multilingual options.
- Vocabulary, symbols, and complex concepts are clarified through built-in scaffolds or supports.
- The tool is compatible with screen readers and follows accessible layout and structure practices.
Multiple Means of Action and Expression
- Students can interact with the tool using different input methods (e.g., typing, voice, touch, switch access).
- The tool accepts varied forms of student work, such as writing, speaking, drawing, or multimedia responses.
- Supports like spell-check, speech-to-text, or graphic organizers are built into the tool.
- The tool works with assistive technologies already used by students (e.g., screen readers, AAC devices).
- Students can control the pace of their learning and revisit instructions or tasks, as needed.
Tools that prioritize equity are designed with diverse learners in mind, protect student data, and provide the necessary support for educators to use them effectively and ethically. Use the criteria below to evaluate whether the tool centers inclusion, accessibility, and responsible implementation:
- The tool has been developed or reviewed with input from students with disabilities and diverse backgrounds.
- AI-generated content is inclusive, culturally responsive, and free from harmful bias.
- The tool complies with privacy laws (FERPA, COPPA) and protects student data.
- Accessibility documentation (e.g., VPAT, WCAG conformance) is available and transparent.
- Training, onboarding, and ongoing support are available for educators and students.
- The tool is financially accessible and offers equitable implementation options for schools.
Artificial Intelligence Tool Decision Tree
Will this AI tool help all my students learn, connect, and succeed?
This decision tree is designed to help K–12 and higher education educators evaluate whether an AI tool shows key indicators of accessibility and universal design. It supports consideration for a wide range of learners, including students with disabilities, multilingual learners, and other marginalized populations. The tool is grounded in the POUR principles of accessibility (Perceivable, Operable, Understandable, Robust) and Universal Design for Learning (UDL) standards. Educators should also consider how well the tool aligns with the Web Content Accessibility Guidelines (WCAG) to ensure digital equity and inclusion in the learning environment.
START: Can Students Access It? (Perceivable)
Make sure every learner can see, hear, or understand the content.
- Can students use a screen reader or hear text read out loud?
- Are videos captioned? Are images explained (with alt text or voice)?
- Can students change font size, color, or background?
- Is the screen calm and not too busy or distracting?
- Are instructions clear and free of jargon?
If not: Add supports or pick another tool.
If yes: Can Students Use It? (Operable)
Make sure every student can navigate and control the tool.
- Can students use it without a mouse (with just a keyboard or switch)?
- Can students use voice, touch, or typing to interact?
- Can students pause or go back, if needed?
- Are buttons and links easy to find and labeled clearly?
If not: Adjust settings or offer an alternate version.
If yes: Can Students Understand It? (Understandable)
Make sure it’s easy to follow and helps students stay focused.
- Is the language age-appropriate?
- Are directions simple and easy to find?
- Is feedback kind, helpful, and specific?
- Can students set goals, track progress, or get encouragement?
If not: Rethink the tool.
If yes: Will It Work for Everyone, Every Time? (Robust)
Make sure the tool is safe, fair, and works on different devices.
- Has it been tested with students who have disabilities?
- Does it work on phones, tablets, and laptops?
- Does it protect student data (like FERPA)?
- Is it free or affordable for your school?
- Are there educator guides or tutorials available?
If not: Ask for help or find a more inclusive tool.
15–18 “Yes” answers = Strong Accessibility
The tool demonstrates strong indicators of universal design and may be a good starting point for supporting diverse learners, including students with disabilities. Don’t forget to work with your school- or district-based technology representative to consider the WCAG 2.2 standards.
10–14 “Yes” answers = Moderate Fit
The tool shows good potential but may need targeted adjustments within the classroom to better serve all students. Consider how it could be improved to reduce barriers and support student variability.
0–9 “Yes” answers = Limited Fit
The tool may present barriers for many students. Reconsider use or explore ways to significantly adapt the tool.
4.5: AI for Multilingual Learners
Using AI Tools to Enhance Language Learning for Emergent Multilingual Learners
Today, the United States has 5.3 million multilingual learner students (accounting for more than 10 percent of the school-aged population), and 55 percent of educators have at least one multilingual learner (ML) in their classroom. This makes it critical for all educators to become familiar with artificial intelligence (AI) tools and the role they play in supporting ML students, who are concurrently learning the English language and core academic content, in achieving academic success.
AI can generate personalized learning materials, such as worksheets, quizzes, and reading assignments, aligned with students' learning goals, strengths, and interests. AI analytics can provide educators with insights into student progress and areas that require additional support, allowing educators to provide targeted interventions during small-group instruction. AI-powered tools can enable text-to-speech and speech-to-text capabilities, ensuring that all students, including multilingual learners and those with visual or hearing impairments, can access educational content. AI-driven language translation tools can break language barriers, making academic content accessible to multilingual learners and allowing full engagement in their learning process.
However, educators are still seeking guidance on how to harness AI's power to ensure equitable access, enhance digital literacy, and prepare all students to thrive in an AI-driven world, all while navigating the AI technology increasingly permeating their personal and professional lives. Educators can meet these needs by engaging with existing professional development opportunities designed to help them manage increasingly complex learning environments, meet diverse student needs, and develop the essential skills to integrate AI effectively and ethically while fostering AI literacy in their classrooms.
Finally, it cannot be stressed enough: throughout this brief, AI is treated not as a replacement for educators but as a powerful ally in the pursuit of educational excellence.
Getting Started: Using AI Technology in the Classroom
Many AI program developers recommend that educators interested in understanding and leveraging AI tools in their practices gradually become familiar with these technologies before introducing them into the classroom, so they can understand the tools' strengths and weaknesses. Once educators develop proficiency in using AI-assisted tools, such as OpenAI's ChatGPT, they can introduce these tools gradually as part of their lesson plans.
In November 2022, OpenAI launched ChatGPT, a program ML educators have been using to develop English language development (ELD) lessons designed to help ML learners acquire proficiency in reading, writing, listening, and speaking English. These lessons are tailored to meet the needs of younger students learning English as a second language. Similarly, secondary school educators have used this technology to develop lessons that provide comprehensible input (input slightly beyond learners' current proficiency level, which supports language acquisition) and build communicative competence (using language appropriately in various social contexts) while concurrently enhancing English language development opportunities for ML students. Both scenarios aim to support language acquisition in a structured, supportive, and engaging way.
Educators can use the AI tool Perplexity to support ML students as they learn new content. Perplexity, a free AI-powered search engine, can turn challenging text into easily digestible content and identify key details on a chosen topic. It searches the internet in real time, identifies relevant sources, and generates high-quality answers. Perplexity prioritizes accuracy and reliable answers, aiming to serve as a definitive source for knowledge discovery. Unlike a standard Google search, Perplexity distills the content, significantly reducing the time it takes to process the various resources. This enables students to read a few paragraphs on the topic, helping MLs learn the content more efficiently by using their more proficient language.
The great thing about AI-assisted tools, like ChatGPT and Perplexity, is that you don’t need to understand how they work to use them effectively.
Unintended Consequences of Potential Overdependence on ChatGPT Among Emergent Multilingual Learners
While ChatGPT can be a helpful resource, educators who excessively rely on the AI tool without careful integration into learning processes may face various challenges. It is also imperative for educators to understand the potential impact that overdependency on these technologies can have on their students' overall language development.
Below are six key indicators of overdependency on ChatGPT to consider.
1. Diminished Language Acquisition Skills
When MLs rely too heavily on ChatGPT for translations or explanations, they may miss out on the crucial cognitive processes involved in acquiring grammar, vocabulary, and pronunciation. Language learning thrives on active engagement, practice, and the ability to make mistakes. In addition, MLs may become more passive in their language practice, relying on AI to produce responses or written work, which reduces the opportunities for productive language output (speaking and writing).
2. Cognitive Overload or Dependence
If MLs use AI to answer questions or translate text without fully understanding the material, they may fail to develop the problem-solving and critical thinking skills necessary for mastering a new language. Continuous dependence on AI for every linguistic challenge may erode MLs’ self-confidence in communicating and solving problems independently.
3. Loss of Contextual and Cultural Understanding
Language is deeply tied to culture, and AI translation, while powerful, often oversimplifies and does not always accurately capture nuanced cultural meanings, idiomatic expressions, or regional variations in language. This may lead to misunderstandings or the loss of crucial cultural context. It also means MLs may miss out on the rich, contextual nuances that come with real-life interactions and cultural immersion.
4. Lack of Personalized Support
AI may fail to address specific learning gaps or personal preferences that only a human educator can identify, and it may not consistently tailor responses appropriately to an ML’s unique challenges. Furthermore, AI tools lack the emotional intelligence and social sensitivity required to fully support MLs’ psychological and emotional needs, which are often critical when acquiring a new language.
5. Impeded Peer Interaction and Collaboration
Language learning is often most effective when MLs engage in collaborative dialogue. Overdependence on ChatGPT may decrease opportunities for peer-to-peer communication and cooperative learning, which are vital for social language acquisition. It may also impede the opportunity for real-world interactions with native speakers, which is essential for honing their language skills.
6. Missed Opportunities for Educator-Learner Interaction
Overreliance on AI tools might mean MLs are less likely to seek feedback or clarification from educators, which can stymie growth and cause them to miss essential learning opportunities. Educators are critical in guiding students through language learning, offering support, and creating a nurturing environment. MLs relying too heavily on AI may miss the personalized, empathetic support that only human educators can provide.
Mitigating These Risks
To avoid these unintended consequences, educators should integrate ChatGPT as a complementary tool rather than a crutch. This can be done by:
- Encouraging balanced use of AI alongside human interactions and authentic language exposure;
- Promoting critical thinking by asking learners to evaluate AI-generated responses and use them as starting points for further exploration; and
- Fostering social, real-world language practice through peer collaboration, community involvement, and educator-student interactions.
In summary, while ChatGPT can be a valuable tool for enhancing language learning, overdependence on it can hinder key aspects of language development. AI should be used as part of a balanced and varied approach that includes authentic, real-world practice and critical thinking; its role should always be to enhance, not replace, the core experiences that foster deep language learning. This equilibrium ensures that technological advancements support, rather than overshadow, the vital educator-student relationship.
4.6: Partner Resources
TeachAI Resources
Sample Guidance on the Use of AI in Schools
As your school develops and customizes guidance on the use of AI, check out this sample toolkit from TeachAI to craft thoughtful policies and practices on responsible use. You can also use this resource to inform national and state/regional guidance.
Principles for AI in Education
TeachAI provides seven guiding principles that are essential considerations as schools develop AI guidelines. Each principle includes a description, questions to discuss and consider, and real-world examples.
Sample Letter to Parents and Guardians
TeachAI—in partnership with the National Parents Union—developed this sample letter on the use of AI that you can customize and share with parents and guardians. The letter is meant to engage families in the vision and recommendations of the use of AI in schools.
Sample Student Agreement
TeachAI drafted a sample student agreement on the responsible use of AI. By having your students sign such agreements, you can promote appropriate practices and establish clear guidelines.
ISTE Resources
As an educator in an increasingly technology-driven world, it is essential that you have a foundational understanding of AI. This guide includes definitions of AI and generative AI, guiding questions for educators, strategies for success, and examples of AI generative tools.
Setting Conditions for Success: Creating Effective Responsible Use Policies in Schools
You can use this resource to help craft responsible use policies that inform safe and healthy digital culture at school and at home. Responsible use policy templates for both elementary and secondary schools are included, which you can adapt and adjust to fit your school culture and goals.
5 Tips for Creating a District Responsible Use Policy
ISTE highlights five tips you should consider when creating a responsible use policy, emphasizing the positive aspects of learning with technology and keeping students safe online.
Additional AI Resources
What does it mean to be “AI Ready”? With the AI Readiness Framework from aiEDU, you can learn the basics of AI, including how to use it ethically, critically evaluate its function, and responsibly leverage it in your professional and personal endeavors.
Our partners at Common Sense “know that successful AI is built with responsibility, ethics, and inclusion by design.” Each AI Risk Assessment provides a snapshot of opportunities, considerations, and limitations of an AI product.
This guide from TechTonic Justice can help you figure out if AI is being used to make decisions about you and what you can do about it.
The AI Assessment Scale (AIAS), developed by Leon Furze and collaborators, is a framework to help you integrate generative AI into educational assessments. The AIAS offers a structured yet flexible approach to determine the appropriate level of AI involvement in student assessments to align with specific learning outcomes.
AI Policy Resources
5: AI Policy Resources
Take action with policy tools tailored for educators and education leaders. These resources include overviews of key regulations, model school board policies and resolutions, and templates for engaging with state leaders. You can also find a real-time tracker of AI-related education policy nationwide from an NEA partner.
5.1: Overview of Federal Regulations
Federal Regulations Related to Artificial Intelligence
The United States does not have a comprehensive federal law covering data privacy; instead, various federal and state laws cover specific types of data, such as financial or health information. Several states, beginning with California and Virginia, have enacted comprehensive state privacy laws.
In recent years, two major federal legislative proposals—the American Privacy Rights Act and the American Data Privacy and Protection Act—surfaced, both aiming in different ways to address data privacy, algorithm transparency, and other concerns in a comprehensive manner. While these proposals are not likely to pass any time soon, it is encouraging to see substantive, high-quality policy proposals circulating.
Related to the data privacy of students, there are currently two federal laws worth mentioning.
The Family Educational Rights and Privacy Act of 1974 (FERPA)
FERPA is the federal law that protects the privacy of student education records and applies to all schools and education agencies that receive funds under an applicable program of the U.S. Department of Education.
The last regulatory updates to FERPA predate the widespread use of technology in learning environments, including the storage of education records, the technological generation of records, and the use of technology to support and assess students. School districts and education institutions subject to FERPA must therefore interpret how the law applies to the ways data is accessed, used, and stored in light of artificial intelligence. For instance, using a program to detect AI usage may require students’ work to be processed by an outside third party, which may violate FERPA.
In 2023, UC Santa Cruz issued guidance warning that services purporting to detect AI use in assignments should not be employed without the disclosure and consent required under FERPA, unless certain preconditions are met: the service has been purchased and vetted by the institution, or the tool is “protected from external access.”
Some of the key components of FERPA as it relates to schools include the following:
- Parents and guardians have rights to access, review, and request amendments to their child's education records until the student turns 18 or enters post-secondary education.
- Schools must obtain written consent before disclosing personally identifiable information (PII), with certain exceptions (e.g., health/safety emergencies, transfers, legal requirements).
- Education records include items like grades, disciplinary records, and transcripts, while directory information (e.g., name, grade level) can be shared unless parents and guardians opt out.
- Educators and schools must protect student data, including when using digital tools and educational apps, ensuring vendors comply with FERPA rules.
- Annual notifications to parents and guardians are required about their FERPA rights, with the ability to opt out of directory information sharing.
The Children’s Online Privacy Protection Act (COPPA)
COPPA sets specific requirements for operators of websites or online services that knowingly collect personal data from children under the age of 13. Primarily, it requires direct parental or guardian notification and parental or guardian consent for the collection of these children’s personal information and allows parents and guardians to control what happens to this data. It establishes that companies that collect this information must have clear policies for what information is collected and how it is secured.
COPPA aims to safeguard young students' personal information from being collected and used without parental or guardian consent, thereby enhancing their online privacy and safety.
Schools often use various online educational tools and platforms. In certain situations, schools can provide consent on behalf of parents and guardians for the collection of students' personal information, particularly when the data is used solely for educational purposes. This places a responsibility on educators and administrators to ensure that the digital tools they employ comply with COPPA regulations and adequately protect student data.
In January 2025, the Federal Trade Commission (FTC) finalized amendments to the COPPA rule to address evolving digital practices and enhance children’s online privacy protections. Key updates include:
- Separate Parental or Guardian Consent for Data Disclosure: Operators are now required to obtain distinct verifiable parental or guardian consent before disclosing a child's personal information to third parties. This change aims to give parents and guardians more control over their children's data, especially concerning targeted advertising practices.
- Data Retention and Deletion Policies: The amendments mandate that operators retain children's personal information only as long as necessary to fulfill the purpose for which it was collected. They must establish and disclose data retention policies and are prohibited from retaining such information indefinitely.
- Enhanced Oversight of Safe Harbor Programs: The FTC introduced stricter requirements for COPPA Safe Harbor programs, including more detailed reporting and public disclosure of member operators. This measure aims to increase transparency and accountability among organizations that self-regulate under COPPA guidelines.
These updates underscore the importance for educators and school administrators to stay informed about COPPA regulations. Ensuring that the educational technologies and online services used within schools comply with these enhanced privacy protections is crucial to safeguard student data effectively.
Higher Education
Unfortunately, few federal regulations govern AI directly at the higher education level. Unlike the Pre-K–12 level, where a local or state education agency or school district may issue guidance on AI, higher education institutions and systems vary widely in their policies. While there are a few examples of university system offices issuing guidance, for the most part, AI guidance is issued by individual institutions or, in many cases, varies from department to department within one school.
At the higher education level, a greater degree of academic freedom and autonomy is given to faculty to decide what is taught and how. Attempts at restricting course content or methods could be seen as violating academic freedom. Since most students at higher education institutions are legally adults, there is also a greater degree of control and autonomy given to the student in terms of their own data.
With that said, there are some federal laws that would still apply at the higher education level, including the following:
- The Family Educational Rights and Privacy Act (FERPA) still applies at the higher education level. Students 18 years and older must consent to the disclosure of their education records. Under FERPA, AI tools that use education records and related personally identifiable information must have students’ consent.
- Disclosure of medical “treatment records,” including mental health records, is governed under FERPA.
- Laws that prevent bias and discrimination, like Title IX and Title VI of the Civil Rights Act, should guide the creation, use, and applications of AI products to minimize bias and ensure that it does not lead to discrimination against protected classes.
Intellectual Property and Research
U.S. copyright law, enshrined in Title 17 of the U.S. Code, governs copyright and ownership of created works, including journal articles, books, and other covered intellectual property.
Recent guidance from the U.S. Copyright Office maintains that human-created works, like journal articles or books, are governed under Title 17 and that “copyright protects the original expression in a work created by a human author.” It further stated, however, that copyright “does not extend to the purely AI-generated material.” The Copyright Office has indicated that guidance on AI data-scraping of copyrighted material will be forthcoming.
5.2: School Board Resolution on AI in Education
Sample Resolution on Artificial Intelligence Issues
WHEREAS, artificial intelligence (AI) technologies are being rapidly adopted and expanded across numerous industries and trades; and
WHEREAS, K–12 and higher education are not immune from the impacts of AI; and
WHEREAS, the [DISTRICT] (hereinafter, “District”) understands the potential benefits to teaching and learning that carefully implemented AI technologies can have on students and educators, including personalization of instruction, automation of routine tasks, tutoring, aiding collaboration and creativity, content creation and enhancement, and high-speed data analysis; and
WHEREAS, the District understands the risks and pitfalls associated with AI technologies, including infringement on intellectual property rights, digital equity and access concerns, contributions to misinformation, concerns around data privacy, racial and cultural biases, bullying, plagiarism, threats to jobs, and more; and
WHEREAS, the District believes that AI technologies should only be implemented with proper human oversight and the decision-making power primarily in the hands of educators and administrators; and
WHEREAS, the District believes that AI technologies should be leveraged to enhance and enrich instruction for students and should not be used to replace or limit employment of education professionals who work with students; and
WHEREAS, the equitable use and implementation of AI technologies in education must be regularly and fairly evaluated with educators at the decision-making table; and
WHEREAS, the District understands that educators will need ongoing and regular professional learning and development in the implementation of AI technologies to build AI literacy skills among students, educators, and families; and
WHEREAS, the District wants to mitigate the risks and pitfalls of AI technologies while maximizing the benefits toward optimal teaching and learning experiences for all district students and educators;
NOW, THEREFORE, BE IT RESOLVED, on [DATE] of [MONTH, YEAR], by the [SCHOOL DISTRICT GOVERNING BOARD] (hereinafter, “Board”), that the District shall develop a comprehensive set of guidelines related to the research, procurement, piloting, and ongoing evaluation of AI technologies within [TIMELINE], including, but not limited to:
- The convening of an advisory committee, inclusive of educators, to determine AI technology needs and provide guidance and support across the District;
- A process for assessing the risks and pitfalls of AI technologies, including, but not limited to, infringement on intellectual property rights, inequitable technology access, contributing to misinformation, concerns around data privacy, racial and cultural biases, bullying, plagiarism, and threats to jobs;
- Protocols to hold vendors accountable for addressing the aforementioned risks and pitfalls of AI technologies;
- Requirements for extensive testing of AI technology pilots with diverse users to evaluate effectiveness and risks before district-wide implementation;
- Policies for transparent disclosure of what data will be collected and how it will be used to all students, parents, families, and staff;
- Ongoing professional development for staff on all AI technologies to be implemented, the equitable and ethical use of AI aligned with district values, and the promotion of AI literacy among students;
- A process for ongoing auditing, monitoring, and evaluation of AI benefits and impacts on students and staff;
- Channels for feedback and concerns about AI practices from students, parents, families, and staff; and
- A budget for costs of the use of and training toward AI systems and ensuring its responsible use;
BE IT FURTHER RESOLVED that the development of AI technologies and guidelines shall continuously and regularly engage a diverse set of interest holders, inclusive of students, families, educators, educator unions, and community members.
5.3: School Board Policy on AI
Sample Policy on Artificial Intelligence Issues
Purpose
The [DISTRICT] (hereinafter, “District”) is committed to providing students with the most innovative and effective educational experiences to foster high levels of learning and opportunities for self-expression. As schools prepare students for a future that demands adaptability, critical thinking, and increased digital literacy, the District recognizes the potential of artificial intelligence (AI) and other related technologies.
This policy establishes guidelines for the ethical, equitable, and effective integration of artificial intelligence technologies in district schools to enhance both administrative functions and teaching and learning for all students and educators.
Guiding Principles
The use of AI in the District will ensure that:
- Students and educators remain at the center of education;
- Evidence-based AI technology enhances the educational experience;
- AI technology is developed and used ethically, with strong data protection practices;
- Access to and use of AI tools are equitable; and
- Ongoing education with and about AI is provided for all students and educators.
Definitions
ALGORITHMIC BIAS: “Systematic, unwanted unfairness in how a computer detects patterns or automates decisions,” often based on characteristics and identities such as age, class, culture, disability status, ethnicity, gender, location, nationality, political affiliation, race, religious background and practices, and/or sexuality.
ARTIFICIAL INTELLIGENCE (AI): Machine-based systems designed around human-defined objectives to perform tasks that would otherwise require human or animal intelligence.
AI LITERACY: Understanding what it means to learn with and about AI while gaining specific knowledge about how artificial intelligence works, the skills necessary to master AI tools, and how to critically navigate the benefits and risks of this technology.
DATA GOVERNANCE: A set of practices that ensures data assets are formally managed throughout a system/enterprise and that defines the roles, responsibilities, and processes for ensuring accountability for and ownership of data assets.
EDUCATORS: People employed by an institution dedicated to pre-K–12 or higher education.
GENERATIVE AI: Artificial intelligence tools that generate text, images, videos, or other content based on existing data patterns and structures.
TRANSPARENCY: Open disclosure of how AI systems work, including how they reach decisions and the data used to do so.
Equitable Access
The District shall ensure all students and staff have equitable access to AI tools, irrespective of gender, ethnicity, disability status, socioeconomic status, geographic location, or displacement status. The District shall provide assistive AI technologies to support diverse learning needs, including accommodations for students with disabilities according to the Individuals with Disabilities Education Act (IDEA).
Algorithmic Bias and Fairness
AI tools and systems utilized in the District shall undergo regular audits to identify and mitigate biases. Oversight committees, inclusive of educators, shall be established to review AI implementation efforts for unintended biases and to ensure alignment with district equity goals.
Student and Educator Data Privacy
The District shall adhere to all federal and state laws regarding student and staff data privacy. Only vendors with thorough data protection practices shall be used for District purchases. The District must inform educators, parents, and students about which AI tools are used in schools and how. Data collected through AI shall be subject to protocols providing transparency about the types of data collected and how the data is stored, utilized, and protected.
Vendor and Tool Selection
The District shall require all vendors of AI tools and resources to meet district standards for transparency, equity, and ethical decision-making. AI tools and resources shall only be adopted once there is data supporting a tool’s appropriateness and efficacy with potential users and, for instruction-focused AI, its alignment with high-quality teaching and learning standards and practices. If research is unavailable, AI tools may be adopted on a pilot or trial basis, provided evidence is collected and analyzed in a timely manner and an agreement is in place to cease use of the technology if the research does not show the intended benefits. AI tools and resources made in collaboration with educators should be prioritized.
Professional Learning Opportunities
The District shall provide educators with high-quality, multifaceted, ongoing professional learning opportunities that help them increase their AI literacy and understand what, how, and why specific AI is being used in their educational settings. Learning opportunities must be provided to educators in all positions and at all career stages. Special attention should be paid to using AI appropriately with all learners, including early learners, students with disabilities, and emergent multilingual learners. Learning opportunities shall assist educators in researching and assessing available evidence about effective AI uses in education; understanding AI bias and knowing strategies for reporting and mitigating its harmful impacts; and understanding the ethical and data privacy hazards associated with AI.
AI Literacy and Curriculum Integration
The District shall take steps to ensure all students and educators become fully AI literate and develop a sense of agency with these technologies. Curricular changes should be made to incorporate AI literacy across all subject areas and education levels so that all students understand the benefits, risks, and effective uses of these tools.
Continuous Improvement
The District shall establish an AI oversight committee to monitor AI use, address interest holder concerns, and recommend improvements. The District shall conduct, at minimum, an annual evaluation of AI tools and practices to ensure they meet district goals. The District shall regularly engage educators, students, families, and local associations through workshops, surveys, and forums to gather input on AI use in schools.
5.4: Sample Letter to State DOE
Sample Letter to State Department of Education for Chief Privacy Officer
[Your Name]
[Your Address]
[City, State, ZIP Code]
[Your Email]
[Your Phone Number]
[Date]
[Recipient's Name]
[Title]
[State Department of Education]
[Department Address]
[City, State, ZIP Code]
Subject: Request for the Appointment of a Chief Privacy Officer to Protect Student and Educator Data Privacy
Dear [Recipient's Name],
I am writing to formally request that the [State Department of Education] appoint a Chief Privacy Officer who is authorized and adequately resourced to oversee and protect student and educator data privacy across our state’s Pre-K–12 schools and institutes of higher education. As artificial intelligence (AI) and other digital tools become increasingly embedded in our classrooms, ensuring the security and ethical use of sensitive data is imperative.
Educational technology platforms collect vast amounts of personally identifiable information from students and educators, raising concerns about data security, third-party access, and potential misuse. Without strong leadership and oversight, these risks could undermine trust in digital learning environments and expose our schools to data breaches and noncompliance with federal and state privacy regulations, such as FERPA and COPPA.
A dedicated Chief Privacy Officer would play a critical role in:
- Establishing and enforcing statewide data privacy policies for all education institutions;
- Ensuring compliance with state and federal privacy laws;
- Providing guidance and training to educators, administrators, and policymakers on data security best practices;
- Investigating and responding to data breaches or privacy concerns in a timely manner; and
- Collaborating with stakeholders to develop responsible AI policies and ethical data practices.
By appointing a Chief Privacy Officer, [State] would demonstrate a strong commitment to protecting student and educator data while fostering a safe and responsible digital learning environment. I urge you to take action on this matter and prioritize data privacy within our schools.
Thank you for your time and consideration. I would appreciate the opportunity to discuss this issue further and support any initiatives that ensure stronger data protections for our schools. Please feel free to contact me at your convenience.
Sincerely,
[Your Name]
[Your Role/Title]
[School/District Name]
[Contact Information]
5.5: Partner Resource
EDSAFE AI Alliance shares a collection of policy-tracking tools across the education sector that track state, federal, and international policy. With easy access to these resources, you can explore the intricate nature of AI in education to ensure you are making decisions that center around equity, safety, and ethical considerations.
Additional Topics in AI and Education
6: Additional Topics in AI and Education
Explore a growing collection of insights on emerging and intersecting issues in AI and education. These resources highlight key considerations shaping the responsible use of AI in learning environments.
6.1: Student and Educator Data Privacy
As artificial intelligence (AI) becomes more integrated into education, ensuring data privacy and security for students and educators is essential. AI-powered tools collect vast amounts of information, including highly sensitive data such as student health records, Social Security numbers, and families’ credit card data.
Without safeguards, these data can be vulnerable to breaches, misuse, or unethical surveillance. Schools and districts must implement strict data governance policies that comply with federal regulations. Educators should also teach students about digital literacy, emphasizing responsible data sharing and cybersecurity best practices. By protecting data privacy, educators can foster a safe learning environment, build trust in AI technologies, and ensure that AI enhances student learning without compromising individual rights.
Given that AI cannot operate without data—and often, very large amounts of highly sensitive data—the growing prevalence of these tools further exposes education institutions to data privacy and security threats. Education institutions hold unique datasets that include highly sensitive data on both students and their families, making them vulnerable to cybercriminals. Higher education institutions also are more likely than entities in other sectors to pay a ransom.
Due to the vast amount of data available and the lack of coordination among federal agencies and the education community, the education sector has become a target for cybercriminals. One cybersecurity firm estimates that the minimum number of U.S. pre-K–12 districts that were impacted by ransomware more than doubled from 45 in 2022 to 108 in 2023. Among the 108 districts, 77 had data stolen, affecting 1,899 schools. Threats against higher education institutions also jumped, from 44 in 2022 to 72 in 2023, with 60 having data stolen. Combining the pre-K–12 and higher education data, the education sector outpaces both health care and government in terms of data security threats. A similar survey conducted worldwide found that an astounding 80 percent of pre-K–12 providers and 79 percent of higher education institutions experienced ransomware attacks, costing millions of dollars in recovery costs.
Transparency is instrumental in protecting students and educators from data harms. To ensure transparency, educators at all levels must be involved in the decision-making process regarding AI vetting, adoption, and deployment. Additionally, school districts and postsecondary institutions should inform students, educators, and families about which AI technologies are implemented, the intended benefits of those tools, the data they require, and the protocols in place to collect, store, and utilize those data. In states with collective bargaining rights, educator contracts should include provisions for data privacy and security.
Considerations for Educators
As AI tools become more common in schools, educators need to consider how these tools are used to protect students’ and their own privacy and promote responsible learning environments. You can reduce the risk of data privacy issues and implement AI in a way that aligns with educational goals and values by:
- Understanding student privacy laws (e.g., FERPA, COPPA) and reviewing what data AI tools collect, how it is stored, and who has access;
- Informing students, parents/guardians, and administrators about AI tools being used;
- Implementing secure access controls, such as strong passwords and two-factor authentication;
- Being aware of potential biases in AI algorithms that could impact student outcomes;
- Educating students on the importance of data privacy; and
- Ensuring AI supports, rather than replaces, critical teaching and decision-making processes.
Data Breach Response Checklist for Educators
In the event of a data breach or misuse of information at school, it is critical for educators to respond quickly to protect student and educator privacy. The following checklist provides essential steps to guide you through this process:
1: Report Immediately
- Notify school leadership, IT, or data protection officer.
- Follow your school or district’s breach policy.
2: Contain the Breach
- Disconnect affected devices.
- Prevent further access to compromised systems or data.
3: Preserve Evidence
- Do not delete or modify anything.
- Record what happened and when.
4: Document the Incident
- What was exposed?
- How was it discovered?
- Who was informed?
- What actions were taken?
5: Support the Investigation
- Cooperate with IT and administration.
- Provide full, honest information.
6: Communicate Carefully
- Only share details if authorized.
- Help notify affected individuals.
7: Learn and Improve
- Participate in post-incident review.
- Suggest or take part in additional training.
8: Follow Laws and Policies
- Know applicable data laws, such as FERPA.
- Stick to school policies and legal obligations.
6.2: Environmental Impact of AI
One of the major takeaways from the U.S. Global Change Research Program’s Fifth National Climate Assessment from fall 2023 is that the United States is warming faster than the rest of the world due to human activity. Negative impacts of climate change have undue and unequal consequences on Native People and People of Color, under-resourced urban and rural communities, people with disabilities, and girls and women. It is important that decision-makers and policymakers acknowledge, consider, and confront the environmental impacts of artificial intelligence and cloud technology.
“In the race to produce faster and more-accurate AI models, environmental sustainability is often regarded as a second-class citizen,” noted University of Florence Assistant Professor Roberto Verdecchia.
Although these technologies operate in virtual spaces, AI and the cloud will intensify greenhouse gas emissions, consume increasing amounts of energy, and require larger quantities of natural resources.
Research suggests…
- A single generative AI text query consumes four to five times as much energy as a typical search engine request.
- Generating a single image with AI consumes about as much energy as fully charging a smartphone.
- Training one large AI model produces nearly five times the lifetime emissions of the average American car.
- Data centers—giant warehouses filled with endless rows of computer servers that are continuously working to complete tasks—used 4 percent of total U.S. electricity in 2023, and that share is expected to rise to 7–12 percent within the next three years.
While it is nearly impossible for researchers to evaluate the full extent of the negative environmental impacts of AI technologies, decision-makers in education settings should be mindful of these impacts throughout the planning and implementation phases.
Considerations for Educators
As you make decisions about the use of AI technologies in your classroom, it is important to consider their environmental impact:
- Frequent AI-powered tasks—such as automated grading, image generation, adaptive learning, and chatbot interactions—consume considerable energy and contribute to the need for more data centers in communities.
- You should be aware of the carbon footprint associated with AI tools and advocate for sustainable options.
- Where possible, schools and universities should adopt policies that prioritize energy-efficient AI models and cloud technologies powered by renewable energy.
- You should teach your students about the environmental impact of AI as part of learning around digital literacy.
- When you discuss AI ethics with students, you should include sustainability and responsible AI usage.
- When you create assignments and projects, you should encourage your students to explore energy-efficient AI alternatives, when possible.
- AI should complement, not replace, traditional teaching methods. Hybrid approaches that combine AI-driven personalization with resource-efficient teaching will help mitigate environmental costs.
This section was created in spring 2025. It will be updated as further research on this topic becomes available.