Hello everyone, hope you are all well on this cold winter's day! I wanted to talk about something that has come up in conversation and is a concern not only for IT professionals working in Education, but for everyone in the field: Artificial Intelligence. When we put these two things together, we often get a mixed reaction of both favour and disagreement. I think it's important to discuss the matter in more detail and set out where I stand. In this post, I'll share my views on AI in the Education sector, and how AI such as Copilot can be used responsibly to promote learning and trust between instructors and students.
So what is Responsible AI exactly? Well, the meaning is in the name itself: it is the practice of using Artificial Intelligence responsibly, in a fashion that protects business interests, employees, and the organization as a whole. In Education, however, this definition shifts slightly: we want AI not only to behave ethically, but also to enhance the learning experience, promote inclusivity, and safeguard student data and academic integrity. It's about fostering trust in these powerful tools and ensuring they are used to empower both educators and learners.
I've found that instructors (being one myself in a private capacity) fall into two camps: for AI and against AI. Each camp can be broken down further by reasoning, which I'll highlight below:
- For Artificial Intelligence
- Improve instructor workflows and eliminate menial tasks: AI can automate repetitive tasks like grading multiple-choice assignments, generating initial drafts of lesson plans, or even scheduling meetings. This frees up instructors' time for more important aspects of teaching, such as personalized student interaction and curriculum development.
- Provide additional insights and perspectives that may not be seen by the instructor: AI can analyze student data to identify patterns and trends that might not be immediately obvious to an instructor. This can help identify students who are struggling, pinpoint areas where the curriculum could be improved, or even suggest personalized learning paths for individual students. It can also offer alternative viewpoints on complex topics, enriching classroom discussions.
- Allow for the enrichment of educational content: AI can be used to create more engaging and interactive learning experiences. For example, AI-powered tools can generate personalized quizzes, create interactive simulations, or even translate educational materials into different languages.
- Promote responsible use of AI amongst students and faculty: By integrating AI tools into the classroom in a thoughtful way, educators can teach students about the ethical implications of AI and how to use these powerful tools responsibly. This is crucial for preparing students for a future where AI will play an increasingly important role.
- Against Artificial Intelligence
- AI can introduce bias that can affect the independence of research: AI models are trained on data, and if that data is biased, the AI will perpetuate those biases. This can have serious consequences in education, particularly in research settings, where biased AI could lead to skewed results and unfair conclusions. The same applies to grading and assessment if not carefully managed.
- It provides another avenue of academic dishonesty: The ease with which AI can generate text and code raises concerns about plagiarism and cheating. Students might be tempted to submit AI-generated work as their own, undermining the learning process. This necessitates a shift in assessment strategies, focusing more on critical thinking and problem-solving skills rather than rote memorization.
- Concerns about data privacy and security: The use of AI in education often involves collecting and storing student data. This raises concerns about data privacy and security, particularly given the sensitive nature of student information. Institutions need to have robust data governance policies in place to ensure that student data is protected.
- Over-reliance on AI can hinder the development of critical thinking skills: If students become overly reliant on AI for answers and solutions, they may not develop the critical thinking and problem-solving skills they need to succeed in the real world. It's crucial to use AI as a tool to enhance, not replace, these essential skills.
Education
Education is the best defense when it comes to promoting Responsible AI. The pushback from educators often stems from a lack of understanding and the many unanswered questions surrounding AI. Addressing these unknowns through comprehensive training and education is crucial for fostering acceptance and responsible use. Both instructors and students need to be informed and empowered to navigate the world of AI ethically and effectively.
Instructors, in particular, need a solid understanding of how Generative AI works. This includes:
- The underlying technology: A basic understanding of how large language models (LLMs) are trained and function can help demystify the technology and dispel some of the fear and uncertainty.
- Data sources and limitations: Instructors need to know where the AI retrieves its information from, its potential biases, and its limitations. This knowledge is essential for critically evaluating the output of AI tools and ensuring accuracy and fairness. Understanding that the AI isn't a source of truth, but rather a tool, is paramount.
- Ethical considerations: Instructors should be well-versed in the ethical implications of using AI in education, including issues of bias, plagiarism, data privacy, and the potential impact on student learning.
- Practical strategies for responsible integration: Training should cover practical strategies for integrating AI tools into the curriculum in a way that enhances learning without compromising academic integrity. This could include using AI for brainstorming, drafting, or providing feedback, but always with human oversight and critical evaluation. It also includes developing new assessment strategies that focus on higher-order thinking skills rather than simply regurgitating information, which AI can easily do.
- Detection methods (and their limitations): While not a primary focus, instructors should have a general idea of current AI detection methods and, importantly, understand their limitations. The focus should be on how students use AI, not just if they use it. This allows for a shift in the conversation from policing AI use to guiding responsible use.
Students also need education on:
- Understanding AI capabilities and limitations: Students need to understand what AI can and cannot do, and how to critically evaluate its output.
- Ethical use of AI: Students must be educated about the ethical implications of using AI, including plagiarism, bias, and the importance of academic integrity. This includes understanding proper attribution and citation when using AI-generated content.
- Developing critical thinking skills in the age of AI: Students need to learn how to use AI as a tool to enhance their learning, not as a shortcut to avoid critical thinking. This includes developing skills in fact-checking, source evaluation, and forming their own arguments.
- The future of work and AI: Discussions about how AI is changing the landscape of work and the skills needed to succeed in an AI-driven world are essential.
By empowering both instructors and students with the knowledge and skills they need to navigate the world of AI responsibly, we can create a learning environment that embraces the potential of AI while mitigating its risks. This educational approach is the most effective way to ensure that AI is used in a way that benefits everyone in the academic community.
How Microsoft 365 can help
IT professionals can use the Microsoft 365 suite to educate users on Responsible AI. For instructors and staff, you can build a dedicated SharePoint site or page on Responsible AI use. For students and remote workers, hosting webinars through Teams can be a good approach. If your organization has the Viva suite, Viva Engage is a great way to push out resources and short-and-sweet updates. Whatever the medium, Microsoft 365 offers a variety of tools for teaching users to use AI responsibly.
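To make this concrete, here's a minimal Python sketch of pushing a Responsible AI resource into a Teams channel via the Microsoft Graph API. The token, team ID, channel ID, and the SharePoint URL are placeholders you'd fill in for your own tenant, and it assumes an app registration with the ChannelMessage.Send permission.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"     # acquire via MSAL; assumes ChannelMessage.Send permission
TEAM_ID = "<team-id>"        # placeholder: your staff team's ID
CHANNEL_ID = "<channel-id>"  # placeholder: e.g. a "Responsible AI" channel

def post_resource(title: str, url: str) -> None:
    """Post a short Responsible AI resource announcement to a Teams channel."""
    message = {
        "body": {
            "contentType": "html",
            "content": f'<b>New Responsible AI resource:</b> <a href="{url}">{title}</a>',
        }
    }
    resp = requests.post(
        f"{GRAPH}/teams/{TEAM_ID}/channels/{CHANNEL_ID}/messages",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json=message,
        timeout=30,
    )
    resp.raise_for_status()  # surfaces permission or ID mistakes immediately

post_resource(
    "Getting started with Copilot responsibly",
    "https://contoso.sharepoint.com/sites/ResponsibleAI",  # hypothetical site URL
)
```

A script like this can be scheduled to share one resource a week, keeping the conversation visible where staff already spend their time.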
Governance
- DSPM (Data Security Posture Management) for AI: DSPM for AI in Microsoft Purview enables proactive monitoring and tracking of Generative AI usage across the organization. DSPM for AI allows administrators to:
- Understand AI usage: Gain visibility into how AI is being used within your organization. This includes identifying which AI tools are being used, what data is being processed, and who is interacting with these tools (a Graph-based sketch of this kind of visibility follows this list).
- Protect sensitive data: Prevent sensitive data from being used in AI prompts or included in AI-generated outputs. This can be achieved through techniques like data masking, tokenization, or redaction.
- Assess and mitigate risks: Identify potential risks associated with AI usage, such as data leakage, oversharing, or AI-specific threats. Once risks are identified, DSPM for AI provides tools to mitigate them, such as implementing access controls, data loss prevention policies, or AI model security measures.
- Privacy and Compliance Tools: Microsoft provides robust tools to help institutions comply with privacy regulations such as GDPR. These tools help manage and protect personal data, provide detailed insights into data usage and access, and help institutions remain compliant with legal requirements.
- Ethical AI Frameworks: Microsoft has developed frameworks and guidelines to promote the ethical use of AI. These resources help institutions develop their own AI policies, ensuring that AI applications align with ethical standards and best practices, including guidelines on transparency, accountability, and fairness.
- AI Governance Policies: Establishing clear policies for the use of AI is crucial. Microsoft offers templates and best practices for developing AI governance policies that define the roles and responsibilities of all stakeholders, set standards for ethical AI use, and outline procedures for addressing AI-related issues.
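DSPM for AI itself is configured through the Microsoft Purview portal rather than through code, but to illustrate the kind of usage visibility described above, here's a hedged Python sketch using the Microsoft Graph audit log query API. The AuditLogsQuery.Read.All permission and the "copilotInteraction" record type filter are my assumptions based on the unified audit log schema; verify both against the current Graph documentation before relying on this.

```python
import time
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <access-token>"}  # assumes AuditLogsQuery.Read.All

# Ask the audit log for a week of Copilot interactions. "copilotInteraction"
# as a record type filter is an assumption from the unified audit log schema;
# confirm the exact value in the Graph documentation for your tenant.
query = {
    "displayName": "Copilot interactions, first week of term",
    "filterStartDateTime": "2025-01-06T00:00:00Z",
    "filterEndDateTime": "2025-01-13T00:00:00Z",
    "recordTypeFilters": ["copilotInteraction"],
}
resp = requests.post(f"{GRAPH}/security/auditLog/queries", headers=HEADERS, json=query, timeout=30)
resp.raise_for_status()
query_id = resp.json()["id"]

# The query runs asynchronously; poll until it reports "succeeded".
while True:
    status = requests.get(
        f"{GRAPH}/security/auditLog/queries/{query_id}", headers=HEADERS, timeout=30
    ).json().get("status")
    if status == "succeeded":
        break
    time.sleep(10)

# Fetch the matching records and summarize who is using Copilot, and how.
records = requests.get(
    f"{GRAPH}/security/auditLog/queries/{query_id}/records", headers=HEADERS, timeout=30
).json().get("value", [])
for record in records:
    print(record.get("userPrincipalName"), record.get("operation"))
```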
Security
- Restrict SharePoint site and OneDrive content access: restrict content to specific groups so that only members of those groups can reach it. Because Copilot works within a user's existing permissions, this stops it from surfacing that data to anyone outside the group (a permissions-audit sketch follows this list).
- Apply Sensitivity Labels: sensitivity labels allow content to be classified and policies to be enacted on it. Administrators can use sensitivity labels to prevent Copilot from accessing certain information. If you're not using sensitivity labels already, now is a great time to start.
- Stop data sprawl: get rid of old, unused data. Sprawl is a key contributor to data leakage: the more loose data you have, the harder it is to control. Microsoft Purview Data Lifecycle Management can assist with this, and the sketch below includes a simple stale-file check.
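The first and last points above lend themselves to a periodic audit. Below is a minimal Python sketch, using Microsoft Graph, that lists who holds permissions on a given SharePoint site and flags files untouched for two years as sprawl candidates. The token and site ID are placeholders, the two-year cutoff is an arbitrary choice, and the required app role (something like Sites.Read.All or higher) is an assumption to check against the docs; label policies themselves are applied through Purview, not through a script like this.

```python
from datetime import datetime, timedelta, timezone
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <access-token>"}  # assumes Sites.Read.All or higher
SITE_ID = "<site-id>"  # placeholder for the site under review

# 1. Review who holds permissions on the site. Copilot works within each
#    user's existing permissions, so over-broad grants are the real risk.
perms = requests.get(f"{GRAPH}/sites/{SITE_ID}/permissions", headers=HEADERS, timeout=30)
perms.raise_for_status()
for p in perms.json().get("value", []):
    print(p.get("roles"), p.get("grantedToIdentitiesV2") or p.get("grantedToV2"))

# 2. Flag files in the site's default document library that haven't been
#    touched in two years (an arbitrary cutoff) as sprawl candidates.
cutoff = datetime.now(timezone.utc) - timedelta(days=2 * 365)
items = requests.get(f"{GRAPH}/sites/{SITE_ID}/drive/root/children", headers=HEADERS, timeout=30)
items.raise_for_status()
for item in items.json().get("value", []):
    modified = datetime.fromisoformat(item["lastModifiedDateTime"].replace("Z", "+00:00"))
    if modified < cutoff:
        print(f"Stale: {item['name']} (last modified {modified:%Y-%m-%d})")
```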
Improvement
Continuous improvement is key to ensuring that AI tools remain effective and relevant in the educational context. Here are some strategies for fostering ongoing improvement:
- Regular Feedback: Gather feedback from both instructors and students on Copilot. This feedback provides valuable insight into what is working well and what needs improvement (a simple aggregation sketch follows this list).
- Monitoring and Evaluation: Continuously monitor Copilot's performance and evaluate its impact on teaching and learning. This helps identify areas for improvement and ensures Copilot is meeting its intended goals.
- Professional Development: Provide ongoing professional development opportunities for instructors to learn about new Copilot features and best practices for classroom integration. This helps educators stay up to date with the latest advancements and use Copilot effectively.
- Student Involvement: Involve students in evaluating and improving Copilot and other AI tools through surveys, focus groups, and feedback sessions. By involving students, institutions can ensure Copilot is meeting the needs and expectations of learners.
- Collaboration with other IT units: Partner with other IT units and AI experts to stay informed about the latest Copilot developments and to learn how peers are handling their own rollouts.
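As a lightweight example of the regular-feedback point, here's a small Python sketch that aggregates a CSV export of a feedback form into an average rating, a rating distribution, and the latest open-ended comments for manual review. The file name and column headings are hypothetical; adjust them to match your own form's export.

```python
import csv
from collections import Counter
from statistics import mean

# Hypothetical export: a CSV from a feedback form with a 1-5 rating column
# and a free-text column. Both headings below are made up for illustration.
RATING_COL = "How useful is Copilot in your coursework? (1-5)"
COMMENT_COL = "What should we improve?"

with open("copilot_feedback.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

ratings = [int(r[RATING_COL]) for r in rows if r[RATING_COL].strip()]
if ratings:
    print(f"{len(ratings)} responses, average rating {mean(ratings):.2f}/5")
    print("Distribution:", dict(sorted(Counter(ratings).items())))

# Surface the latest open-ended comments for manual review.
for r in rows[-5:]:
    if r[COMMENT_COL].strip():
        print("-", r[COMMENT_COL].strip())
```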
By focusing on continuous improvement, educational institutions can ensure that AI tools are effectively supporting teaching and learning, and that they remain relevant and impactful over time.
Conclusion
Thank you for taking the time to read my thoughts on Responsible AI in education. I hope this post has provided you with valuable insights and practical strategies for integrating AI responsibly in your own educational contexts. Let's continue the conversation and work together to build a future where AI serves as a force for good in education.
Feel free to share your thoughts and experiences in the comments below. I'd love to hear from you!