Can AI lead universities to rethink how they operate to serve students?

Artificial intelligence is starting to play a pivotal role in higher education. Adaptive-learning systems in classrooms and predictive analytics that improve student retention are just the tip of the iceberg. At the same time, a number of ethical, practical and philosophical questions arise about how far higher education should go in implementing artificial intelligence and its algorithms. Higher education is still in the earliest stages of using AI, so we explored the promises and the perils of AI in education.

Today, 95% of the data work in higher education is focused on accountability analytics: statisticians use data from former students to inform how current students are served. At the same time, predictive analytics is coming on stage, meaning that real-time data is used to help current students choose better paths and make better decisions. In essence, it personalises a student's pathway and supports the big decisions. Many universities rely on so-called algorithmic triggering based on demographic categories: experts make assumptions from demographic attributes, which serve as proxies for risk. That is a problem, and one that a massive amount of data, which is becoming more and more available, can help solve.
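The article does not describe any specific model, but to make the idea concrete, here is a minimal sketch of the kind of predictive scoring it alludes to, built on hypothetical behavioural features (logins per week, assignments submitted, GPA) and synthetic data rather than demographic proxies:

```python
# Minimal sketch of a student-risk predictor built on behavioural signals.
# All features, thresholds and data below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: [logins_per_week, assignments_submitted, gpa]
X = rng.normal(loc=[5.0, 8.0, 3.0], scale=[2.0, 3.0, 0.5], size=(500, 3))
# Toy label: low engagement combined with a low GPA counts as "at risk"
y = ((X[:, 0] < 4.0) & (X[:, 2] < 2.8)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a current student in real time from the same behavioural signals
current_student = np.array([[2.0, 4.0, 2.5]])
risk = model.predict_proba(current_student)[0, 1]
print(f"estimated risk of not completing: {risk:.2f}")
```

The point of the sketch is only that the score comes from what a student does, observed as it happens, rather than from which demographic bucket they fall into.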

Every piece of data is a footprint that can be pieced together, yet right now students' data is mostly used to help institutions justify their existence. The alternative is to use it to create personalised, adaptive learning experiences while students are learning a particular subject area. When a student works through activities mediated by an online environment, a predictive model analyses those interactions and infers a trajectory for learning. It is a revolutionary approach to pedagogical decision-making and part of the emerging science of learning.
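No particular technique is named here, but one common way adaptive-learning research estimates such trajectories is Bayesian Knowledge Tracing. The sketch below uses hypothetical parameter values and a made-up sequence of graded responses:

```python
# Sketch of a Bayesian Knowledge Tracing update: estimate the probability
# that a student has mastered a skill from a stream of graded interactions.
# Parameter values and the response sequence are hypothetical.

P_INIT = 0.2    # prior probability the skill is already mastered
P_LEARN = 0.15  # probability of learning the skill after each attempt
P_SLIP = 0.1    # probability of answering wrong despite mastery
P_GUESS = 0.25  # probability of answering right without mastery

def update_mastery(p_mastery: float, correct: bool) -> float:
    """One BKT step: condition on the observed answer, then apply learning."""
    if correct:
        evidence = p_mastery * (1 - P_SLIP)
        posterior = evidence / (evidence + (1 - p_mastery) * P_GUESS)
    else:
        evidence = p_mastery * P_SLIP
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - P_GUESS))
    return posterior + (1 - posterior) * P_LEARN

p = P_INIT
for answer in [False, True, True, False, True, True]:  # hypothetical responses
    p = update_mastery(p, answer)
    print(f"answered {'right' if answer else 'wrong'} -> mastery estimate {p:.2f}")
```

Each interaction nudges the estimated mastery up or down, and that running estimate is what an adaptive system can use to pick the next activity.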

In an academic context, all models and algorithms should be transparent, peer-reviewable and open to challenge by experts. Companies are trying to make transparent what the data science is saying: they have to show the most powerful predictors and their relative weight in the model and in the final prediction. The educator needs to interact with the data and reach a clear, relevant judgement about what is happening with a student. Modelling these days is a commodity; it is not rocket science.
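Continuing the hypothetical risk model from the earlier sketch, one simple way to make the most powerful predictors visible is to report the fitted weights next to the feature names. This is only an illustration, not a full explainability method:

```python
# Sketch: surface which (hypothetical) behavioural predictors carry the most
# weight in the risk model fitted in the earlier example ("model").
feature_names = ["logins_per_week", "assignments_submitted", "gpa"]
for name, coef in sorted(zip(feature_names, model.coef_[0]),
                         key=lambda pair: abs(pair[1]), reverse=True):
    print(f"{name:22s} weight {coef:+.2f}")
```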

At some point, students and professors can misuse the data. A common assumption is that a decision or recommendation comes from the computer, but the algorithm was written by a human being. Humans make choices about which factors to include in a model, how to weight those factors and what the score is meant to predict. People who use these systems need to understand what the system is telling them and how to act on it.

At the same time, many institutions work under heavy pressure to be accountable for graduation rates. The question is whether experts can take the same data and use it in a radically different, more effective way. The good news is that with large datasets and the patterns they reveal, it becomes simpler to give a student a real pathway. The process is at an early stage, so norms and ethics will inevitably develop around it.

It is important that colleges and universities buy or develop software that involves artificial intelligence. To do this, a data ecosystem needs to be created inside universities, so that specialists and professors can work with it effectively. One area that can help with the challenges universities often face is the design and development of open-education materials that allow students to have deeper conversations online.

At Penn State, faculty in different departments, computer science among them, are beginning to understand the implications and some of the complex legal issues that surround this. The question they are trying to answer: when a machine produces new content, who is the author?

In the near future, AI will move deeper into the actual support of teaching and learning. For example, Georgia Tech uses Jill Watson as an AI teaching assistant. As students interact with this virtual pedagogical agent, every interaction becomes a piece of evidence that is fed through a model. The main challenge will be figuring out which of those decisions should be handed to humans and which ones a system can be allowed to make autonomously. The interplay between what the machine does well and what the person does well is a bright area for learning.
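How Jill Watson decides this internally is not described here; the sketch below only illustrates the kind of human-versus-machine triage the paragraph talks about, using a hypothetical confidence score and threshold:

```python
# Sketch of routing decisions between an autonomous agent and a human TA.
# The confidence score, threshold and questions are hypothetical.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.9  # below this, a human makes the call

@dataclass
class AgentAnswer:
    text: str
    confidence: float  # the model's confidence that its answer is correct

def route(question: str, answer: AgentAnswer) -> str:
    """Let the agent reply only when it is confident; otherwise escalate."""
    if answer.confidence >= CONFIDENCE_THRESHOLD:
        return f"[auto-reply] {answer.text}"
    return f"[escalated to human TA] question: {question!r}"

print(route("When is assignment 3 due?", AgentAnswer("Friday at 5pm.", 0.97)))
print(route("Can I get an extension for medical reasons?",
            AgentAnswer("Yes.", 0.41)))
```

Routine, high-confidence questions get an automatic answer, while sensitive or uncertain ones are handed to a person, which is one concrete way of dividing the work between machine and human.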

A lot of work already goes into accurately identifying students to intervene with before they run into difficulties; the next step is to identify students who are not living up to their potential. To do this, researchers need to understand how people and machines can work together to explore different problems and ideas.

Hundreds of students are using the course's open-education resources to write and ask questions, while professors give them feedback on how well each question performed. The result is both better statistics for learners and better-written statistics questions.
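The article does not say how "how well the question performed" is measured; one standard approach is classical item analysis, sketched below on hypothetical response data, with difficulty as the share of correct answers and discrimination as a point-biserial correlation against the rest of the test:

```python
# Sketch of classical item analysis for a student-written question.
# All response data below is hypothetical.
import numpy as np

# rows = students, columns = items; 1 = correct, 0 = incorrect
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 0, 1],
])

item = 0  # the question we want feedback on
item_scores = responses[:, item]
rest_scores = responses.sum(axis=1) - item_scores  # total excluding this item

difficulty = item_scores.mean()                               # share correct
discrimination = np.corrcoef(item_scores, rest_scores)[0, 1]  # point-biserial

print(f"difficulty (proportion correct): {difficulty:.2f}")
print(f"discrimination (point-biserial): {discrimination:.2f}")
```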

Artificial intelligence should be used in a way that empowers students, faculty and advisors; it presents an amazing opportunity for higher education. Higher education institutions have three core missions: a research mission to create new knowledge, a teaching mission, and a community-service mission. Could the data lead colleges and universities to rethink how they operate to serve students? That is what the technology and the AI models should be used for.

Author: AI For Education

