Artificial intelligence (AI) has gradually gained acceptance at colleges and universities as a tool for automating a range of tasks efficiently. Chatbots can answer students’ questions about class scheduling or check in with them about their mental health. AI-generated emails can remind students about important deadlines and prompt them to register for classes, turn in assignments, and pay their fees on time. And, in a particularly controversial use, AI-based software is increasingly able to detect plagiarized assignments.
One professor at Georgia Tech even used AI to build a virtual teaching assistant called Jill Watson. It turns out that “Jill” receives very positive student evaluations.
Higher education is advancing beyond its initial forays into digital transformation, which involved automating daily tasks, digitizing workflows, developing more complex datasets, and creating dashboards to improve analytics. Now, institutions are not simply using technology to do the same things better. They’re deploying AI to do better things.
College leaders have learned that AI can do more than merely churn out routine prompts and generate helpful tips. They’re starting to use the technology to address some of their largest and most persistent challenges – including such bottom-line issues as increasing enrollment, improving student retention, and allocating financial aid.
And as AI expands into these core university practices, new concerns are being raised about the technology’s threats to personal privacy and its vulnerability to systemic bias.
According to Arijit Sengupta, founder of Aible, a San Francisco-based AI company, colleges and universities are starting to catch up with other industries like banking and healthcare in using AI to impact key performance indicators.
Sengupta told me in a recent interview that he now has somewhere between five and ten higher education clients who are using AI to help them make progress on key outcomes like increasing their yield from applicants, preventing first-to-second-year attrition, targeting institutional financial aid, and optimizing the solicitation of alumni donors.
Sengupta knows that university leaders have often been disappointed with the results of previous AI projects, and he agrees that in many cases AI is a waste of time and money because it isn’t built around the tangible goals and specific outcomes that matter most to the institution.
With that in mind, Sengupta offers his clients the following guarantee: if Aible’s AI models and prescribed interventions don’t produce value in the first 30 days, the client won’t be charged. He told me that many college officials believe they need to understand AI algorithms and models before they can apply them, but according to Sengupta, they have it all wrong. “Our approach is to teach AI to ‘speak human,’ rather than the other way around.”
Once an AI model sorts through a large amount of data and detects previously hidden patterns, the focus needs to shift to “what do we do about it” – in other words, whom to target, with what intervention, and when. That’s where colleges tend to get bogged down, says Sengupta: “their computer experts search for the perfect algorithm, rather than focusing on how to best change their practices to take advantage of what machine learning has provided them in the way of predictions and recommendations.”
As an example, one private, mid-sized university wanted to increase the percentage of applicants who eventually matriculated. It was spending thousands of dollars to purchase student prospect lists and devoting hundreds of hours to calling the students on those lists. But the results were disappointing – fewer than 10% of the applicants ever officially enrolled.
Instead of “carpet bombing” all the names on the list, Aible was able to generate a model that guided the university toward much more precise targeting of students. It identified a subset of applicants who – based on their demographic characteristics, income levels and family history of attending college – were most likely to respond to well-timed phone calls from the faculty. It also identified the amount of financial aid that it would take to influence their enrollment decision.
It then advised the university to make personal calls to those students, paired with the tailored financial aid offers. The entire intervention – identifying and collecting the relevant data, developing the algorithm, and recommending the intervention strategy – took about three weeks. Preliminary results indicate that the university will likely see about a 15% increase in its enrollment yield.
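Aible has not published the details of its model, but the general approach – scoring each applicant’s probability of enrolling and simulating how an aid offer shifts that probability – can be sketched with standard tools. What follows is a minimal sketch, not Aible’s actual implementation; the file names, column names, and thresholds are all hypothetical.

```python
# A minimal sketch of propensity-based applicant targeting. The files,
# columns, and thresholds are hypothetical; Aible's model is not public.
import pandas as pd
from sklearn.linear_model import LogisticRegression

history = pd.read_csv("applicants_history.csv")   # past cycles, labeled
features = ["family_income", "first_gen", "distance_miles", "aid_offered"]
model = LogisticRegression(max_iter=1000).fit(history[features],
                                              history["enrolled"])

# Score this year's pool and keep the "persuadable" middle band
pool = pd.read_csv("applicants_current.csv")
pool["p_enroll"] = model.predict_proba(pool[features])[:, 1]
call_list = pool[pool["p_enroll"].between(0.30, 0.70)].copy()

# Simulate how different aid offers move each borderline applicant
for aid in (0, 2500, 5000, 7500):
    offer = call_list[features].assign(aid_offered=aid)
    call_list[f"p_at_{aid}"] = model.predict_proba(offer)[:, 1]
```

The point of the sketch is the workflow Sengupta describes: the model’s output is a ranked call list and a suggested aid level, and the intervention itself remains human.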
When Nova Southeastern University in Ft. Lauderdale, Florida, wanted to use its data to improve undergraduate retention, it used an Aible solution to identify the students who were most likely to leave. This helped the university’s center for academic and student achievement target and prioritize its retention efforts for the most at-risk students.
While most retention efforts are reactive – activated only after a warning sign that a student is in academic peril – an effective AI strategy should help a college target curricular changes, intensify its advising, and offer support services much earlier, before a student begins to struggle.
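A proactive version of that idea scores every enrolled student on early-term signals instead of waiting for a failing grade to appear. Here is a minimal sketch, again with hypothetical file and column names; Nova Southeastern’s actual feature set has not been disclosed.

```python
# A minimal sketch of an early-warning attrition score built on week-3
# signals; files and columns are hypothetical, not NSU's actual system.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

past = pd.read_csv("cohort_history.csv")          # prior cohorts, labeled
signals = ["lms_logins_wk3", "assignments_missed",
           "midterm_gpa_gap", "unmet_aid_dollars"]
model = GradientBoostingClassifier().fit(past[signals], past["left"])

current = pd.read_csv("cohort_current.csv")
current["risk"] = model.predict_proba(current[signals])[:, 1]

# Hand advisers a ranked outreach queue in week 3, not after final grades
queue = current.sort_values("risk", ascending=False).head(200)
queue[["student_id", "risk"]].to_csv("advising_queue.csv", index=False)
```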
One thing I discovered in researching this article is that colleges are often reluctant to acknowledge they’re using AI for purposes like these, insisting on remaining anonymous in press accounts. That reticence did not surprise Sengupta, who attributes it to the perception that using AI increases the risk that a person’s privacy will be violated.
One way to protect individual privacy is to maintain all the data on the university’s servers rather than on a vendor’s. Another is not to report information on groups smaller than 25, so that individual information cannot be inferred.
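That second safeguard is a standard disclosure-control technique known as small-cell suppression. A minimal sketch, assuming student records held in a pandas DataFrame – the 25-record floor comes from the practice described above, while the column names are hypothetical:

```python
# A minimal sketch of small-cell suppression: never release a statistic
# computed over fewer than 25 students. Column names are hypothetical.
import pandas as pd

records = pd.read_csv("student_records.csv")      # kept on campus servers
groups = ["major", "residency", "aid_band"]

summary = records.groupby(groups).agg(
    n=("student_id", "size"),
    retention_rate=("retained", "mean"),
)
safe = summary[summary["n"] >= 25]                # drop small cells
safe.to_csv("retention_by_group.csv")
```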
Hernan Londono, Senior Strategist for Higher Education at Dell Technologies, believes that privacy worries are not the only reason universities hesitate to employ AI – and remain skittish about admitting it when they do. “AI-based interventions may be biased because various populations of students may be differentially excluded from the data,” he told me.
Not only does AI reflect human biases, it can amplify them when nonrepresentative data is fed to the algorithms that drive important decisions. As one example, Amazon stopped using a hiring algorithm after discovering that it favored applicants based on words like “executed” or “captured,” which were more common on men’s resumes than on women’s.
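One practical defense is to audit a model’s outputs before acting on them – for instance, comparing selection rates across groups against the “four-fifths” screen long used in U.S. employment-discrimination analysis. A minimal sketch, with hypothetical column names:

```python
# A minimal sketch of a disparate-impact check on a model's decisions,
# using the four-fifths rule as a screen. Column names are hypothetical.
import pandas as pd

scored = pd.read_csv("scored_candidates.csv")     # includes model output
rates = scored.groupby("gender")["selected"].mean()

ratio = rates.min() / rates.max()                 # impact ratio
if ratio < 0.8:                                   # four-fifths threshold
    print(f"Selection-rate ratio {ratio:.2f}: audit the training data "
          "before acting on this model's recommendations.")
```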
As significant as concerns about privacy and bias may be, colleges are inevitably going to increase their reliance on AI. It’s too powerful a tool to just sit on the higher education shelf. Its applications will continue to grow, and with the proper controls and precautions, it can be used to improve college performance and promote student success at the same time.