Artificial intelligence is playing an increasing role in university and college admissions. Because AI is a human-derived construct, questions of ethics and bias need to be debated in terms of its use in college admissions. This leads on to the consideration of other types of non-traditional assessments of prospective students.
Looking into these issues via a Digital Journal interview is Dr. Kelly Dore, Co-founder and VP of Science & Innovation at Acuity Insights.
Digital Journal: Broadly speaking, what use cases are there for AI in the college admissions process?
Dr. Kelly Dore: There are several opportunities to leverage AI in the college admissions process, namely around creating efficiencies and transparency. For example, we know that the application process is very overwhelming for many applicants as they try to learn as much as they can about different universities and their offerings. They’re also looking to understand their alignment and fit with the programs themselves. This process is easier for applicants who come from well-funded schools, or who have the resources to hire external support, as they can source all this information through counselors and advisors. However, for those who come from under-resourced schools, this isn’t always the case. In this way, AI has an opportunity to level the playing field in the application process, allowing students from under-resourced schools to gather the same information as those from highly funded schools and receive similar assistance navigating an already complex process.
Additionally, colleges and universities can use AI to draft digestible content for applicants using their website information and other sources. AI can be directed to pull directly from internal school directories rather than external sites, increasing the potential accuracy of this information. This allows them to use AI to create tailored information for applicants about what their program is and what it has to offer, content that may otherwise be time-consuming to create from scratch rather than simply edit.
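To make the directory-grounding idea concrete, here is a minimal sketch of how an admissions office might constrain a model to verified internal data. The directory records are invented for illustration, and the generate() function is only a placeholder for whichever LLM API an institution actually uses.

```python
# Minimal sketch of directory-grounded drafting. PROGRAM_DIRECTORY and
# generate() are hypothetical stand-ins, not a real school system or API.

PROGRAM_DIRECTORY = {
    "biomedical-eng": {
        "name": "Biomedical Engineering, BSc",
        "prerequisites": "Biology 12, Calculus 12",
        "highlights": "Co-op placements with local hospitals",
    },
}

def build_grounded_prompt(program_id: str) -> str:
    """Assemble a prompt that quotes only verified internal directory
    data, so the model summarizes rather than invents program details."""
    record = PROGRAM_DIRECTORY[program_id]
    facts = "\n".join(f"- {key}: {value}" for key, value in record.items())
    return (
        "Rewrite the following verified program facts as a short, "
        "applicant-friendly overview. Do not add any information "
        "that is not listed below.\n" + facts
    )

def generate(prompt: str) -> str:
    # Placeholder for whichever LLM API the institution actually uses;
    # a human editor still reviews whatever the model returns.
    raise NotImplementedError

print(build_grounded_prompt("biomedical-eng"))
```

The design choice here mirrors the point in the interview: the model is asked to rewrite verified facts, not to source facts itself, which keeps the human-maintained directory as the single point of truth.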
Overall, there’s an opportunity on both sides –– applicants and universities –– to use AI to improve the overall efficiency of the college admissions process. However, we need to remember that AI is not really capable, at this moment, of writing long-form content such as an applicant’s application essay in a way that truly represents them. It’s always going to be important to ensure that both applicants and programs are reviewing all content generated by AI, and using it as more of a directional compass than a full map of the process.
DJ: How can these emerging technologies be used to get a clearer picture of an applicant, and how can they support students once they are admitted?
Dore: AI and other language processing technologies can be used to review and digest large amounts of information, which can be helpful during the application process. They can review reference letters, personal statements, and more. But once a student is admitted, one specific area where these technologies could make a big impact is in providing guidance to universities on how they can better support students throughout their entire educational journey. Student data could potentially be fed into a model and leveraged to identify curriculum gaps or assessment needs across cohorts, to inform institutions’ continuous quality improvement efforts.
Emerging technology like generative AI can also help identify the level of support students most often need once they’re enrolled in a particular program. AI can help segment students and predict their potential success by evaluating factors such as whether a student is first generation, what school district they came from, and more. This info can also be used by institutions to cultivate a more holistic sense of inclusion and ensure greater student success across the entire student body.
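As an illustration of the kind of support-need model described here, below is a minimal sketch. All features, data, and labels are invented for the example; a real deployment would require careful validation and an explicit bias review.

```python
# Illustrative sketch: predict which enrolled students may need extra
# academic support. Features, data, and labels are invented examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per enrolled student:
# [first_generation (0/1), under_resourced_district (0/1),
#  share_of_credits_completed_on_time (0..1)]
X = np.array([
    [1, 1, 0.60],
    [0, 0, 0.90],
    [1, 0, 0.70],
    [0, 1, 0.50],
    [0, 0, 0.85],
    [1, 1, 0.55],
])
# Historical label: 1 = student later needed extra academic support
y = np.array([1, 0, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Score incoming students to target proactive outreach, not gatekeeping
incoming = np.array([[1, 1, 0.58], [0, 0, 0.92]])
for student, p in zip(incoming, model.predict_proba(incoming)[:, 1]):
    print(f"features={student}, estimated support need={p:.2f}")
```

The framing matters as much as the model: in the usage sketched above, the scores feed outreach and inclusion efforts for students who are already admitted, with humans deciding what support actually looks like.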
DJ: Are there any significant drawbacks or risks associated with AI’s use by higher-ed institutions?
Dore: While AI’s use cases can be exciting, there are a few drawbacks that institutions must be aware of whenever they’re using this technology –– or any new and emerging technology. For starters, it’s important to remember that AI is not a one-size-fits-all solution. AI can also be a bit of a black box, where information goes in and outputs come out; we don’t always know what happens in the middle of that process. Models can be rife with bias, and because of this, institutions must always establish their own ethical frameworks for using the technology.
Take the evaluation of personal statements in the admissions process as an example. We know that these tend to be less personal statements and more like “village statements.” This is because applicants will have friends, family, and even teachers review these materials –– and as a result, they’re often biased in their output. This is made even more problematic when a student is applying for a particular field of study, and the individual reviewing is a current professional in that field. In this instance, the university will learn more about the person reviewing than the actual applicant themself.
Again, machine learning and natural language processing models have biases because they’re trained and learn based on the information given to them by humans. If models are given personal statements like the one described above with no specific guidance or context, they’ll maintain existing biases. Because of this, even if a model is incredibly well trained, it can and will miss incredible applicants, denying them the opportunity to enroll, or admit people who may not be aligned with a school’s program or mission.
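One practical response to this risk is a routine bias audit before model scores are trusted. The sketch below shows one simple form such a check could take; the groups, scores, and 0.05 gap threshold are invented for illustration, not a recommended standard.

```python
# Minimal bias-audit sketch: before trusting model scores on personal
# statements, compare score distributions across applicant groups.
import numpy as np

def audit_score_gap(scores: np.ndarray, groups: np.ndarray,
                    max_gap: float = 0.05) -> bool:
    """Return True if mean scores across groups stay within max_gap.
    A larger gap is a signal to pause and investigate with human
    reviewers, not proof of bias on its own."""
    means = {g: float(scores[groups == g].mean()) for g in np.unique(groups)}
    gap = max(means.values()) - min(means.values())
    print(f"group means: {means}, gap: {gap:.3f}")
    return gap <= max_gap

# Hypothetical model scores for applicants from two school-resource groups
scores = np.array([0.81, 0.78, 0.74, 0.62, 0.60, 0.65])
groups = np.array(["well_funded"] * 3 + ["under_resourced"] * 3)

if not audit_score_gap(scores, groups):
    print("Flag these scores for human review before using them.")
```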
Overall, AI does have an opportunity to create efficiencies in a process that is very overwhelmed and burdened at the moment. However, these drawbacks must be well known and addressed by any institution hoping to leverage the technology.
DJ: Universities are struggling with employee retention, particularly in admissions departments – might this tempt some to turn to AI and ML models too quickly, before they’re ready for ethical use?
Dore: Absolutely, I think people are sometimes looking for machine-generated solutions to human problems. If there’s a retention issue coupled with a lack of resources to improve it, admissions departments may turn to AI and machine learning, not just before the technology is ready for ethical use, but even before they truly understand how best to use it in their processes. It is not a one-size-fits-all, cookie-cutter solution; it’s better suited to specific areas of admissions than to others. For instance, AI is best used on tasks where it can decrease the amount of initial work while still keeping human oversight in the process; there, it can be a great tool to create efficiencies in the admissions office.
However, one of the main reasons university employees tend to be involved in admissions is that they’re very aligned with the mission of the organization and the institution. Most of all, they want to make a real difference. Therefore, the use of AI could turn these employees off if they feel that AI is a black box being imposed on the admissions process rather than something that serves its mission. This will only add to the already existing retention and burnout issues among admissions teams.
DJ: What is the significance of implicit bias training for those creating AI models in admissions offices?
Dore: It’s important for everyone to understand that we all have implicit bias. This isn’t necessarily all due to the decisions we make in our everyday lives, or because of how we were raised. Rather, all of the social media and content that we’re exposed to on a day-to-day basis contribute to the formation of these biases, and most of the time we’re not even aware of it.
The importance of implicit bias training for those creating AI is not necessarily to eliminate bias, as there’s no demonstrated evidence that you can get rid of the bias, but to create awareness of the bias. When using AI, even if it’s to develop content, schools can go back and review that content with a keen eye and awareness of the biases that may have been put into it. This also reinforces the notion that a real human being must be involved if AI is used in any stage of a process. It’s also especially helpful when individuals are able to understand the depth of the biases that may exist within an AI model – implicit bias training can create awareness of this as well.
Ultimately, AI can be harmful to the admissions process if it’s applied too broadly, like broad brush strokes across a painting. Rather, it should be used more like a highlighter.
DJ: Will the technology ever be able to fully automate the entire admissions process in your view or will a human touch always be necessary?
Dore: Universities will always need a human touch to stay aligned with their mission. For example, using AI alone in recruitment efforts, without human oversight, will feel automated rather than personalized. This will cause the individuals involved to feel disconnected from the school’s overall mission, which will lead to a loss of applicants and staff. Therefore, it’s critical that schools ensure connection and alignment whenever using AI technologies.