Andre Perry, a David M. Rubenstein Fellow at The Brookings Institution, was a panelist at TC’s conference on Artificial Intelligence in Education, held in the College’s Smith Learning Library on September 20.

In the following opinion piece, produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education, Perry, who writes the "Degree of Interest" column for the Hechinger Report, argues that because the goal of artificial intelligence is to make computers that think and "make human-like judgments," AI designers need to be careful to leave human biases out of their new designs. Failure to do so would not only replicate the biases of the (mostly white) programmers who design AI; it could also amplify those biases in the real world, Perry warns. To end up with AI that works for all students, Perry writes, people of color need to be enlisted in its development. Just as AI can give us a window into how humans learn, it can also help expose the racism that holds students of color back. "When we better understand how, when and where people learn to be racist, then we can build a justice app for that."


Andre Perry (Photo credit: Bruce Gilbert)

From driver-assistance systems in cars to video games and virtual assistants like Alexa and Siri, artificial intelligence (AI) has transformed almost every aspect of our lives, as our machines learn from the massive amounts of data we provide them.

The goal is for our computers to make humanlike judgments and perform tasks to make our lives easier, but if we’re not careful, our machines will replicate our racism, too.

Kids from black and Latino communities — who are often already on the wrong side of the digital divide — will face greater inequalities if we go too far toward digitizing education without considering how to check the inherent biases of the (mostly white) developers who create AI systems. AI is only as good as the information and values of the programmers who design it, and their biases can ultimately lead both to flaws in the technology and to amplified biases in the real world.

This was the topic at the conference "Where Does Artificial Intelligence Fit in the Classroom?" put on by the United Nations General Assembly, the United Nations Educational, Scientific and Cultural Organization (UNESCO), the think tank WISE and the Transformative Learning Technologies Lab at Teachers College, and hosted by Teachers College, Columbia University this month. (The Hechinger Report is an independent unit of Teachers College.)

While many argue that the efficiencies of AI can level the playing field in classrooms, we need more due diligence and intellectual exploration before we deploy the technology to more schools. Systemic racism and discrimination are already embedded in our educational systems. Developers must intentionally build AI systems with a racial equity lens if the technology is going to disrupt the status quo.

Previous attempts at making education more efficient and equitable demonstrate what can go wrong. Standardized testing promised an innovation that was irresistible to an earlier generation of education leaders hoping to democratize the system. As Nicholas Lemann put it in his book “The Big Test,” about the development of the SAT, such assessments promised to evaluate “all American high-school students on a single national standard and then [make] sure that they went on to colleges suited to their abilities and ambitions.” Later, standardized tests allowed schools and teachers to be held accountable when students didn’t measure up to expectations.

But the designers and implementers of these assessment tools didn't consider how the racism and inequality rife in U.S. society would be baked into the tests if care wasn't taken to make them fairer. SAT and ACT tests are good proxies for wealth. Overuse of these tests has helped concentrate wealthy people in selected colleges and universities, stifling the inclusion of and investment in talented people who happen to be lower income. The College Board, the nonprofit that administers the SAT, announced a patch for this problem in May: the planned rollout of an "adversity score" assigned to each student who takes the college admissions exam. The score was to be composed of 15 factors, including neighborhood and demographic characteristics, such as crime rate and poverty, and to be added to each student's result. However, the College Board retreated from its plan, bending to a wave of criticism.

Current attempts to introduce AI in schools have led to improvements in assessing students' prior and ongoing learning, placing students in appropriate subject levels, scheduling classes and individualizing instruction. Such advances enable differentiated lesson plans for a diverse set of learners. But that sorting can be fraught with errors if the algorithms don't consider the nuanced experiences of students, especially the differences between those starting at the bottom and those starting at the top.

The spread of AI technology can also tempt districts to replace human teachers with software, as is already happening in such places as the Mississippi Delta. Faced with a teaching shortage, districts there have turned to online platforms. But students have struggled without trained human teachers who not only know the subject matter but know and care about the students.

Overzealous tech salesmen haven't helped matters. The educational landscape is now littered with cyber or virtual schools because ed tech companies promised that they would reach hard-to-educate as well as black and Latino students and create efficiencies in low-funded districts. Instead, many of the startups have been hit by scandal, including a pair in Indiana that were forced to close.

Yet AI could provide real benefits. AI in the classroom could free up teachers from time-consuming chores like grading homework. But it won't work if it's intended as a way to avoid the hard work of recruiting enough skilled teachers, especially teachers who look like the kids they're working with. For the rise of robots to equate to progress, teachers should experience improved working conditions and increased job satisfaction; AI should reduce attrition and increase the desirability of the job. And if technologists don't work with black teachers, they won't know what conditions need to change to maximize higher-order thinking and tasks.

We must diversify the pool of technology’s creators, incorporate people of color in all aspects of its development, continue to train teachers on its proper usage and build in regulations to punish discrimination in its application.

The true promise of AI is to give us insight into how students and teachers learn — including the racism that keeps needed resources from schools in which the majority of students are people of color. When we better understand how, when and where people learn to be racist, then we can build a justice app for that.

— Andre Perry
