During last week’s meeting of Southwestern College’s Governing Board, trustees unanimously approved a slate of contracts. Included among them was a contract with N2N Services to subscribe to a software program called LightLeap AI.
Like many community colleges across the country, Southwestern has in recent years been inundated with fraudsters who marshal legions of stolen identities to swindle financial aid funds. Much of that fraud has been made possible by new AI tools that allow fraudsters to employ increasingly sophisticated methods to steal funds. The fraudsters use AI to complete assignments, take tests and even send emails to professors.
The trend has devastated many community colleges. Students have been locked out of classes, and teachers, now forced to moonlight as de facto bot detectors, have been stretched thin. Many staff members at Southwestern have been left frustrated that the college hadn’t done more from the get-go.
But with the adoption of LightLeap AI, Southwestern has its newest – and most high-tech – weapon yet in the battle against the bots. The growth of LightLeap AI, which has now been deployed at 36 community colleges in 20 districts, also seems to signal that in a world ill-suited for the wave of upheaval brought on by AI, one of the technology’s best uses is to hunt down rampant AI-powered fraud.
All of this made me very curious about how exactly this software works and what those who run it have been seeing. So, I spoke to N2N Services CEO Kiran Kodithala about how LightLeap AI works.
‘Can AI Help Solve the Problem of Fraud?’
When N2N was founded in 2010, the company’s goal from launch was to “connect anything to anything,” Kodithala said. Many systems, especially at places like community colleges, were siloed. N2N wanted to make it easier for everyone from students to administrators to access various sources of information more simply.
When AI began to pop up in earnest in 2023, N2N decided to create a chat bot that would serve much the same function – connecting students to all manner of information with just a query. But when they approached some community colleges, they were confronted with an entirely different query by administrators: “Can AI help solve the problem of fraud?”
Kodithala and his team were intrigued, so, in partnership with Foothill-De Anza Community College, which is nestled in Silicon Valley, they started to work on an AI model that could answer it. But they immediately encountered a problem.
“When we were building it, even Foothill-De Anza did not know who the fraudsters were, so we had no way of training based on their findings,” Kodithala said.
What they ended up doing was combing through the community college’s enrollment data and creating an entirely new fraud-marking system. They fed not only active enrollment data but also enrollment data from previous Foothill-De Anza semesters into their system, so they could check their work against fraudsters previously identified by the college.
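That back-testing step – flagging historical applications and then comparing the flags against fraudsters the college had already confirmed – amounts to a simple validation loop. A minimal sketch of the idea, with entirely hypothetical application IDs (nothing below reflects N2N’s actual data or code):

```python
# Hypothetical data: IDs of applications the college had previously
# confirmed as fraudulent, and the IDs a new model flags when run
# over those same historical semesters.
previously_confirmed = {"app-104", "app-221", "app-305"}
model_flags = {"app-104", "app-221", "app-410"}

# How many known fraudsters did the model catch, and how many did it miss?
caught = model_flags & previously_confirmed
missed = previously_confirmed - model_flags

recall = len(caught) / len(previously_confirmed)  # share of known fraud caught
print(f"caught {len(caught)}, missed {len(missed)}, recall {recall:.0%}")
```

A flag outside the confirmed set (like `app-410` here) isn’t necessarily a false positive – it may be fraud the college never caught – which is part of why this kind of check is a floor, not a full evaluation.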
Early on, Kodithala said, they made a clear distinction between what he called a “lazy student,” who just wanted to sign up for classes and collect aid funds, and a fraudster engaged in “pure identity theft.” The former has happened for years, but it was the latter that was taking off. That’s exclusively what N2N wanted their software to focus on.
So, they began to employ a clustering method, identifying groups of fraudulent applications that come from the same IP address or use the same phone number, email address or physical address. These identifiers remain relatively static, with fraudsters reusing them even as they cycle through identities. Kodithala said that’s because while new stolen identities are easy to find, fraudsters are less able to generate new email or IP addresses each time they try to swindle funds.
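Though N2N was reticent about its methods, the clustering idea Kodithala describes can be sketched in a few lines: group applications by each identifier they share, then treat any identifier reused across many distinct applicants as suspicious. The field names, records and threshold below are assumptions for illustration, not LightLeap AI’s actual implementation:

```python
from collections import defaultdict

# Hypothetical application records; field names are illustrative only.
applications = [
    {"id": 1, "name": "A. Lee",    "ip": "203.0.113.7",  "email": "x1@mail.test"},
    {"id": 2, "name": "B. Cruz",   "ip": "203.0.113.7",  "email": "x2@mail.test"},
    {"id": 3, "name": "C. Diaz",   "ip": "203.0.113.7",  "email": "x3@mail.test"},
    {"id": 4, "name": "D. Okafor", "ip": "198.51.100.4", "email": "d.o@mail.test"},
]

def cluster_by_shared_identifiers(apps, keys=("ip", "email"), threshold=3):
    """Group applications that reuse the same identifier value.

    Any identifier (IP, email, phone, address) shared by `threshold`
    or more applications is returned as a suspicious cluster.
    """
    groups = defaultdict(list)
    for app in apps:
        for key in keys:
            groups[(key, app[key])].append(app["id"])
    # Keep only identifiers reused widely enough to look like a fraud
    # ring rather than a coincidence (e.g. a shared family address).
    return {ident: ids for ident, ids in groups.items() if len(ids) >= threshold}

suspicious = cluster_by_shared_identifiers(applications)
# Here, one IP address is shared by three applicants with different identities.
```

Real systems would weigh identifiers differently – a shared campus IP is far weaker evidence than a shared bank account – but the reuse of static identifiers across rotating stolen identities is the core signal.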
Kodithala said that method, and other strategies he was more reticent to share for fear of tipping off fraudsters, yielded big time. LightLeap AI began to flag over 200 percent more suspected fraudsters than Foothill-De Anza’s homespun system. As it stands, the company has processed close to 3 million applications and identified about 360,000 suspected fraudsters. All of those applications, including the roughly 12 percent identified as suspected fraudsters, had already made it through the California community college system’s statewide security screenings.
‘One in Every Other Application Is Fraud’
Exactly how much fraud each community college is seeing varies wildly, Kodithala said.
“We are seeing, still, after the state’s spam checker and other tools, some institutions where there’s 60 percent fraud. At some institutions, one in every other application is fraud,” Kodithala said. Other community colleges, however, are seeing closer to 15 percent of their applications turn out to be fraudulent.
Why that variation exists isn’t entirely clear to Kodithala. But there are actually some built-in incentives to allow fraud. The state’s funding formula for community colleges, for example, grants additional funding based on how many full-time equivalent students enroll. That incentive may have left community college administrators unsure of what exactly was happening as the fraud ramped up.
“I’m not sure whether they knew what they were seeing, because on one hand you want to see that my enrollment is increasing,” Kodithala said. “The example is probably the frog in the boiling water where just the temperature generally increased … and they were trying to chase it, and kept hoping and that they will eventually catch it but now, now it’s come to its head and they all realize that this is a huge problem,” he said.
Recent coverage of the fraudsters has led to calls for investigation from both Republican U.S. representatives and state-level politicians. Those calls specifically accuse community colleges of “allowing fraud to go unaddressed” and encourage the Trump administration to “take immediate action.”
Statewide community college leaders have pushed back on the characterizations to CalMatters. They argue that while fraud is a legitimate concern, the system has allocated more than $150 million toward cybersecurity in recent years and that the funds stolen represent a small fraction of the funds disbursed to real students.
Nuclear Weapons
While financial aid fraud is a huge problem for community colleges, it’s likely also a boon for companies like N2N and their products like LightLeap AI. Because even though most AI evangelists may look on the use of the technology to swindle and defraud with distaste, detecting all that swindling and defrauding may prove one of the best uses for AI – aside from the swindling and defrauding, of course.
The sheer number of bots and the complexity of scammers’ networks has outpaced what individual humans can realistically handle. The technology can allow fraudsters to generate fake driver’s licenses or even use video software to take part in identity verification calls. Kodithala estimated it takes a human 15 hours of calling phone numbers, checking addresses on Google Maps and sending bank verification links to catch each fraudster. That may be a dubious calculation, but it doesn’t change the fact that AI platforms can do that work almost instantly.
So, in effect, AI-powered fraudsters necessitate AI-powered detectives. What’s created is a machine learning feedback loop, a veritable Blade Runner situation, an ouroboros of slop (picture an algorithmic snake eating its own tail).
Kodithala believes deeply in the transformational potential of AI. And despite the potentially disastrous consequences of its use in education settings, he’s not alone. Advocates are rushing to inject the technology into education faster than regulators can erect guardrails to protect from any negative impacts. Just last week, President Donald Trump signed an executive order directing agencies to prioritize the integration of AI into K-12 schools. These decisions will impact generations of children to come.
Technology is kind of always like this, Kodithala said. Things move fast, faster than regulations can keep up – and some people are bad, so when a new technology pops up, those people will adopt it more quickly. It becomes the responsibility of the good guys to be nimble to address it.
“It’s not like nuclear weapons are the problem or dynamite itself is a problem. It’s how we use it,” Kodithala said.
In other words, the only way to stop a bad guy with AI is a good guy with AI.