AI content moderation startup Musubi raises $5 million in seed funding


As policies for content moderation and fact-checking enter a new era, one startup is turning to artificial intelligence, rather than humans, to enforce trust and safety measures. 

Musubi, a startup that uses AI to moderate online content, has raised $5 million in a seed round, the company told CNBC. The round was led by J2 Ventures, with participation from Shakti Ventures, Mozilla Ventures and pre-seed investor J Ventures, the startup said.

The company was co-founded by Tom Quisel, who was previously chief technical officer at Grindr and OkCupid. Quisel said he saw an opportunity to use AI, including large language models, or LLMs, alongside human moderators to help social and dating apps “stay ahead” of bad actors. He said Musubi’s AI systems better understand users’ tendencies and more accurately determine whether a user’s content reflects bad intentions.

“You pretty universally hear that trust and safety teams are not happy with the quality of results from moderators, and it’s not to blame moderators,” said Quisel, who co-founded the company alongside Fil Jankovic and Christian Rudder. “It’s exactly the kind of scenario where people just make mistakes. It’s unavoidable, so this really creates an opportunity for AI and automation to do a better job.”

During his time at OkCupid, Quisel said, moderating bad actors was a “Sisyphean struggle.” The effort required OkCupid to pull engineers, data scientists and other product staffers off core projects to work on trust and safety, but blocking one attack pattern never held bad actors off for long, Quisel said.

Team photo of Musubi, the AI content moderation startup.
Courtesy: Tom Quisel | Musubi

“They would always figure out how to get around the defenses we built,” he said.
