Facebook explores simulations in fight against real-world bad guys

Facebook launched a project Thursday that will let researchers run simulations to learn about harmful user behaviors and the best ways to address them.



For you, Facebook might be a virtual campfire where you gather with your family and friends to share stories. But the social network is also a digital Wild West, filled with scammers, networks of fraudulent accounts and straight-up bullies. 

The company has spent more than a fistful of dollars trying to round up these baddies. But it's tough to win the fights fast enough, and bad actors change up their tactics when their old tricks stop working. Now Facebook is turning to a simulated version of its platform to tame its digital frontier.

The social network on Thursday introduced an AI-driven system that simulates user behavior on Facebook's real-world platform. The idea behind the research project, called Web Enabled Simulation, is to try out several possible approaches to dealing with harmful behavior simultaneously. Facebook will partner with academic researchers to test the program and explore possible responses to problems. The goal is to find the most effective option with the fewest drawbacks.

The technique allows researchers to observe simulated Facebook activity on a prototype the company has dubbed WW. Researchers can use various techniques, including machine learning, to gather data on WW. The prototype is isolated from the Facebook most of us see, and its users are bots powered by AI algorithms. Some bots act like regular people, but others engage in all sorts of nefarious activity, like looking for innocent users to scam. That shows researchers where the vulnerabilities are and lets them introduce new restrictions on user behavior to see how the bad bots respond.
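To make the mechanics concrete, here's a minimal sketch in Python of how such a sandbox might work. Everything in it (the Bot class, the rate-limit restriction, the scam success rates) is invented for illustration; Facebook hasn't published WW's internals.

```python
import random

# Hypothetical sketch: benign and scammer bots act in a sealed-off sandbox,
# and researchers toggle a candidate restriction (here, a message rate limit)
# to see how scam success changes. No names or numbers come from WW itself.

class Bot:
    def __init__(self, is_scammer):
        self.is_scammer = is_scammer

    def act(self, rate_limited):
        if not self.is_scammer:
            return 0  # benign bots just post; no scam hits
        attempts = 1 if rate_limited else 5  # the restriction cuts probing volume
        return sum(random.random() < 0.1 for _ in range(attempts))

def run_simulation(n_bots=100, scammer_share=0.1, rate_limited=False):
    bots = [Bot(random.random() < scammer_share) for _ in range(n_bots)]
    return sum(bot.act(rate_limited) for bot in bots)  # total successful scams

# Compare outcomes with and without the candidate restriction.
print("no limit:  ", run_simulation(rate_limited=False))
print("rate limit:", run_simulation(rate_limited=True))
```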

“We can, for example, create realistic AI bots that seek to buy items that aren’t allowed on our platform, like guns or drugs,” Mark Harman, a Facebook research scientist, said in a blog post introducing the Web Enabled Simulation program. “Because the bot is acting in the actual production version of Facebook, it can conduct searches, visit pages, send messages, and take other actions just as a real person might.”

It isn’t a stretch to say that Facebook hopes to learn to corral its platform by studying its own Westworld. 

Presumably, the bots won't become sentient, rise up and cause a multi-season moral quandary. But Harman called the research environment "Westworld" in an interview. (Westworld isn't the public name of the platform.) The analogy makes sense because the platform lets researchers step outside of time, trying approaches that might not work, without any real users around to suffer the consequences.

Harman said the WW simulator is versatile in the kinds of problems it can model. Scammers, for example, have predictable behavior patterns compared with other activity, such as fake accounts that pose as people in the US but are actually controlled out of Nigeria or Ghana at the direction of a Russian intelligence agency. (And you thought the second season of Westworld was complex.) Harman said WW can model either situation.

WW can also tackle subjective situations, like content that's flagged for violating Facebook's community policies. Human moderators in high-stress, low-pay jobs currently deal with these complaints. Harman said WW can't apply the subjective standards on its own. But it can study how moderators have handled complaints in the past and learn to apply the standards broadly.
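One way to picture that "learn from past decisions" step: train a plain text classifier on historical complaint outcomes. This is a hedged illustration using scikit-learn with made-up data; Facebook hasn't said which models WW actually uses.

```python
# Toy version of learning from moderator history: fit a classifier on
# past (content, decision) pairs, then score new flagged content. The
# training data and feature choices here are invented for the example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

past_complaints = [
    ("buy cheap followers now", 1),   # 1 = moderators removed it
    ("happy birthday grandma", 0),    # 0 = moderators left it up
    ("click here to win a prize", 1),
    ("photos from our hiking trip", 0),
]
texts, decisions = zip(*past_complaints)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, decisions)

# The output is an approximation, as Harman notes, so a human still
# reviews borderline scores before anything is enforced.
print(model.predict_proba(["win free followers here"])[0][1])
```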

The end result will still be “an approximation,” Harman said, meaning humans will be needed to make sure the judgments are correct. 

WW relies on three techniques to simulate user behavior on the platform. The crudest is a rules-based algorithm, which lays down if-then rules to direct bot behavior. That's where researchers work with what they already know about harmful user behavior patterns.
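A toy version of that rules-based approach might look like the following, where bot behavior is scripted as explicit if-then rules drawn from known abuse patterns. The states and rules here are hypothetical.

```python
# Rules-based bot: each step, check hand-written conditions drawn from
# known scammer playbooks and pick the matching action. All invented.
def rule_based_scammer_step(bot_state):
    if bot_state["account_age_days"] < 2:
        return "send_friend_requests"  # known pattern: new accounts mass-friend
    if bot_state["friends"] > 50 and not bot_state["posted_scam"]:
        return "post_scam_link"        # then pivot to pushing a scam link
    return "idle"

state = {"account_age_days": 5, "friends": 80, "posted_scam": False}
print(rule_based_scammer_step(state))  # -> "post_scam_link"
```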

More advanced simulation techniques are supervised and unsupervised machine learning, which can draw on established behavior patterns too, but also help researchers anticipate new behaviors that bad actors might try down the line. For example, researchers could give a bot one goal, such as getting other bots to fall for a scam. The bot doesn't have to follow strict rules to achieve the goal, which can reveal weaknesses real bad guys haven't thought of yet.
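As a hedged sketch of that goal-driven idea: give a bot a single objective and let a simple epsilon-greedy strategy discover which tactic pays off best, rather than scripting its moves. The tactics and payoff rates are invented, and WW's actual learning methods aren't public.

```python
import random

# Goal-driven bot: it only knows its objective (maximize scam success)
# and learns which of several invented tactics works best by trial and
# error, using an epsilon-greedy bandit over estimated payoffs.
TACTICS = ["fake_giveaway", "romance_ploy", "phishing_link"]
true_success = {"fake_giveaway": 0.05, "romance_ploy": 0.15, "phishing_link": 0.10}

estimates = {t: 0.0 for t in TACTICS}
counts = {t: 0 for t in TACTICS}

for step in range(2000):
    # Explore a random tactic 10% of the time; otherwise exploit the best.
    if random.random() < 0.1:
        tactic = random.choice(TACTICS)
    else:
        tactic = max(estimates, key=estimates.get)
    reward = 1.0 if random.random() < true_success[tactic] else 0.0
    counts[tactic] += 1
    estimates[tactic] += (reward - estimates[tactic]) / counts[tactic]  # running mean

# Whatever tactic the bot converges on flags where defenses are weakest.
print(max(estimates, key=estimates.get))
```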

The end goal, Harman said, “is to find a mechanism that will thwart a real user that has a similar intention.”

If WW works, Facebook will be quicker on the draw when a new bad guy rides into town.

