Now the robots can lie to us?

Robots are becoming more human every day. Some robots can already sustain damage and reconfigure themselves, kind of like how our bones heal after we break them. Now others can deceive other intelligent machines and even humans.

Researchers at Georgia Tech have developed algorithms that let robots determine whether they are in a situation where they should deceive other robots or humans.

To test the team’s algorithms, the robots played a series of hide-and-seek games. Colored markers were lined up along three potential pathways to locations where a robot could hide. The hider picked one of the three locations and moved toward it, knocking down markers along the way. Once past the markers, it changed course and hid in one of the other two spots, taking care not to knock over any markers near its actual hiding place. The toppled markers left a false trail for the seeker to follow.

Developing the algorithms required interdependence theory and game theory to assess the value of deception in a given situation. The game satisfied the two key conditions a robot needs to warrant deception: there must be conflict between the deceiving robot and the seeker, and the deceiver must benefit from the deception.
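As a rough illustration (not the Georgia Tech team’s actual code), the two conditions above could be checked against a simple payoff table, where `warrants_deception` and the outcome labels are hypothetical names invented for this sketch:

```python
def warrants_deception(deceiver_payoffs, mark_payoffs):
    """Hypothetical check of the two deception conditions:
    conflict between the parties, and a benefit to the deceiver.
    Payoffs are dicts keyed by outcome, e.g. "hidden" / "found"."""
    # Condition 1 (conflict): the outcome the deceiver prefers is not
    # the outcome the other party prefers.
    deceiver_pref = max(deceiver_payoffs, key=deceiver_payoffs.get)
    mark_pref = max(mark_payoffs, key=mark_payoffs.get)
    conflict = deceiver_pref != mark_pref

    # Condition 2 (benefit): successful deception (staying hidden)
    # pays the deceiver more than being found.
    benefit = deceiver_payoffs["hidden"] > deceiver_payoffs["found"]

    return conflict and benefit


# The hider wants to stay hidden, the seeker wants the hider found,
# so both conditions hold and deception is warranted.
print(warrants_deception({"hidden": 1, "found": -1},
                         {"hidden": -1, "found": 1}))  # True
```

If both parties preferred the same outcome there would be no conflict, and the check would return `False`: deception only makes sense when interests collide.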


Link

the monochrom blog - archive of everything