Response: Artificial Agents Entering Social Networks

September 20, 2012

“Artificial Agents Entering Social Networks” by Nikolaos Mavridis discusses the idea of robotic agents participating in a social network such as Facebook, and whether such agents can serve a meaningful purpose. Mavridis centers the discussion on “Sarah, the FaceBot robot,” a robot that has both a physical and a social networking presence (on Facebook, in this case). Sarah can maintain long-term, multi-interaction relationships with human actors both in physical space and via Facebook, learning to develop conversation and a sense of identity from previous interactions and from various social data (analyzing facial expressions in physical space, as well as mining information from Facebook friends). While Sarah exists as a study in building sustainable relationships between artificial agents and humans, Mavridis discusses several other possibilities for artificial agents in social networks, leaving the discussion fairly open-ended.

One suggestion Mavridis only brushes over is the idea of artificial agents deployed to persuade their human counterparts. The technology behind a “FaceBot” like Sarah is still primitive (Sarah functionally behaves like a glorified chatterbot, with the main differences being that its conversations draw on previous interactions and mutual friends, and that it maintains a personal profile page). Even so, I can imagine more developed artificial agents in the future building relationships informed by a range of sources well beyond Facebook friend data, which could have interesting implications for rhetorical practice.

For instance, what might result from an artificial agent developed to convince people that one political party’s policies were “better for society” than another’s? Such an agent might draw its knowledge from election poll data, various news outlets, and people’s voiced opinions on candidates from social networks like Facebook and Twitter. Would politicians be able to leverage such agents to convince the public that they are indeed best suited to run the country? And how willing would humans be to accept discourses offered to them by the generative dialogue of an artificial mind? Would disguising an artificial agent as a human be enough to convince people of its legitimacy?
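To make the speculation a little more concrete, here is a minimal sketch of how such an agent might choose what to say, assuming entirely hypothetical data feeds and a hand-rolled scoring scheme (none of this appears in Mavridis’ paper): the agent aggregates opinion signals about policy topics from several sources, weights them, and leads with the talking point its audience already seems most receptive to.

```python
from dataclasses import dataclass

@dataclass
class OpinionSignal:
    source: str       # e.g. "poll", "news", "facebook", "twitter" (hypothetical feeds)
    topic: str        # policy area the signal concerns
    sentiment: float  # -1.0 (strongly against) to +1.0 (strongly for)
    weight: float     # how much the agent trusts this source

def pick_talking_point(signals, topics):
    """Return the topic with the most favorable weighted public sentiment,
    i.e. the easiest point on which to persuade an audience."""
    scores = {}
    for topic in topics:
        relevant = [s for s in signals if s.topic == topic]
        total_weight = sum(s.weight for s in relevant) or 1.0
        scores[topic] = sum(s.sentiment * s.weight for s in relevant) / total_weight
    return max(scores, key=scores.get), scores

# Toy, fabricated-for-illustration signals; a real agent would mine these
# from polls, news coverage, and social network posts.
signals = [
    OpinionSignal("poll", "healthcare", 0.4, 1.0),
    OpinionSignal("twitter", "healthcare", 0.1, 0.5),
    OpinionSignal("news", "economy", -0.2, 0.8),
    OpinionSignal("facebook", "economy", 0.3, 0.6),
]

best_topic, scores = pick_talking_point(signals, ["healthcare", "economy"])
print(f"Lead with {best_topic}: {scores}")
```

Even this toy version points back at the rhetorical question: the “persuasion” here amounts to choosing whichever data-derived message an audience already wants to hear.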

At the moment, the agents described in Mavridis’ paper are not nearly advanced enough to operate on such a level. Judging from the dialogue excerpts provided in the paper, Sarah’s conversations are fairly one-sided and have little focus. An artificial agent with an agenda of its own to drive is a much more interesting possibility to me, though we hopefully won’t be seeing convincing persuasion bots for quite some time.