As socialbots become commonplace, the need to define our ethics in producing such ‘beings’ will surface repeatedly
Published Date – 5 February 2024, 11:59 PM
By Pramod K Nayar
Bots abound. They respond, compose, calculate and talk, among other functions. They are integral to the digital lives we lead, from consumer care services to socialising operations.
The bot is a piece of software that can pass itself off as human and perform repetitive functions within a computer-driven network. It posts messages, for instance, on social media platforms. Common messaging platforms use these, but organisations also use bots to provide public information (the WHO famously created one to share information during the pandemic).
Bot Self
Socialbots are different from other bots because they are stealthy and can masquerade as a human on social media platforms by doing what humans do: responding to messages and posting them. The socialbot can, therefore, control an account on the platform, and although it is automation software, for all practical purposes it behaves as though a human is doing the work.
In effect, then, socialbots are ersatz humans — substitute humans, or a set of functions that, taken together, give the illusion of being a human. They mimic human behaviour and speak like humans. Like humans, they send out friend requests, post, repost and follow. They shape the nature of the conversation on social media platforms through interactions that are effectively indistinguishable from those of human users. Users, in other words, treat the bot as a person with personality and behavioural traits.
The bot appears to have intentions, method, rational thinking and human-like responses. In particular, chatbots and intelligent agents (Siri, for example) are able to manipulate and organise social communication systems. As an instance of the enormous power of bots, one study of 11.5 million Twitter accounts discovered that 24% of all tweets generated in the period January-May 2009 came from automated accounts posting over 150 tweets a day.
The bot exhibits what media researcher Taina Bucher terms ‘botness’, defined as ‘the belief that bots possess distinct personalities or personas that are specific to algorithms’, and so the user senses a bot Self, since the socialbot is akin to what we understand as a Self. The bot Self is something (someone) with whom other users of the platform can build a conversation and therefore experience sociality. Where humanness ends and botness begins is, therefore, a moot point.
Bot Sociality
Socialbots have been found to bypass the security systems in social media platforms like Facebook, set up fake accounts and obtain access to personal data. Studies show that millions of friend requests emanate from such bots.
That is, socialbots determine the nature and quantum of social links and networks of humans because they actively intervene in the foundation of community formation: communications (‘community’ and ‘communication’ have the same roots). Socialbots align and bring together people on social media platforms. They enable impression management and even shape political opinions. In the latter domain, studies have demonstrated that politicians have employed bots to push their agenda on social media platforms, going so far as to misdirect (steering people away from ‘difficult’ topics such as inflation or unemployment), spread misinformation, target their adversaries and project the illusion of a vast groundswell of support for themselves.
Socialbots thus shape public opinion, ‘manage’ impressions and serve those in power and those who invest in the bots. Whether social interactions with bots are meaningful, and whether users are being manipulated into sociality, are questions to which easy answers are not yet available. But what is clear is that the extensive use of social media platforms as the foundation of sociality itself ensures that human interactions are, and will be, shaped by bits of software.
In some cases, the bots are seen as annoyances. Adrienne L Massanari, a professor of communication, argues in the case of Reddit that:
Redditors will often branch out in their conversations to discuss other topics entirely. Bots are unlikely to be programmed to understand the subtleties of these kinds of exchanges, and thus their contributions will often be perceived as unwelcome or intrusive.
That is, the bots are characterised by an absence of tangential thinking, spur-of-the-moment diversions and subtlety. Whether these bots will stay that way as AIs grow in sophistication remains to be seen.
Bot Botched
But bots have also been botched.
In 2014, communications studies scholars Frauke Zeller and David Harris Smith created hitchBOT, a solar-powered device enclosed in a fancy and attractive anthropomorphic casing. The bot was sent hitchhiking, and made it across Canada, the Netherlands and Germany, and finally into the US. It had, according to one description, ‘a digital smile, an extended thumb, and a sign that declared “San Francisco or bust”’.
The purpose was to see how humans would interact with the humanoid. Its social media presence gave it a human interest angle. While it did not exhibit feelings or reason — the two standard test criteria that enable humans to determine if the device/object can be treated as a nonhuman ‘person’ — humans who encountered it by and large treated it with curiosity and a degree of friendliness. Some took it to a wedding, others to a comics convention. hitchBOT was also taken to a baseball game, and even an ocean cruise. As Zeller and Smith wrote in the Harvard Business Review:
hitchBOT not only met with a lot of friendly humans, but also went viral on social media, gaining more than 35,000 followers on Twitter, 12,000 on Instagram, and receiving 48,000 likes on Facebook, all in less than four weeks.
Then in Philadelphia, someone tore hitchBOT’s head off, mangled it and left it by the wayside. hitchBOT lasted two weeks in the USA.
As one headline declared: ‘Innocent Hitchhiking Robot Murdered by America’. The use of ‘murder’ in the headline indicates that, to some, hitchBOT was near-human (this resonates with the fictional Alan Turing accusing Charlie Friend of ‘murdering’ the humanoid robot Adam in Ian McEwan’s novel Machines Like Us).
For some, the treatment the bot received was an index of human society’s distrust of Artificial Beings and robots. Anthropomorphic and endearing, responsive and ‘cute’ (as hitchBOT was described), it was akin to humans in many ways. If people on social media are so used to bots that have botness and a bot Self, why was hitchBOT not granted a similar status of near-human, the philosopher Michael LaBossiere wonders.
As socialbots and Artificial Beings become commonplace, the need to define our ethics in producing such ‘beings’ and towards them — especially in the context of instances like what happened to hitchBOT — will surface repeatedly.
To bot or not to bot is our new question.