Is it still called ego-surfing? That term was coined in the 1990s as more and more people got online and put their names into a search engine to see what came up. It soon became clear this was hazardous for authors. A few months after The Thief’s Gamble came out in 1999, I found two very negative reviews. According to one, the book proved I was a patriarchy-enabling betrayer of the sisterhood. The other reckoned it showed I was a ball-breaking man-hater. I was all set to respond, to explain, when a friend working in IT told me to take a breath, step away from my keyboard and think this through. I remain eternally grateful to him for explaining that my chances of success were minimal compared to the significant possibility of things going badly for all the online world to see. As a more experienced writer told me soon after, ‘Arguing with a critic is like starting an arse-kicking contest with a porcupine. Even if you win, the cost to yourself won’t be worth it.’ The decades since have seen memorable catastrophes when authors have challenged reviews on Amazon and Goodreads.
So no, checking reviews and comments is not what I’m talking about. But another online saying from the 1990s was ‘three things make a post’. So here are three solid reasons for writers to stay vigilant over what’s being said about them online these days.
Generative AI has driven an explosion of misinformation. This year’s hobby among writers has been asking ChatGPT and similar tools for their biographies. The inaccuracies that result can be hilarious, as very-far-from-intelligent software scours the Net for anyone with the same name and produces a mishmash of results. Once that initial laughter fades, though, this isn’t so funny. How can someone without any prior knowledge of the subject untangle the truth from the nonsense? How can they fact-check when search-engine results are increasingly poisoned by this rubbish?
This gets much worse when an inaccurate statement could have negative professional consequences. Tobias Buckell recently discovered he was being cited as an author praising AI for helping him finish writing a novel, in a lengthy and entirely made-up quote. He was justifiably furious. The excuse that the article was AI-generated, so no one is to blame, is ridiculous. A human decided to put that lie online – unless no one checked what was being posted, which only makes this worse.
There’s also been an upsurge in online impersonation, especially of literary agents, editors and other people working in publishing. Hopeful writers are being contacted with wonderful offers, and some will be too naive to know this is not how the book trade works. Generative AI makes these scams more plausible and more common. Writers are being impersonated by scammers creating supposedly new stories in much-loved and long-ago completed series. They find themselves listed on Amazon and other sites as authors of books they have never heard of. These ‘books’ are AI-generated garbage, but how is a reader to know that before buying one and finding out that it’s trash? If the reader doesn’t know what’s happened, the danger of reputational damage for that writer is very real.
Not all of this misinformation can be blamed on generative AI. I have been checking in on a particular Wikipedia page for over a month now, since I noticed a major rewrite that stripped away an individual’s positive achievements and inserted highly critical and inaccurate material. By which I mean paragraphs that no newspaper’s lawyer would let go to print, as some statements would be legally actionable. The person making these edits was doing so under a pseudonym, and Wikipedia culture does not accept the subject of a page making changes themselves. (I have written before about issues with Wikipedia.)
I discussed this with several friends who are active on Wikipedia, and they were naturally concerned. They undertook to look into it, and assured me that Wikipedia does have systems to deal with such situations. I have observed these systems in action, and I am glad to say that the page now offers fair and balanced content. But resolving this took quite a while, and there were periods when that seriously inaccurate content remained visible. Two things follow from this. Firstly, if you are the subject of a Wikipedia page, check it from time to time. You need to know if inaccurate material has appeared before you can find help to get the facts straight. Secondly, if you are using someone’s page as a source and something doesn’t seem right, do click on the Talk tab to look for any current disputes between Wikipedia editors.
In conclusion? All these things strengthen the arguments for an author maintaining and updating their own website, to ensure there is at least one source of accurate and up-to-date information about them online, which they control.
I don’t know if you read Asimov’s Science Fiction magazine. If not, you may have missed editor Sheila Williams’ storm of contempt at all the AI-generated submissions she has to wade through. It appeared in the Jan./Feb. 2024 issue, and can now be found as a small PDF, “Chat GPT and Me”, in their editorial archives here: https://www.asimovs.com/more-stuff/all-archives/#ArchivedEditorials
I don’t read Asimov’s, so many thanks for this.