In her lengthy but very good article on the takeover of AI, Naomi Klein quotes Geoffrey Hinton as saying we won't be able to distinguish reality from fake, leading to a catastrophic existential crisis - the end of humanity as we know it. I have to think this is a bit overblown, as we have been dealing with historical revisionism and fake and misleading news stories for centuries, not to mention all kinds of forgeries that many people take to be real, like the "Shroud of Turin." AI may make it easier to peddle this kind of bunk, but a hell of a lot of people have been misled to one degree or another for a very long time.
While it is refreshing to note that only 20 percent of Americans take the Bible to be literal, down 4 percent from 2017, that's still a hell of a lot of deluded folks out there, and the mythmakers persist. There is actually a field of Biblical archaeology, with its own society, that tries to prove Biblical events like the Flood in its continual pursuit of the literal word of God. This despite Maimonides having cautioned readers in his classic guide to the Bible back in the 12th century, in which he noted repeatedly the use of allegory and fable to get across larger ideas.
Why these myths persist is relatively easy to explain. People want to believe, and they will believe almost anything if it fits their cognitive biases - a term that gained wide circulation during the 2016 Trump campaign, when Trump spouted an endless litany of false and misleading stories that many Americans chose to believe over the numerous fact checks that were provided.
AI can be used to further prey on these cognitive biases. We see it in the way computer algorithms feed us stories based on our personal search history, essentially consigning us to our own little bubbles. IT specialists have been warning about this for years, but the warnings have largely fallen on deaf ears because Google and social media sites allow advertisers and political campaigns to target their audiences with these very algorithms.
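To give a sense of how simple that feedback loop can be, here is a toy sketch in Python - my own illustration, with made-up scoring and example stories, not any platform's actual code - in which stories that echo a user's past searches float to the top of the feed:

```python
# Toy illustration only (assumed example, not any platform's actual code):
# stories that overlap with a user's past searches get ranked higher,
# so the user keeps seeing more of what they already searched for.

def overlap(story, history_words):
    """Count how many of the user's past search terms appear in the story."""
    return len(set(story.lower().split()) & history_words)

def rank_feed(stories, search_history):
    """Sort stories so the most 'familiar' ones come first."""
    history_words = {word.lower() for query in search_history for word in query.split()}
    return sorted(stories, key=lambda s: overlap(s, history_words), reverse=True)

if __name__ == "__main__":
    history = ["election fraud claims", "deep state conspiracy"]
    stories = [
        "Fact check: no evidence of widespread election fraud",
        "New claims of election fraud and the deep state",
        "Local bakery wins national award",
    ]
    for story in rank_feed(stories, history):
        print(story)
```

Run this and the "deep state" story comes out on top, the fact check second, the bakery last - the bubble reinforcing itself with a few lines of ranking logic.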
The most notorious case was Cambridge Analytica, which was able to influence elections all around the globe, including the 2016 US election, by harvesting personal Facebook data and feeding directly into these cognitive biases. Interestingly enough, this didn't work so well in the 2020 election. People seemed a little more circumspect about what they saw in their Facebook timelines or pulled up in Google searches after the Cambridge Analytica scandal broke. Most people were still able to discern fact from fiction, at least to some degree, as Trump was not able to get away with the same lies, defeating Goebbels's adage that a lie repeated often enough becomes truth.
The Nazis were able to dupe the German public, or at least enough of it to gain power in 1933, without any use of AI. They understood the power of feeding into people's worst fears and the use of spectacle to drive their points home. We look back and marvel at how an ugly little man like Hitler was able to get away with it, but you will find such petty tyrants everywhere. The only thing that stopped Trump from declaring martial law after his 2020 election defeat was his own cowardice. We can only hope that members of his cabinet and the military wouldn't have followed through with it.
I suppose AI is a game changer in that it is significantly more advanced than these previous propaganda efforts and would allow a tyrant with a sharper analytical mind to gain greater influence over the public. It makes you wonder what Elon Musk is up to in launching his own AI company. Fortunately, he could never be president, having been born in South Africa, but he could help someone who would serve his interests become president.
As it is, Elon has his hands in a lot of pies, largely because of the raw materials and manpower necessary to achieve his vision. What that vision is, no one is exactly sure, as he is prone to erratic behavior that often undermines his own efforts - his takeover of Twitter being one example. However, his admirers believe he is so much more advanced than us mere mortals that the takeover was all part of some grand strategy we are simply unable to comprehend. I suppose bringing Tucker Carlson on board is part of this great vision.
There was an acronym in the early days of computer programming - GIGO, garbage in, garbage out. That's why I don't get too worried about AI. We already see it when we type text messages or responses on social media: AI tries to guess the word we are typing, and even the next word, largely based on what we have typed before. It even translates long passages in a wide variety of languages, including Lithuanian. Amazingly, it gets better and better, to the point that translators often use a Google translation as their base text to speed up their work. In that sense it is a useful tool. Whether it can become truly cognitive is another matter.
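Here is a minimal sketch of that next-word guessing, again in Python and again my own toy example - nothing like the scale of a real phone keyboard or Google Translate - but it makes the GIGO point: the model can only echo what it was fed.

```python
# Toy next-word guesser (assumed example, not how any real keyboard or
# large language model is built): it simply remembers which word most
# often followed each word in what was typed before.

from collections import Counter, defaultdict

def train(history_text):
    """Count, for every word, which words have followed it."""
    words = history_text.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def suggest(followers, last_word):
    """Suggest the word that most often followed last_word, if any."""
    candidates = followers.get(last_word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

if __name__ == "__main__":
    typed_before = "see you later. see you later. see you soon."
    model = train(typed_before)
    print(suggest(model, "see"))  # 'you'
    print(suggest(model, "you"))  # 'later.' - punctuation and all, garbage in, garbage out
```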
Geoffrey Hinton thinks so. He modeled his neural networks along the same lines as the human brain, largely because he wanted to better understand how the brain works. As a result, he came up with some of the best machine learning algorithms yet, which is why Google paid him big bucks. It also seems they made him sign an NDA, as he only speaks appreciatively of his former employer, even as he talks ominously about the consequences of AI, much as Oppenheimer did about the atomic bomb.
As Naomi Klein pointed out, we learned to live with the atomic bomb, and she imagines it will be the same with AI. Its analytical models won't solve world hunger or climate change, as it takes humans to initiate those efforts. Instead, CEOs will use AI to further their personal and shareholder interests, as they do with any new technology, because that's the way mere mortals work.
It's also why we believe in gods. We would like to think there is someone or something out there that is above all these base instincts. Some would like to think that AI will bring us closer to this godhead, but all it does is further our delusions, or "hallucinations," as Klein wrote in her article - the term AI specialists use when a model generates complete nonsense. In the end, one feeds into the other. The Bible, like any religious text, was not written by God but by mortals pretending to be God, or at least trying to imagine what God might be like. The same goes for AI.