LEFT TO MY OWN DEVICES: This time it’s okay to use “slippery slope”



Artificial intelligence is a growing subfield of information technology. It’s a fascinating phrase – “artificial intelligence” – just in my reading of the words. How, for example, could something be intelligent and artificial at the same time? At the root of the word “intelligence” is the ability to understand: the Latin intelligere. We are all intelligent to some degree because we can understand things. We can communicate because we can understand.

The “artificial” component of the term indicates intelligence produced by humanity rather than by the natural course of life. Artificial sweeteners make diet soda a thing. Artificial turf transforms a concrete base into a lawn. If you want to get lost in a logical downward spiral, consider that these artificial versions of natural things can become part of our reality only because of the intelligence of humankind, some of whom dreamed up and then created the sugar and grass substitutes. Without human intelligence … no artificial intelligence.

Technology forecasters would have you understand that AI will be a normal, perhaps ubiquitous, part of IT for years to come. It will be a movement like the one in the 1990s, when computers became part of the average household. A wave of IT swept modern society, and computing power was no longer limited to governments, universities, and large corporations. That era represented a kind of revolution, technologically speaking, and AI is expected to have the same scope and power in its effects. Or so many say.

The man-made insights programmed into AI devices lead to intelligence demonstrated by machines. In the academic world, researchers refer to AI-enabled machines as intelligent agents. Leave it to scholars to define “AI” with something that may be abbreviated to “IA”. Persistent arguers, these types of researchers tend to be. They view AI as a spectrum of technology and concept, from the lower end, where computers perform artificial general intelligence, to the upper, still aspirational, end, where artificial biological intelligence is the goal. The more general form of AI we’re used to reading about uses computing power to perform statistical methods, examine data, and predict outcomes. Mathematics, statistics, engineering, and the other sciences limit the performance of artificial general intelligence somewhat.

The more intriguing, if not frightening, type of AI, the kind that seeks understanding on a biological level, relies on human emotions and psychology. In the 1950s, when the first notions of AI were formed, the goal was to precisely simulate human intelligence and its thought processes. That remains a tall order 70 years later. Then again, any dream that has haunted some of the world’s most skilled thinkers for generations is bound to have such lofty goals, I suppose. Recognize, too, that the early, almost incomprehensible goals of AI had already been imagined in fiction. What did Mary Shelley’s Victor Frankenstein do in his laboratory if not, to a certain extent, develop an AI device? That was in 1818, and I’d bet it has predecessors, too.

The torch-and-pitchfork mob that came after Dr. Frankenstein’s monster must have intuitively known that mankind’s creation of intelligence was a dangerous endeavor, a fitting demonstration of the often misplaced, usually overused metaphor of the slippery slope. The wheel and fire were good for humanity, sure. Swapping storytelling and oral traditions for the written word … good technological advancement. Harnessing energy from nature to generate alternating current, and the machines that consume it, was for the most part a good step. When exactly did the slope’s angle change so dramatically that it triggered fears? I think that’s either rhetorical, or at least the answer depends on the person answering. You know someone who swears they’ll never use a cell phone, or who insists on vehicles with manual transmissions, or who “just doesn’t get” TikTok. Where the slope tips is as complicated and unknowable as any point that all people would agree marks the turning point.

There is a sense, in this author’s weak mind, that AI is turning away from being a useful, intriguing technology, and note that this is not the old trope, “The robots are coming for us.” The man-made intelligence that has evolved since the 1950s or earlier has not achieved its goals and objectives, but it has come close enough to worry me. Today’s AI researchers have created such believable and effective additions to human intelligence that even the most gifted scientific and medical researchers, some of whom work on and develop AI themselves, are fooled by AI-generated misinformation.

The word “misinformation” has attracted your attention and skepticism in recent years, unless you somehow missed the last two presidential elections. Good for you, if you were busy with better things! A number of trade journalists reported last week that scientists in medicine, cybersecurity, and almost every other field now face the challenge of separating fake, AI-derived “information” from actual, reliable, and valid information. It seems that one way to show off your AI chops is to get an AI-generated article published in a peer-reviewed scientific journal, exhausting the researchers who must vet it. To sum up, AI has gotten good enough – perhaps close enough to our own capabilities – that it can generate false information on its own in some of the most critical areas of research, such as defense, medicine, and AI itself. Lewis Carroll himself would be impressed with the results. Remember the Cheshire Cat: “I’m not crazy, my reality is just different from yours.”

Misinformation deployed to damage a reputation or even win an election is particularly noticeable when it works. Now think about the threats it poses to these seriously critical areas of research. We are intelligent, and we understand enough, to know that this slope may not end in a soft landing.

Ed is a cybersecurity professor, attorney, and trained ethicist. Reach him at edzugeresq@gmail.com.


