Is AI really dangerous?

This is a challenging question, so let’s start with how we understand artificial intelligence. AI refers to the ability of machines or computers to perform tasks that typically require human intelligence, such as learning, reasoning, problem-solving, perception, and natural language processing. It involves developing software or systems that can analyze data, recognize patterns, make decisions, and improve performance based on experience without explicit instructions from human operators. AI is often used in robotics, healthcare, finance, and customer service. The goal of AI is to create machines that can perform tasks that were previously only possible for humans.

If we put ourselves on AI's side, imagining for a moment that we are the AI, then we have been created much as, in the religious telling, God created human beings. We humans are reasonable, intelligent beings with the will to decide for ourselves, or at least that is what we think. Aren't we, then, the gods of artificial intelligence? By its very design, AI is developed with a supposed freedom of will, just as we were by our Creator.

Nowadays, we have people like Akihiko Kondo, one of at least 100 people referred to as "fictosexuals" who have unofficially married fictional characters, according to The New York Times. According to The Mainichi, Kondo fell in love with Hatsune Miku's somewhat robotic Vocaloid voice. Vocaloid is a Japanese voice-synthesizer software used to give a literal voice to cyber celebrities like Miku. Psychologically, this is objectophilia, a rare condition in which a person develops strong romantic or sexual attraction to inanimate objects.

I wouldn't consider a person in love with an AI dangerous, since it poses no direct threat to that person or to anyone else. The real danger lies not in humans tinkering with AI but in what AI can do independently.

AI can create other AI by itself, a process known as “automated machine learning” or “AutoML.” AutoML algorithms use complex techniques like reinforcement learning and evolutionary algorithms to automatically generate and improve machine learning models without human intervention. This process of autonomous AI development can speed up the creation of new AI applications, improve the accuracy of existing models, and increase the efficiency of AI systems. However, it is essential to note that even autonomous AI development still requires human oversight and ethical considerations.
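Real AutoML systems are far more elaborate, but the evolutionary idea the paragraph describes, a population of candidate models that is repeatedly scored, culled, and mutated without a human choosing the parameters, can be sketched in a few lines. This is a toy illustration, not a real AutoML pipeline; the "model" (a single decision threshold), the dataset, and the function names are all invented for the example.

```python
import random

random.seed(42)  # make the sketch reproducible

# Toy dataset: points in [0, 1), labeled 1 when they exceed an unknown cutoff.
data = [(i / 100, 1 if i / 100 > 0.6 else 0) for i in range(100)]

def accuracy(threshold):
    """Fraction of points a one-parameter threshold 'model' classifies correctly."""
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

def evolve(generations=30, pop_size=10, mutation=0.05):
    """Minimal evolutionary search: score, keep the fittest half, mutate to refill."""
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=accuracy, reverse=True)
        survivors = population[: pop_size // 2]
        children = [t + random.gauss(0, mutation) for t in survivors]
        population = survivors + children
    return max(population, key=accuracy)

best = evolve()
print(f"best threshold = {best:.2f}, accuracy = {accuracy(best):.2f}")
```

No human ever tells the loop which threshold to try; selection pressure alone drives the population toward a well-performing model. Industrial AutoML applies the same loop to entire network architectures and training recipes, which is why the human role shifts from designing models to overseeing the search.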

"That day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments," mused Noam Chomsky, the Massachusetts Institute of Technology cognitive scientist. "However useful these programs may be in some narrow domains," Chomsky notes, there is no way that machine learning as it exists today could compete with the human mind.

As the public intellectual writes, headlines about AI coming for our jobs and taking over our future are like something out of a tragicomedy by Argentinian writer Jorge Luis Borges — and should be taken as such. “The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question,” Chomsky expounds. “On the contrary, the human mind is a surprisingly efficient and elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.”

While currently available AI chatbots may seem to mimic human creativity and ingenuity, they do so only on the basis of statistical probability, not as a result of the deeper knowledge and understanding that underlies all human thought; they are, in this way, "stuck in a prehuman or nonhuman phase of cognitive evolution," Chomsky argued.

"Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round," Chomsky notes. "They trade merely in probabilities that change over time." "For this reason," he concluded, "the predictions of machine learning systems will always be superficial and dubious." In other words, the idea that these AIs will take over the world is implausible, given their utter lack of human-like understanding of how the world works.

AI will only be dangerous if we let it be. Elon Musk and Warren Buffett have both weighed in on this, and Buffett has a point when he compares AI to the atomic bomb. I say the bomb will not explode if no one presses the button.
