AI Takeover: ‘The Good, The Bad & The Unknown…’

“Minority Report” (2002), mall scene. Source: TechCrunch.com

How far could our lives be transformed by digital technologies?

The world’s shiny new toy: AI 

It was only in 2002 that the world applauded the artistic imagination of Steven Spielberg’s pathbreaking futuristic sci-fi film “Minority Report”. The Tom Cruise starrer has an iconic scene in which, soon after he walks into a mall, his retinas are scanned, and the Artificial Intelligence (AI) powered mall screens recognize him and call out to him with personalized advertising.

Lo and behold, about 20 years later, we now live in a world of self-driving Tesla cars, virtual assistants like Siri and Alexa, fully automated Amazon Go stores with zero human intervention, resourceful humanoid robots, and even interactive AI-powered chatbots built on natural language processing, such as ChatGPT, with our day-to-day reliance on AI soaring as a result. It is a no-brainer that these AI-powered technologies have eased human life in the most effortless and efficient way: presentations, creative and design work, and laboriously lengthy administrative tasks are all processed and executed with exquisite finesse.

Yet, as ardent AI users, this leaves us at a crossroads of self-reflection. How much AI-based consumption is too much? Would it be unrealistic hyperbole to foresee the obsolescence of human intelligence over the years and the rule of machines, as in “The Terminator” movie franchise? This article takes a deep plunge into the areas of basic human value that could be most notably influenced by AI.

Since AI is an emerging technology, there are not yet many studies pinpointing the exact, tested effects of extensive human reliance on AI. However, numerous studies on the impact of the internet and technology have been conducted, and these can be extrapolated to understand how AI may affect us as well.

Impediment to human cognition & brain development: Reverse Darwinism? 

One of the most significant negative effects of technology on human cognition is its impact on memory. Gone are the days of the yellow pages, the traditional telephone directory once popular in the ’90s, when families remembered whole strings of landline and pager numbers by heart. Today, with several terabytes of storage available for an affordable subscription, human memory is slowly being chipped away piece by piece. A study conducted by Betsy Sparrow and her colleagues at Columbia University found that people are less likely to remember information if they believe it is readily available online. Sparrow refers to this as the “Google effect”: we rely on search engines to recall information instead of our own memory.

In addition to memory, technology and AI have also had a negative impact on our attention span. With the constant bombardment of notifications, emails, and social media updates, it can be challenging to maintain focus on a particular task for an extended period. A widely cited Microsoft study found that the average human attention span decreased from 12 seconds in 2000 to just eight seconds in 2013, shorter than the attention span of a goldfish!

Not to mention the toll technology takes on our critical thinking skills. Social media algorithms and search engine optimization techniques are designed to show us content that aligns with our existing beliefs and biases. As a result, we are less likely to encounter opposing viewpoints and challenge our own beliefs, which can erode critical thinking and reasoning skills, since we are rarely exposed to alternative perspectives.

Overreliance on technology and AI can also dull creativity and imagination. While technology has given us new tools for creative expression, it can also stifle imagination by limiting our exposure to new ideas and experiences. Without the opportunity to think outside the box and explore new possibilities, our creative abilities may suffer.
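The filter-bubble dynamic described above can be caricatured in a few lines of code. The toy simulation below is purely illustrative: the topics, click probability, and “serve more of whatever gets clicked” rule are invented for this sketch and are not taken from any real platform. It simply shows how such a feedback loop tends to collapse a user’s feed onto a single topic.

    # Toy simulation of a naive recommender feedback loop (illustrative only).
    import random
    from collections import Counter

    random.seed(0)
    topics = ["politics", "sports", "science", "art"]
    clicks = Counter({t: 1 for t in topics})  # start with mild interest in everything

    for step in range(200):
        # Recommend a topic in proportion to past clicks (a rich-get-richer rule).
        total = sum(clicks.values())
        weights = [clicks[t] / total for t in topics]
        shown = random.choices(topics, weights=weights, k=1)[0]
        # Assume the user clicks whatever they are shown most of the time.
        if random.random() < 0.9:
            clicks[shown] += 1

    print(clicks)  # after a while, one topic dominates the simulated feed

Run it a few times with different seeds and the same pattern appears: whichever topic happens to get an early lead ends up crowding out the rest.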

Source: Getty Images.

Several neurological and brain-imaging studies provide evidence of the ill effects of excessive dependency on technology, AI, and automation on brain development and human cognition. These studies have used various methods, including functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and behavioral tests. Two studies that used fMRI to investigate the impact of internet addiction on the brain found that individuals with internet addiction showed reduced gray matter volume as well as abnormal white matter integrity in areas related to decision-making, emotional regulation, and cognitive control. These findings suggest that excessive internet use can lead to structural changes in the brain that impair cognitive functioning and emotional regulation.

In her book “Mind Change: How Digital Technologies Are Leaving Their Mark on Our Brains,” Susan Greenfield writes, “Social networking sites such as Facebook and Twitter have become a pervasive aspect of modern life. These sites are designed to be as compulsive as possible, so we can’t help checking them repeatedly throughout the day, often at the expense of work, study or other activities. But all this activity can have a negative impact on our brains, reducing our ability to concentrate, to think deeply, and to form lasting memories.”

Greenfield also highlights the need for further research into the impact of digital technologies on our brains and calls for greater awareness and education around responsible digital usage.

Sociological Impact of AI

The growing prevalence of AI in daily human life has the potential to bring about significant socio-anthropological impacts, many of them arguably more negative than positive. Automation replacing human labor, and the resultant job displacement, is a dialogue the world has been grappling with since the very invention of computers.

Beyond the tangible, what the world fails to notice is the potential intangible damage that lurks beyond our naked eye. In striving to mirror society, AI algorithms are trained on existing large data sets and can thereby perpetuate, and even amplify, existing biases and discrimination, such as racial or gender bias. The teams that develop AI systems are not always diverse, and this can result in blind spots and biases in the design and development process. Lack of algorithmic transparency and unchecked feedback loops simply add fuel to the fire. The eventual result is an uphill battle against unfair treatment and discrimination for marginalized communities.
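To make the bias concern a little more concrete, here is a minimal sketch of one simple check, the disparate impact ratio between groups’ approval rates, applied to entirely made-up decisions from a hypothetical loan-approval model. The data, group names, and the 0.8 threshold are illustrative assumptions, not drawn from any particular system or regulation.

    # Minimal illustrative check of approval-rate disparity across groups (made-up data).
    from collections import defaultdict

    # Hypothetical (group, approved) outcomes a model might produce.
    decisions = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
    ]

    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1

    # Approval rate per group.
    rates = {g: approvals[g] / totals[g] for g in totals}
    print("approval rates:", rates)

    # Disparate impact ratio: lowest approval rate divided by the highest.
    # A common (and much-debated) rule of thumb flags ratios below 0.8.
    ratio = min(rates.values()) / max(rates.values())
    print(f"disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("warning: approval rates differ substantially across groups")

A single ratio like this is of course a crude signal, which is precisely why the calls for transparency and diverse development teams mentioned above matter: the hard part is deciding what to measure and what counts as fair.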

Hikikomori. Source: Toshifumi Taniuchi / PX3.fr

High dependency on chatbots and AI-powered virtual assistants deepens social isolation and the lack of human connection. Hikikomori, a Japanese term, refers to a pathological condition of extreme social withdrawal and isolated confinement that is highly prevalent among Japanese youth. A Japanese government survey in 2019 revealed that more than half a million socially isolated individuals belonged to the 15 to 35 age bracket. While technology is not the sole cause of hikikomori, research suggests that excessive digitization and the growing pull of the virtual world have certainly lent a convenient helping hand in aggravating it.

Neo-Governance Policies with AI

Neo-governance policies on AI focus on new ways of strengthening legal frameworks to ensure the safe and responsible development and deployment of AI technologies.

The European Union (EU) has developed a set of guidelines for the ethical development and use of AI. The EU has also proposed regulations that would require companies to obtain approval for the use of high-risk AI systems, such as those used in healthcare and transportation. Additionally, the EU implemented the General Data Protection Regulation (GDPR) in May 2018 to protect the privacy of EU citizens. 

In the United States, the National Artificial Intelligence Initiative Act of 2020 became law on January 1, 2021; it aims to promote AI research and development and to establish policies that ensure the safe and responsible use of these technologies.

The Algorithmic Accountability Act, a bill introduced in the US Congress in 2019, would mandate that companies assess and address potential bias in their AI systems. It would also require companies to provide explanations for automated decisions that affect individuals.

The Montreal Declaration for Responsible AI, Singapore’s Model Artificial Intelligence Governance Framework, and the Japanese Society for Artificial Intelligence’s Guidelines for AI Ethics, to name a few, have also attempted to articulate standards that could address the pressing challenges of AI-based bias, privacy concerns, and ethical issues.

While these legal frameworks form a good theoretical base to refer to and build on, there is still a long way to go in ensuring that technology is actually developed under a globally compatible standard of holistic safeguards.

Where do we go from here? 

As we continue to witness rapid advancements in AI and digital services, it is impossible to deny the significant impact they will have on our future. While some may fear that these technologies will lead to the loss of our humanity, it’s important to remember that today we still have the power to shape their development and use. 

Investing in research that explores the ethical implications of AI and promotes transparency in its development is crucial to ensuring that these technologies are used for the greater good.

Additionally, we must cultivate a mindset that embraces the symbiotic relationship between humans and AI. This includes recognizing the unique abilities and limitations of each to synergize and achieve mutually compatible goals. By doing so, we can ensure that we not only survive, but thrive in a world where AI and humans coexist harmoniously.

About Shweta Ravi
Shweta hails from Mumbai, India, and is pursuing her Master’s in International Trade, Finance and Management at Yonsei GSIS. Her background in Psychology, coupled with vast travel experience and an interest in languages, drives her to explore the dynamics of human interaction across diverse socio-economic canvases. Shweta has also worked as a professional language interpreter in Mandarin (Chinese) and Korean for both the Indian and Korean governments and for several global corporates. She is also a Latin dance aficionado with specialized training in various styles of Salsa, Bachata, and Afro-Cuban dance.