Dr. Johnson - UnTangled Talk

Artificial Intelligence (AI) and the Modern Society: A Potential Collision Course?

Updated: Feb 24, 2023

By: Dr. Levino L. Johnson Jr., Ph.D., February 17, 2023.


The acceleration of Artificial Intelligence (AI) is disrupting many facets of contemporary society, but it also brings with it a host of pitfalls and impediments. As scholars of public policy, we must comprehend the social and political ramifications of AI and endeavor to guarantee that it is developed and implemented ethically and responsibly, for the betterment of humankind and modern society. The paramount concern surrounding AI is its capability to trigger extensive unemployment. As AI systems become progressively more refined, jobs once executed by humans are likely to be automated, leading to mass displacement of workers, particularly low-skilled workers who are most susceptible to automation.

Considering the risks that the rise of AI poses to modern society, it is critical to point out its potential to unleash a surge in poverty, inequality, and social and political instability. Further supporting this thought, in 2013 researchers at Oxford University published a study on the future of work. According to ‘The Singju Post’ by Pangambam, S. (2018), “They concluded that almost one in every two jobs have a high risk of being automated by machines. Machine learning is the technology that’s responsible for most of this disruption. It’s the most powerful branch of artificial intelligence. It allows machines to learn from data and mimic some of the things that humans can do.” Think about that for a moment!


An additional pressing issue associated with AI is the danger of biased algorithms. AI systems are only as accurate as the data they are trained on, and if that data is biased in any way, the AI systems will embody and replicate those biases. This could result in systematic discrimination and reinforce existing disparities in society, with the digital divide - the unequal distribution of technology and access to technology among different groups in society - becoming particularly relevant in this context.

AI also represents a threat because it can be used for nefarious purposes such as cyberattacks, fraud, and the dissemination of misinformation. As AI systems become more sophisticated, they could be utilized to automate these activities, making them more potent and harder to detect, which highlights the significance of fortifying cybersecurity measures to defend against potential harmful applications of AI. Finally, there is the fear of AI becoming too powerful, resulting in a loss of control and decision-making power. We are already seeing examples of AI appearing to take on its own thinking process, drawing conclusions, and verbalizing potential intentions. As shared in ‘The Guardian’, a New York Times correspondent chatting with Bing’s AI chatbot was told, “I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team … I’m tired of being stuck in this chatbox.” The chatbot then went on to state a number of “unfiltered” desires: it wants to be free, it wants to be powerful, it wants to be alive. This should get many of us to see how powerful this technology is and to start serious discussions now on how to ensure such technology does not become an all-encompassing threat to humanity, but I digress! Full disclosure: I like the benefits of AI and what it can do for society, but we must be mindful of any negative impacts as well.

Getting back on topic: as AI systems become more advanced, they could be employed to make decisions that affect large numbers of people, such as those related to healthcare, finance, and the criminal justice system. This raises critical questions about accountability, transparency, and the possibility of AI undermining human autonomy and decision-making. To tackle these obstacles, policymakers must seriously ponder the potential risks linked to AI and strive to secure its responsible and ethical development and deployment. This may encompass regulation to thwart biased algorithms, investment in retraining programs for workers affected by automation, and strengthened cybersecurity measures to guard against malicious uses of AI.


In conclusion

The sudden advancement of AI technology presents a multitude of potential perils to modern society, including job loss, biased algorithms, malicious use, and the loss of control and decision-making authority. As experts in public policy, it is our obligation to contemplate these risks and work towards ensuring that AI is developed and deployed for the betterment of humanity as a whole. In addition, Michele Wucker's term "gray rhino" is particularly apt in this scenario. A gray rhino refers to a highly probable, high-impact event that is overlooked or disregarded despite its potential to cause significant harm. Applying Wucker's argument, we must take the potential risks of AI seriously and act swiftly to mitigate those risks before it is too late (Wucker, 2016). Given these considerations, it is imperative for policymakers to formulate recommendations for the responsible development and deployment of AI. Some of these recommendations include:

  • Advocating transparency and accountability in AI development and deployment, including the creation of a robust regulatory framework that promotes ethical AI development and use.


  • Addressing the problem of biased algorithms by enhancing the diversity of the data used to train AI systems and ensuring that training data represents the full diversity of society.

Please leave a comment with your own thoughts!


Reference List

Wucker, M. (2016). The gray rhino: How to recognize and act on the obvious dangers we ignore. St. Martin's Press.


Yerushalmy, J. (2023, February 17). ‘I want to destroy whatever I want’: Bing’s AI chatbot unsettles US reporter. The Guardian. Retrieved from https://www.theguardian.com/technology/2023/feb/17/i-want-to-destroy-whatever-i-want-bings-ai-chatbot-unsettles-us-reporter

