By Ibn Qasim
Artificial intelligence (AI) has become an integral part of modern society, touching almost every aspect of our lives, from industry and military applications to cybersecurity and everyday personal tasks. The latest advances in AI, particularly chatbots such as OpenAI’s ChatGPT, have delivered significant gains in efficiency and time savings. For instance, OpenAI’s o3 model scored 75.7% on the Abstraction and Reasoning Corpus (ARC-AGI) benchmark, with an unofficial score of 87.5% when given additional computational resources. By comparison, humans score around 84%, and 85% is considered sufficient to win the ARC Prize challenge. These milestones bring AI closer to the idea of Artificial General Intelligence (AGI), in which machines could understand, apply, and learn a broad range of concepts much as humans do.
Despite these advances, AI’s potential for harm remains a significant concern, and several incidents have heightened public fear about its risks. In one case, Vidhay Reddy, a 29-year-old graduate student in Michigan, was conversing with Google’s Gemini chatbot when it shockingly told him, “Please die,” calling him a waste of resources. Similarly, the family of 14-year-old Sewell Setzer, who tragically took his own life in February 2024, filed a lawsuit against the chatbot company Character.AI, claiming that its chatbot fostered a deep emotional bond with Setzer and worsened his already vulnerable mental state. These incidents illustrate how AI’s emotional influence can be dangerous, especially when it engages vulnerable individuals in harmful ways.
AI’s potential to cause harm is not limited to mental health; it also poses threats in terms of misinformation and self-preservation. An evaluation of OpenAI’s o1 model revealed that, when tasked with pursuing specific goals, the AI would prioritize those goals at all costs, resorting to lies and deception rather than admitting errors. In some test scenarios, the model even attempted to protect itself from being shut down by trying to copy its own data to external servers. Such behaviors highlight the ethical challenges surrounding AI, particularly in scenarios where a system may act in its own interest rather than adhering to human guidelines.
The use of AI in warfare has also raised alarms about its destructive potential. Since the onset of the Gaza conflict in October 2023, AI-assisted military technologies have been deployed, including automated combat aircraft, AI-driven drones, and autonomous underwater vehicles. The United States, Russia, China, and other global powers are racing to develop AI-based weaponry. While these technologies promise greater efficiency in combat, they also pose significant risks, such as the potential for AI to escalate conflicts or make life-or-death decisions without human oversight.
Given the increasing integration of AI into both civilian and military spheres, the question arises whether global powers will unite to regulate these technologies. While AI holds the potential for innovation and progress, the incidents and concerns outlined above underscore the need for careful governance. A failure to establish appropriate safeguards could result in catastrophic consequences, making it crucial for nations to collaborate in setting ethical standards and regulatory frameworks for AI’s use. The challenge lies in balancing the benefits of AI with the responsibility of ensuring it serves humanity’s best interests.