This is not another blog meant to scare you or flag a doomsday signal. Many of you have already come to terms with the threat from your inorganic counterpart, your latest frenemy: Artificial Intelligence (AI). The synthetic content created by Generative AI, be it misinformation or deepfakes, has amped up the fear quotient, unsettling AI experts and world leaders alike. The narrative has shifted from man-machine synergy to humans buckling up the safety belt against a machine of their own creation.
But just how dangerous is AI? It could replace or displace us at our jobs, polarize society, burgle our privacy, or even drive Homo sapiens to extinction. Or leave us living in the Matrix someday! It's about time we told fact from fiction and separated signal from noise.
Here are the five most significant risks from AI (not ranked in any particular order) that we must guard against.
1. Consumer Privacy: One of experts' biggest concerns is the intersection of AI with consumer data privacy and security. Generative AI's Large Language Models (LLMs) are trained on datasets that sometimes contain Personally Identifiable Information (PII) about individuals, and that data can sometimes be extracted with a simple text prompt. Compared with traditional search engines, it can also be harder for consumers to find and request the removal of their information. Companies that develop or fine-tune LLMs must ensure that PII stays out of their training data and that it can be deleted from these models in compliance with privacy laws (a simplified sketch of what such scrubbing can look like appears after this list).
2. AI-driven misinformation, a threat to the global economy: The World Economic Forum (WEF) has issued a warning about the havoc that AI-driven misinformation and disinformation could wreak on upcoming elections, posing a significant threat to the global economy. In its annual risks report, the WEF expressed deep concern that the spread of false information could disrupt politics, leading to social unrest, strikes, and even government crackdowns on dissent. The report, which gathers the opinions of 1,400 experts, paints a rather gloomy picture: 30 per cent of respondents believe there is a high risk of a global catastrophe in the next two years, while two-thirds fear a disastrous event within the next decade. The rise of AI-powered misinformation and disinformation comes at a critical time, as several countries, including major economies like the United States, Britain, Indonesia, India, Mexico, and Pakistan, are gearing up for important elections in the coming years. This trend has the potential to undermine the democratic process and manipulate public opinion on an unprecedented scale, as AI makes the dissemination of false information more sophisticated and widespread than ever before.
3. Workforce roles and morale: AI can do a whole bunch of the things knowledge workers do every day, like writing, coding, creating content, summarizing, and analyzing. Worker displacement and replacement have been happening for a while now, ever since AI and automation tools first came into play, but things have really sped up with the recent advances in Generative AI technology. It is changing the future of work, and companies that take ethics seriously are investing in managing that change: they are preparing for the new roles that Generative AI applications will create by helping their employees develop skills like prompt engineering. But here's the real challenge: how do we adopt Generative AI in a way that doesn't upend our organizational design, our work, and our workers?
4. Human interactivity: Back in the day, when AI was just spitting out predictions and robots were clumsily maneuvering around rooms filled with chairs, the question of how humans and AI interact seemed more like a profound philosophical puzzle than a real concern. But now, with AI infiltrating every aspect of our lives, the question has become urgent: how does our interaction with AI actually affect us? There are serious physical safety concerns to consider. Take a notorious incident from 2018: a self-driving car operated by the rideshare giant Uber struck and killed a pedestrian. The court held the backup driver responsible, as she had been watching a show on her phone instead of paying attention to the road. But that's not the only scenario where AI could harm us physically. If companies rely too heavily on AI predictions for maintenance schedules without other checks in place, machinery malfunctions could seriously injure workers. And let's not forget healthcare: if the models used for diagnosing illnesses are flawed, misdiagnoses could put patients' lives at risk. That's definitely not what the doctor ordered!
5. Lack of explainability and interpretability: Many of these fancy generative AI systems group facts together based on probabilities; they have learned to associate different bits of data. But here's the thing: when you use apps like ChatGPT, they don't spill the beans on how they arrived at an answer, and that raises serious doubts about the reliability of the output they give us. When we dig into Generative AI, we expect to find out why things happen the way they do; we want a clear cause-and-effect explanation. But Machine Learning (ML) models and Generative AI systems are about finding correlations, not causality. They're like detectives looking for connections who don't care about the why. That's where we humans need to step in and demand answers. We need to know why the model gave a particular answer and whether it's a legit explanation or a random guess. Until we can trust these systems to explain themselves reliably, we shouldn't rely on them for anything that could seriously impact our lives and livelihoods.
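To make that last point a bit more concrete, here is a minimal sketch in Python (assuming scikit-learn is installed, and using its bundled breast-cancer dataset purely as a stand-in for any opaque model). It applies one common post-hoc technique, permutation importance, to ask which inputs a model's answers actually lean on. Even this kind of "explanation" is correlational evidence, the detective's connections, not the cause-and-effect account we ultimately want.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque, non-linear model on a small public dataset (a stand-in for
# any "black box" whose answers we want to interrogate).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy drops.
# A big drop means the model leans heavily on that feature for its answers.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Show the five features the model depends on most. This tells us *what* the
# model is using, not *why* the underlying relationship holds.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f} (+/- {result.importances_std[i]:.3f})")
```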
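And circling back to the consumer privacy risk in point 1, here is a deliberately simplified sketch of what scrubbing PII from text before it reaches a training or fine-tuning corpus can look like. The patterns and placeholder tokens are illustrative assumptions, not a production recipe; real redaction pipelines add named-entity recognition and purpose-built tooling on top of anything like this.

```python
import re

# Very rough patterns for two obvious kinds of PII. Real pipelines use
# named-entity recognition and dedicated redaction tools as well.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub_pii(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens before the text
    is added to a training or fine-tuning corpus."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

record = "Contact Jane Doe at jane.doe@example.com or +1 (555) 010-2345."
print(scrub_pii(record))
# -> "Contact Jane Doe at [EMAIL] or [PHONE]."
# The name "Jane Doe" slips through, which is exactly why pattern matching
# alone is not enough to keep PII out of language models.
```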
The risks from AI are very real. But unlike the genie of fables, we can't put AI back into the bottle of innovation. With AI in your toolkit, you have a potent means of tackling the world's moonshot challenges in food, health, and climate change. AI's benefits can outweigh its risks; striking that balance is what counts the most.