Telephone On-Hold audio blog
Is AI Voice Phishing the Next Big Security Threat?
Artificial Intelligence (AI) opens up countless innovative possibilities. But what about the flip side of AI? One important downside is the threat it poses to cybersecurity.
Many companies are now implementing voice technology on a large scale. Soon our phones, apps, smart devices, cars, banks and even offices will identify consumers by their voice. We are moving towards a conversational economy. While we'll reap benefits such as increased customer satisfaction, there is also an economic opportunity for fraudsters.
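Under the hood, voice-based identification typically works by turning a recording into a numeric "voiceprint" (an embedding) and comparing it to the one captured at enrollment. The sketch below is a toy illustration of that comparison step, assuming cosine similarity and a fixed acceptance threshold; the vectors and the `verify_speaker` helper are made-up placeholders, not any vendor's actual system. It also shows why cloned voices are dangerous: a good enough imitation produces an embedding close enough to pass.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify_speaker(enrolled, sample, threshold=0.85):
    """Accept the caller if their voiceprint is close enough to the enrolled one."""
    return cosine_similarity(enrolled, sample) >= threshold

# Toy embeddings standing in for what a real voiceprint model would produce.
enrolled_voice = [0.9, 0.1, 0.4, 0.3]
same_speaker   = [0.88, 0.12, 0.41, 0.28]  # slight session-to-session variation
other_speaker  = [0.1, 0.9, 0.2, 0.7]

print(verify_speaker(enrolled_voice, same_speaker))   # a matching voice passes
print(verify_speaker(enrolled_voice, other_speaker))  # a different voice fails
```

A synthetic clone that lands inside that similarity threshold would be accepted just like the genuine speaker, which is exactly the attack surface this article describes.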
Deepfake technologies use machine learning and artificial intelligence to create a synthetic human voice or even video. These models take a real person's voice and generate an imitation good enough to fool people.
As little as five minutes of someone's audio is needed to create a realistic clone. As the amount of source audio increases, so does the quality of the cloned voice, eventually producing a result humans cannot tell apart from the original.
AI-based voice attacks are a growing security threat. Imagine receiving a call from an unknown number but hearing a family member's voice on the other end of the line. Would you trust them and do what they ask? Most likely, you would. Now imagine the fake voice on the other end is your boss's. Employees tend to do whatever their CEO, boss or manager asks, so imagine all the fraudulent implications this brings.
Insurance, retail, banking, card issuers, brokerages and credit unions are currently the industries facing the highest fraud risk. The total fraud identified between 2013 and 2019 added up to around 1.15 billion dollars, and fraudulent calls are increasing year over year: in 2013, 315 thousand such calls were identified, whereas by 2019 that number had risen to 470 million.
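Taking those figures at face value, a quick back-of-the-envelope calculation shows just how steep that curve is:

```python
# Figures as reported in the article (fraudulent calls identified per year).
calls_2013 = 315_000
calls_2019 = 470_000_000
years = 2019 - 2013

growth_factor = calls_2019 / calls_2013            # overall multiplier over the period
annual_growth = growth_factor ** (1 / years) - 1   # implied compound annual growth rate

print(f"Overall growth: {growth_factor:,.0f}x over {years} years")
print(f"Implied compound growth: {annual_growth:.0%} per year")
```

That works out to roughly a 1,500-fold increase over six years, i.e. the call volume more than tripling every year on average.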
All in all, companies will have to overcome a lot of challenges to provide the right solutions to ensure fraud is detected and prevented properly.
Also, let’s not forget that AI Voice Phishing is not the only threat that AI brings to the table. Even though it has a massive positive potential it also comes with many threats to security.
Elon Musk and other experts have already expressed their concern that AI development is outpacing humanity's ability to manage it safely. Musk distinguishes two types of AI: case-specific applications of machine intelligence (such as self-driving cars) and general machine intelligence, which has an open-ended utility function. In his view, this second type of AI commands a million times more compute power.
Normally, Elon Musk is not an advocate of regulation. But, as most experts would agree, this is a situation that poses a serious danger to the public. Ideally, we should be proactive rather than reactive when it comes to AI regulation, especially considering that regulation is far behind the curve on AI development.