Business Owner Stories


Why Artificial Intelligence makes Ransomware Particularly Dangerous

If this is the screen you see when you start up your computer, you are about to have a really bad day…

Last month, I mentioned some eye-opening statistics about ransomware, including its cost to small businesses over the past two years. This month, I want to look at ransomware in 2020 and the near future. That future involves machines learning on their own, which is also called Artificial Intelligence (AI).

A scenario that exemplifies the risk with AI

Imagine that you were at a technology trade show where a very humanoid robot was being showcased. It walked up to you and reached out its hand as a greeting. As you responded by placing your hand in its, just as you would with a human being, the robot would need to make very specific calculations about how hard to squeeze your hand. Keep in mind that this is just a machine.

Now what if, as you squeezed, the robot recalculated in real time the force it needed to apply? It might do this based on factors completely unrelated to the handshake, such as your facial expression, your appearance, or even some historical data in its memory. If that data changed the instruction to apply three times the normal pressure, it could very simply crush your hand.

In this example, the machine changed its programmed behavior based on new information that influenced its processing while it was performing the action of shaking a hand. This is machine learning. The original designers most likely would not have programmed this into the workflow of a simple handshake, but if the machine were programmed to do this with all actions, it could simply apply it to the handshake as well.

That is the power of artificial intelligence. It allows machines to make decisions on their own, without interaction from the operator or original designer. Obviously, there are many safety measures that programmers are required to implement when they design these machines, and we can hope those programmers are responsible enough that robots do not crush the hands of enthusiastic trade show visitors.

What if there were no ethical or moral parameters on the programmers?

That is the risk with ransomware, a program designed to squeeze money out of people through their computers. So far, there has been an unwritten code to the process that ransomware programmers follow: if the victim pays the ransom and then receives their files back, this will encourage future victims to just pay up rather than ignore the ransom.

So far this has been the expectation, and from what researchers have discovered, that code has been followed in the majority of cases. In effect, this is because there is a human on the other end of the transaction, even if that human is a criminal who looks at the process as a business transaction (it isn't, it's extortion, but from their perspective it's just business). In that way, a modicum of "ethical" behavior is maintained. Victims pay up, and the criminals proceed on to the next victim.

What if, instead of a human being on the other end, it was just a computer, that is, a machine?

Using Artificial Intelligence with ransomware

Artificial Intelligence will completely change this "ethical" behavior. Since machines act without the restraints, feelings, or remorse that guide "ethical" behavior, they are free to squeeze harder to maximize the amount of money they can extort from a victim.

For example, what if the machine recalibrated its behavior based on such factors as what kind of security software it had to bypass to insert the ransomware, how much it could deduce about the victim's wealth from public records, or simply how quickly the ransom was paid? What if, after some time, it determined that the victim could easily be squeezed for twice the original ransom request?

In the case of a small company, the computer could determine that the CEO could easily sell off more ownership in the company to pay a larger ransom. Or if it's a small mom & pop shop, it could calculate the wealth of the mom and pop, possibly including what their children could contribute. The initial ransom might be $3,000 for recovery of all the business files, but after that was paid, the machine could ask for another $5,000.

Why ransomware and AI are such a dangerous combination

Make no mistake, ransomware is not "ethical" business software. It is extortion, plain and simple. It is created by criminals. Instead of breaking into a home and using pliers to get you to tell them where your money is hidden, they break into your computer, encrypt your files, and hold your business, your clients, and your reputation up for ransom. These criminals do not have much of an ethical code.

Just because it made business sense to keep that "ethical" code before, there is absolutely no reason to believe that, given the potential for much greater revenue, criminals will not let the machine do the squeezing, even if it strains the entire "business" model.

It was a lack of technological know-how, rather than good business sense, that kept these criminals from using AI to increase revenues. That barrier is quickly disappearing, and machine learning is more accessible in 2020 than it has ever been before.

That access will only increase. AI will not only be used to further squeeze victims into paying larger ransoms, but also to make the software harder to detect. According to the article Machine Vs Machine: A Look at AI-Powered Ransomware, by Gabriel Lando of FileCloud, there is already an AI arms race under way between security companies and criminals. The future of AI-powered ransomware looks to be a troubled one.

What about the future?

There is a quiet revolution occurring in technology. For this industry to continue to meet the demands of the future, AI will need to make up for what humans cannot accomplish on their own. Unfortunately, the exact same reality exists in the criminal world.

The difference between the industry and the criminal world is that the latter has far fewer restraints because it is at its core unethical. This is why AI is so very dangerous when it comes to ransomware: there are so few reasons to limit its dangers in the criminal world.

Unfettered from the ethical restraints of the tech industry, AI-powered ransomware is likely to grow at a much faster pace than the security software being developed to contain it. It is quite possible that in the near future there will be ransomware that is not designed, managed, or maintained by anyone. It could develop a life of its own online, attacking computers based on behaviors that have little or no business rationale behind them anymore.

We should all hope that the industry is able to fight this arms race and keep up with this threat. If not, we may all find ourselves shaking the hand of a robot that is not necessarily concerned about our wellbeing as much as its own.

Conclusion

I don’t want to end this article with a total doom & gloom message. I think of security much in the same way that I think of dwindling natural resources, like coal and gas. Before they are all used up, there is much we can do to extend them, perhaps even pushing their end so far into the future that new solutions will emerge that allow us to continue on without them.

The same is true for computer security. AI can also be developed to keep up with, and even overtake, the criminal use of AI. This is something that the greatest minds in the world are working on, and I have faith that their efforts will succeed in the end. It will take coordination between governments and industries, and I think it will be in their interest to do this effectively.

While they do what they need to do, there is also much that individuals, small businesses, solopreneurs, and gigsters can do as well. That is what my ransomware article will be about next month. Stay tuned.