AI: Artificial Ignorance

Jake Moore, Cyber Security Specialist for ESET, asks whether true Artificial Intelligence even exists yet, whether it ever will, and whether it will end the world before we reach its full capacity.

Jake Moore, Cyber Security Specialist for ESET

The hype around Artificial Intelligence (AI) is currently a media frenzy, and if we aren’t careful, a lack of knowledge around it will ruin the name before it has had a chance to really prove itself. AI is a beautiful concept of futuristic computing, led by the tech industry and academic research, that may one day bring enormous changes to the way we live our lives and pivot the human race into a new digital era.

But for now, AI is simply misunderstood. Computers are not yet thinking for themselves, nor are they able to live on their own, and no, the Terminator is not hiding around the corner looking for John Connor… yet.

You’d be forgiven for thinking AI already exists, given the amount of media attention it attracts. People desperately want to believe in AI and hope that the next generation of software uses it to its full advantage. Its influence is ubiquitous, however, and as sad as it may be to admit, I think we are still a few generations away from it becoming mainstream.

Take truly autonomous cars for the masses, for example: a wondrous concept, but for now just awesome science fiction. This doesn’t mean it won’t ever happen; it just means we are still a long way from it taking off. Producing a completely autonomous car sounds impressive, but with the technological advancements required, the essential and seemingly infinite number of calculations at incredible speeds, not to mention a horrendously dangerous transition phase while autonomous cars mix with standard cars, it remains a distant dream for now.

Some seriously difficult mathematical problems that are hard to crack via computing alone, such as image recognition, end up developing an aura of magic around them, and we tend to imagine that only AI could hold such an ability. Yet once such a vast problem is solved, by churning through ever more data to find the answer accurately, we discover that it is just good computer engineering: not very ‘artificial’, or even that ‘intelligent’, but simple, consistent advancement.
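
To see how unmagical it is under the bonnet, here is a minimal sketch, assuming PyTorch and torchvision are installed and a hypothetical image file cat.jpg exists: a pre-trained network classifying an image is nothing more than fixed arithmetic applied to pixels.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Load a pre-trained network: a large bundle of fixed, learned weights.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()  # inference mode: no learning happens here, only arithmetic

# Standard ImageNet preprocessing: resize, crop and normalise the pixels.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "cat.jpg" is a hypothetical input image.
img = preprocess(Image.open("cat.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    scores = model(img)          # one big, fixed matrix computation
print(scores.argmax().item())    # index of the most probable ImageNet class
```

Every step is deterministic engineering: learned weights multiplied against pixel values, with no understanding anywhere of what a cat actually is.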

So, what really is AI?
Well, true AI is typically known for being able to teach itself things, such as how to play games, or even to learn to anticipate an opponent’s moves within those games. Better still, to quote Wikipedia, true AI is a “hypothetical machine that exhibits behavior at least as skillful and flexible as humans do”.

I am just not convinced that we are ready to call our computing AI yet, however impressive its power.

Machine learning (ML), however, is making headway as one of the most exciting technological developments in history, and it should not be confused with artificial intelligence thinking for itself. Humongous amounts of data churning through the processing wheels of machines are producing wonderfully accurate predictions and solving incredibly complex problems faster than ever before. But ML has yet to do any of this for itself, or to mimic the human brain.
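
To make the distinction concrete, here is a minimal sketch, assuming scikit-learn and entirely invented toy data: the “learning” is a human choosing the data, the features and the model, and the result is pattern-matching, not thought.

```python
# A minimal sketch (assuming scikit-learn) of what ML really is: a model
# fitted to human-chosen data, predicting from patterns rather than "thinking".
from sklearn.tree import DecisionTreeClassifier

# Invented toy data: [hours_of_study, hours_of_sleep] -> 1 = pass, 0 = fail
X = [[8, 7], [1, 4], [6, 8], [2, 5], [7, 6], [0, 3]]
y = [1, 0, 1, 0, 1, 0]

model = DecisionTreeClassifier()
model.fit(X, y)  # the "rules" are induced entirely from the data we supplied

# The model extrapolates from its training data; it has no concept of studying
# or sleeping, and it cannot ask, or answer, any question we did not frame.
print(model.predict([[5, 7]]))  # e.g. [1]
```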

Machine learning is unequivocally constrained by man-made rules, and these rules have even been known to contain a disappointing reflection of human biases, racial, sexual and gender bias among them, making a system fail before it has begun. Sadly, much of what we come to know and believe can be based on the personal biases in our brains. True AI, however, is limitless, has the possibility of doing anything and, if taught correctly, would be fair and without prejudice.
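
A hypothetical sketch, again assuming scikit-learn and wholly fabricated data, shows how easily this happens: train a model on biased historical hiring decisions and it will faithfully replay the bias, because it has no way of questioning the data it was given.

```python
# Hypothetical illustration (invented data): a model trained on biased
# historical decisions learns and repeats the bias rather than correcting it.
from sklearn.linear_model import LogisticRegression

# Fabricated hiring history: [years_experience, group] -> 1 = hired, 0 = rejected.
# Candidates in group 1 were historically rejected regardless of experience.
X = [[5, 0], [2, 0], [8, 0], [1, 0], [5, 1], [8, 1], [2, 1], [9, 1]]
y = [1, 0, 1, 0, 0, 0, 0, 0]

model = LogisticRegression().fit(X, y)

# Two equally experienced candidates, differing only in group membership:
print(model.predict([[8, 0], [8, 1]]))  # likely [1 0]: the old bias, replayed
```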

ML is, without a doubt, changing our lives and making them more streamlined. From image recognition to crime prediction, even to medical diagnosis, increased computing power is rapidly and phenomenally improving our accuracy across multiple industries. Google, IBM and a handful of start-ups are all racing to create the next generation of supercomputers. If quantum computers ever take off, they could potentially help us solve extremely complex problems that our current computers can’t even begin to solve in less than a millennium.

If anything, AI remains a few decades away, and we should avoid using the futuristic term for now, or we will simply be doing all the great technological feats that machine learning currently provides a disservice by making claims that are presently false. Let’s not forget how far we have come in this digital age, and let’s enjoy the journey into the next-gen digital era, however artificial it is.