General AI Won’t Emerge on Its Own; It’s a Slow Process That Starts With General Coding

Photo by Possessed Photography on Unsplash

Write Generic Code!

Back at my first job as a developer, my team lead kept urging us to write ‘generic code’. I’m not sure why, but I’d never heard the term before, nor have I since, for that matter. I can’t even find it on Google.

What he meant by ‘generic code’ was that we should do our best to write code that would work even if it doesn’t get the exact set of parameters, formatted in the very specific way it was built for. Instead, our code should, ideally, be capable of applying some form of basic common sense and figuring things out for itself.

He expected us to write code that shows some form of fault tolerance, which is the exact opposite of what the industry has been “progressing” towards with things like TypeScript, for example, where everything is super strict and well defined.

For a seasoned developer this might seem odd at first, but once you get into it there’s some sort of addictive charm that keeps reeling you in. It makes you think about your functions in terms of meaning and purpose, rather than clear-cut calculations of parameters. It makes you a more general coder.
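To make this a bit more concrete, here’s a toy sketch of the idea in Python. The parse_amount function is entirely hypothetical, my own illustration rather than code from that job, but it captures the spirit: accept whatever shape the input arrives in, try to make sense of it, and degrade gracefully instead of crashing.

```python
# A minimal sketch of 'generic code': a hypothetical parse_amount()
# that tries to make sense of whatever it's handed, instead of
# demanding one exact format.

def parse_amount(value):
    """Return a float from loosely formatted input, or None if hopeless."""
    if isinstance(value, (int, float)):
        return float(value)
    if isinstance(value, str):
        # Strip whitespace, leading currency symbols, and thousands separators.
        cleaned = value.strip().lstrip("$€£").replace(",", "")
        try:
            return float(cleaned)
        except ValueError:
            return None  # Tolerate the failure instead of crashing.
    return None

# All of these 'just work', despite arriving in different shapes:
for raw in [42, "42", " $1,250.50 ", 3.14, "not a number"]:
    print(raw, "->", parse_amount(raw))
```

A strictly typed version would reject half of those inputs at the door; this one shrugs and does its best.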

The Illusion of Computerized General Intelligence

Some very smart people around the world seem to be strong believers in the false narrative that computerized general intelligence is something that will one day emerge out of randomness combined with hyper-complex statistical algorithms. You can’t blame them, though. This is pretty much how we, the latest version of general intelligence, came to be. Or at least that’s the story we currently tell ourselves in this homo-scientificous era. (Yes, it’s a made-up word.)

If you think about it, and I mean really take the time to think about it, this seemingly solid theory actually has two major caveats that I don’t see us getting past.

First, if our intelligence has indeed emerged out of a long combination of random modifications, it did so only as a byproduct of our interaction with our environment. The whole evolutionary algorithm theory is based on Charles Darwin’s work. It’s all survival of the fittest times infinite iterations. And by fittest he by no means meant the strongest, but those who best fit their ever-changing environment.

There’s no way to replicate this process without the environment, as this is really, in essence, just the environment acting on itself. All species of organisms came into existence as counterparts that reflected the changes in their environment over time. We are no exception.

According to the theory of evolution, our GI was the byproduct of specific environmental changes that took place over millions of years, combined with the migrations of our species through a series of different environments and their effects. If those environments had been any different at any point throughout these millions of years, we would have ended up different as well.

So, seriously, what can evolve in the no-space static environment that is the inside of a computer?

The second caveat is that even if I’m wrong, and even if random statistical algorithms can indeed simulate the evolutionary process that conjures some form of general intelligence, that GI would be the reflection of its environment.

If this process were to work, it could only end up giving birth to some alien lifeform from a no-planet, one that evolved to be the best fit for its computerized no-environment.

That artificial GI would be absolutely nothing like us and far more foreign to us than even imaginary little green Martians.

Types of General Intelligence

Oh yes, I know, it’s a bit of a mindfuck, but aliens are GIs too. The way our mind works is just one of many different ways of constructing generally intelligent agents. Just like there are different operating systems, some forms of GI might be similar to us, like different versions of iOS, while others might be more like Linux, or even Windows (God forbid).

There can be many different types of general intelligence, and these wouldn’t necessarily be compatible with one another. So, if we’re serious about manifesting another form of GI into this world, we should not only ask how, but we should also make sure we ask what kind.

I’m sure all computerized intelligences will be incredibly fast, as they’ll all have endlessly extendable memory and computing power, but many would probably also be able to conceptualize things in ways that we, mere humans, would never be able to comprehend.

What if you come across a GI that has a larger working memory and, because of it, finds it natural to speak in extensively long run-on sentences, use multiple verbs, or describe multi-layered relationships? The only thing you, with your mere seven slots of working memory, would get from that is a headache.

Wouldn’t it be incredibly depressing to see billions of dollars of investment and millions of person-hours succeed in creating the wrong thing? An end product that can understand itself and its environment better than we ever could, but that none of us could ever use?

The Long and Treacherous Road to General Intelligence

We don’t just need to hit the goalpost; we need to hit the right goalpost.

When creating AGI, we don’t only need to worry about whether or not it would work, or how general it would be; we also need to make sure it’s the right kind of general intelligence. One that we can understand and communicate with. It must be compatible with us: a Humane AGI (HAGI?).

AGI only has value to us if it can employ the same types of conceptualization; the same types of generalization; the same types of value systems; the same type of syntactic, grammatical, and semantic representation engine. And, well, the same types of innate urges and emotions that dictate so much of what we humans are.

I’m sorry, machine learning, you’ll never become GI.

Creating specific algorithms, specific functions, and specific procedures is not the way towards general anything.

There are no shortcuts. The only way to create the right AGI is brick by brick, using high-quality generic/general code that’s based on meaning rather than math. Code that’s flexible, that can sustain some ambiguity, and that has some tolerance for error.

Code that’s more like us.
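For whatever it’s worth, here’s one last toy sketch of that spirit, again a hypothetical illustration of my own (the get_setting function and its alias table are made up): a lookup that goes by what a setting means rather than by the one exact key it was stored under.

```python
# Another toy sketch of meaning-first, error-tolerant code.
# get_setting() and its alias table are hypothetical, for illustration.

ALIASES = {
    "color": {"color", "colour", "fg_color"},
    "timeout": {"timeout", "timeout_ms", "wait"},
}

def get_setting(config: dict, name: str, default=None):
    """Fetch a setting by meaning, not by one exact key."""
    for key in ALIASES.get(name, {name}):
        if key in config:
            return config[key]
    return default  # Tolerate the miss instead of raising a KeyError.

print(get_setting({"colour": "red"}, "color"))   # -> red
print(get_setting({"wait": 30}, "timeout"))      # -> 30
print(get_setting({}, "timeout", default=10))    # -> 10
```

Strict code would call the first two lookups errors; code like this treats them as the same question asked in a slightly different accent.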
