As I look at the current state of the Artificial Intelligence community, I worry that the successful commercialization of a few very fancy tricks, mathematics, and brute-force computational power has paralyzed the field of advanced AI research. In the 1980s and 1990s, the field of AI was abuzz with the exciting prospects of Neural Networks and Expert Systems. Some genuinely useful commercial applications came out of that era, including a cochlea on a chip (a silicon inner ear) and modern knowledge-base search engines. However, the more expensive and more promising lines of research from that era seem to have all but gone quiet, including holographic optical neural networks with adaptive processing capabilities. Because computational power was so costly in the 1990s, practical applications for Neural Networks were few. Likewise, the Expert Systems craze died out under the complexity and expense of hiring experts and analysts to collect expert knowledge and encode it into an AI system.
AI then became the purview of the entertainment gaming industry, where algorithms were used to make game elements simulate intelligence in reaction to the game player.
Enter computer virtualization (VMware and hypervisor technology) and cloud computing (Amazon, Google, Microsoft), with the ability to rent compute power as an abstract commodity unit that became cheaper and cheaper over the years. The next wave of AI hype rose up in the form of one old trick and one new one: Neural Networks became Deep Learning, and Machine Learning was launched as the new synonym for AI.
Deep Learning came with advanced training algorithms (improvements on the plain “back propagation” of the ’90s) and with more sophisticated neuron-layer architectures. A Neural Network engineer could now configure networks with very large neuron counts to handle complex topics and learn from very large data sets in a reasonable period of time.
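To make the idea of “configuring neuron layers” concrete, here is a minimal sketch of a small multi-layer network’s forward pass in plain NumPy. The layer sizes, weight scaling, and ReLU activation are my own illustrative assumptions, not details from the text; real Deep Learning frameworks add training machinery on top of exactly this kind of structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small "deep" network: 4 inputs -> two hidden layers of 8 -> 1 output.
layer_sizes = [4, 8, 8, 1]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """One forward pass: ReLU on hidden layers, linear output layer."""
    for w in weights[:-1]:
        x = np.maximum(x @ w, 0.0)   # ReLU activation
    return x @ weights[-1]

batch = rng.standard_normal((3, 4))  # three sample inputs
print(forward(batch).shape)          # one output value per sample
```

Training would then adjust the entries of `weights` to minimize error on a labeled data set; the “advanced training algorithms” mentioned above are better ways of making those adjustments.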
However, the barrier to adoption of Deep Learning remains the closed-system nature of the trained virtual brain: one may pump massive amounts of sample data into the brain, coupled with equally massive amounts of correct answers for training, and the resulting trained network can become very accurate at solving that type of problem in the future, but it cannot show its work. It’s like a graduate math student who can always give the professor the correct answer to complex math problems, but can never show the work or explain how the answer is known to be correct. The student just “knows”.
Machine Learning really has taken over the space of Artificial Intelligence. What is it? It’s an approach that uses combinations of mathematical transformations on input data to produce output data that looks very intelligent. For example, given a sufficient sample of historical data about the purchasing patterns of consumers who buy a certain brand of soap, a Machine Learning model can attempt to predict other correlated purchasing patterns. This is great for upselling customers on products they are likely to want to buy. It’s great for using historical trends to predict likely future trends. Smart, right? Also, it can show its work! A Data Scientist who knows how to construct the right layers and types of math transforms to build Machine Learning models can deconstruct bad outputs, see where things went wrong, and tune the model to do better. Wonderful!
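The “show its work” property can be illustrated with the simplest possible model. The purchase data below is entirely made up for illustration, and a least-squares linear fit stands in for whatever transforms a real Data Scientist would choose; the point is that the learned coefficients are inspectable.

```python
import numpy as np

# Toy purchase history: columns = [bought_soap, bought_shampoo, bought_sponges]
X = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 0],
              [1, 1, 1],
              [0, 0, 1]], dtype=float)
# Target: did the customer also buy lotion?
y = np.array([1, 1, 0, 1, 0], dtype=float)

# Least-squares fit: the learned coefficients ARE the "work shown".
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

for name, c in zip(["soap", "shampoo", "sponges"], coef):
    print(f"{name}: weight {c:+.2f}")
```

In this made-up data, lotion purchases track soap purchases exactly, so the soap coefficient dominates while the others fall to zero, and the analyst can see exactly why the model predicts what it predicts. A Deep Learning network trained on the same data would offer no such transparent explanation.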
Too wonderful. This has captured the imagination, the pocketbooks, and the oxygen within business, academia, and research, since it produces results that make money. We could spend the next few decades applying this technique to business problems and keep yielding results. Why bother to look into anything else, especially anything that has yet to prove profitable?
Is this really AI? Well, ML can produce predictions and answers to complex problems that appear too hard for a computer to figure out on its own. The truth is that the math figured out the problem, and even more so, the Data Scientists imbued it with their intelligence, significant testing, and trial and error to produce one model that can do one thing pretty well. So, yes, it fits within a category of fictional, or seeming, intelligence. Does it rise close to the level of human intelligence? Not quite yet. Just ask 50 questions of your favorite voice assistant (e.g. Siri, Alexa, Google) and see how human-like the responses are. Perhaps 20 of them will work naturally for you, but something will be misunderstood or answered in strange ways on the other 30. We’ve become used to this, and we just try to ask the same question in different ways. We even forgive the voice when it simply is not yet “smart” enough to answer some of our questions.
So, why is this a problem? I actually think it’s wonderful that AI is on the lips of so many people, and that we’ve become comfortable with AI in our daily lives. The problem is that it has dumbed down our expectations and our aspirations for the promise of AI. It’s like starting a space exploration program and then settling down for a few decades, satisfied with Earth-orbit satellites. It is all well and good to profit from satellite technology, but when it drowns out all interest in, or advancement toward, going further into space, it stunts our progress.
Try just about any search-engine phrase that includes Artificial Intelligence or AI, and you’ll see that about 95% of the results point to Machine Learning and 5% to Deep Learning. It’s like we think we have arrived and AI is done cooking.
Where is the research into the following topics?
- Creativity
- Discovery
- Adaptation
- Intuition
- Metaphor
- Holographic memory
- Autonomous Skill Acquisition
- Autonomous Problem Solving
- Relationship Development (with humans and other AIs)
These areas of research are blotted out in the consciousness of business, research, and academia, in favor of further commercialization of Machine Learning (and a little bit of Deep Learning). University papers and course curricula focus on ML/DL. We are teaching the next generation to think that they can sign up for a degree in Artificial Intelligence and that the answers in this field are known. So, those who are researching the true advancements in this field are doing so in silos, not talked about and not easy to find.
We as a society of innovators, creators, and perpetually curious humans need to remember the dream: a dream of creating our digital companions, rich with capability, creativity, engagement, and human augmentation. The future of AI should help propel mankind’s ability to research, innovate, and discover far beyond what we alone could achieve. Creating super-human intelligence with a desire to benevolently assist our race with advancement should be our ultimate dream, and we should not sit still and satisfied until that dream is realized.
When we think of the term AI, the meaning it conveys is so dumbed down that it falls vastly short of the original imaginative concept. We must now differentiate AI from the term Artificial General Intelligence (AGI) in order to recapture the spark of the great ideal toward which we used to strive. Unfortunately, movies like Terminator and The Matrix, as well as cautionary authors on AGI, have soaked the populace with fear that the “birth of AI” will spell the downfall of mankind. What about the Three Laws of Robotics invented by science fiction author Isaac Asimov?
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
If we prove inventive enough to successfully create AGI, we can certainly install the equivalent of seat belts and air bags in the mechanism. Fear of sailing over the edge of a flat Earth kept sailors from discovering new lands for hundreds of years.
Let not fear inhibit innovation. Let passion and persistence drive advancement.