
The Current State of Intelligent Systems

So much of what is written about the future of AI amounts to broad philosophical statements, often about the nature of humanity, knowledge and life itself.

Aiva, an AI-powered music composer, suggests algorithms are capable of remarkable creativity. Google Translate's invention of its own interlingua, which lets it translate between language pairs it was never explicitly trained on, may lead some to fear the machines are on the brink of autonomy.

We asked Sheldon Pacotti, Senior Solution Architect at frog in Austin, to unpack some of this complexity for us. Here he shares his view on the current state of AI in product design, common misconceptions about it and a look at where we're going next.

Q: Why does AI spark such philosophical debate?

A: What’s unfortunate—but fun—about AI is that it’s in fact deeply entwined with the field of philosophy. Analytical philosophy, from Hume to 20th century figures like Donald Davidson, has had a direct impact on how truth is represented in “symbolist” AI systems, for instance. As a computer scientist, I would say that W. V. Quine’s theory of “observation sentences” and Jerry Fodor’s Language of Thought both read like software architecture.

It’s hard to imagine creating intelligent human-machine interfaces without considering epistemology, phenomenology and other -ologies developed by abstract thinkers. Complicating things further, a half-dozen other fields, including neuroscience, mathematics and cognitive science, are also core to the advance of AI. This situation makes it much more difficult to predict when we will devise the “right” theories of intelligence, and in what order these theories might appear. To build a thinking machine, you need a good theory of thought, perception and reasoning.

In design, through the practice of cultivating empathy, we routinely apply this kind of thinking to users. What is changing is that we must now consider how our creations perceive us, since they will be participating in our lives, communities and society at large. This quickly slides into philosophy, but I would say that we want empathy to be an internal organizing principle of any thinking system we design.

Q: It’s impossible to have a conversation about AI without also discussing the role of the data underlying the intelligence. Why does a “deep learning” system need so much data?

A: Deep learning systems run on data that has been collected, curated and engineered in a deliberate way to solve a specific problem. The large quantity of data feeds statistical learning that is very granular—down to the individual pixel for an image-recognition system, for instance. Just as a study involving statistics requires a large sample size to be accurate, “deep” neural nets require an extreme amount of training data in order to learn to decipher features with accuracy.
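To make "granular" concrete, here is a minimal sketch of a pixel-level image classifier. It assumes PyTorch purely for illustration; the interview names no specific library. Every weight in the network below is fit statistically from labeled examples, which is why a large, curated dataset is needed before it generalizes.

```python
# Illustrative sketch only (assumes PyTorch); the interview does not prescribe a library.
# A tiny image classifier that learns directly from raw pixels. Every weight below
# is estimated statistically from examples, which is why a large, curated,
# labeled dataset is needed for it to generalize.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # operates on individual pixels
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 2),                            # e.g. "cat" vs. "not cat"
)

# Stand-in for a real curated dataset: in practice, thousands of labeled RGB images.
images = torch.randn(64, 3, 32, 32)   # one mini-batch of raw pixel data
labels = torch.randint(0, 2, (64,))

optimizer = torch.optim.SGD(classifier.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

loss = loss_fn(classifier(images), labels)  # statistical fit to this batch
loss.backward()
optimizer.step()
```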

Q: We hear a lot about the importance of training data in an intelligent system, which sounds like a repetitive and somewhat tedious process. If you need a million pictures in order to learn what a cat looks like, doesn’t that make you a slow learner?

A: Deep learning, inspired by the layered visual processing of the human neocortex, is just one aspect of cognition. People can learn new ideas so quickly because the human brain contains many other structures that contribute to intelligence, including “thinking neurons” believed to bind themselves deterministically to a single concept. These other structures enable what AI researchers call “transfer learning,” the ability to leverage existing knowledge to quickly learn new concepts. As AI architectures advance, they will acquire some of these features and gain the ability to learn with less data.
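One common, concrete form of transfer learning (an editorial illustration, not a technique Pacotti prescribes) is to reuse a network already trained on a large dataset and re-fit only its final layer on a handful of new examples. The sketch below assumes PyTorch and torchvision.

```python
# Illustrative sketch of transfer learning (assumes PyTorch/torchvision).
# A network pretrained on ImageNet is reused, and only its final layer is
# re-fit on a small number of new examples.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights="IMAGENET1K_V1")  # downloads pretrained ImageNet weights
for param in backbone.parameters():
    param.requires_grad = False                       # keep the existing knowledge frozen

backbone.fc = nn.Linear(backbone.fc.in_features, 3)   # new 3-class task, trained from scratch

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A few dozen labeled images may now suffice, because most of the
# representation is transferred rather than relearned.
few_images = torch.randn(32, 3, 224, 224)
few_labels = torch.randint(0, 3, (32,))
loss = loss_fn(backbone(few_images), few_labels)
loss.backward()
optimizer.step()
```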

At the other extreme from deep learning is the “symbolist” approach to AI, which uses formal logic to create massive ontologies (e.g., OWL and Cyc) and smart expert systems capable of human-like deduction. Though the symbolist approach is out of fashion and even labeled the “old” approach to AI by many, it reflects a distinct quality of human thought not captured by the generic convolutional neural net. The creation of systems able to learn from sparse data is likely to involve the merging of these two paradigms.
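To make "human-like deduction" concrete, symbolist systems encode knowledge as explicit rules and derive new facts by applying them. The toy forward-chaining sketch below is plain Python written for this article; real systems such as Cyc or OWL reasoners are vastly larger, but the principle is the same.

```python
# Toy forward-chaining inference engine, sketching the symbolist style of AI:
# explicit facts, explicit rules, explicit deductions.
facts = {("is_a", "felix", "cat")}
rules = [
    # if this fact pattern holds ...        ... then conclude this fact
    ([("is_a", "?x", "cat")],               ("is_a", "?x", "mammal")),
    ([("is_a", "?x", "mammal")],            ("has", "?x", "fur")),
]

def substitute(pattern, binding):
    """Replace variables like '?x' with their bound values."""
    return tuple(binding.get(term, term) for term in pattern)

changed = True
while changed:                      # keep applying rules until nothing new is derived
    changed = False
    for conditions, conclusion in rules:
        relation, variable, category = conditions[0]
        for fact in list(facts):
            if fact[0] == relation and fact[2] == category:
                binding = {variable: fact[1]}
                new_fact = substitute(conclusion, binding)
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True

print(facts)  # felix is deduced to be a mammal, and therefore to have fur
```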

Q: Tell us a bit about the “transparency challenge.” How important is transparency to AI development from a technology standpoint?

A: The “transparency challenge” in AI is an especially difficult problem for mathematically trained systems like neural nets. What deep learning systems provide today are single-function black-box operations. For example, a system may be created to recognize a face from a photograph, or to translate a sentence. When they are integrated into a traditional software system, such as a strategy game, they take on a logical, discernible role, but their inner workings remain opaque. Looking ahead to neuromorphic computers, we can imagine architectures in which all of the steps in a system are learned, including the algorithms for executing these steps. The result could be a new class of spookily intelligent black boxes.

The key to making these systems “transparent” may lie in their working memory—a system analogous to the previously mentioned strategy-game AI. We will be engineering these systems at a high level, defining operations for storing patterns, focusing attention on patterns and so on, so we will have the hooks to trace the flow of execution. We will be able to isolate individual learned operations much as the human mind associates concepts with words. Sub-systems, like vision, might remain opaque, but at the pattern or concept level, we will learn to engineer transparency and even introspection.
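A sketch of what those "hooks" might look like: if working memory is an engineered component, every store and recall can be logged, giving an execution trace at the level of patterns and concepts even when the sub-systems producing them stay opaque. The class below is a hypothetical illustration written for this article, not an existing API.

```python
# Hypothetical illustration of engineering transparency at the working-memory level.
# Sub-systems (e.g. a vision model) may remain opaque, but every pattern they store
# or recall passes through hooks that can be traced and inspected.
from datetime import datetime

class TraceableWorkingMemory:
    def __init__(self):
        self._slots = {}      # concept name -> learned pattern (opaque payload)
        self.trace = []       # human-readable log of every memory operation

    def _log(self, operation, concept):
        self.trace.append(f"{datetime.now().isoformat()} {operation} '{concept}'")

    def store(self, concept, pattern):
        self._log("STORE", concept)
        self._slots[concept] = pattern

    def recall(self, concept):
        self._log("RECALL", concept)
        return self._slots.get(concept)

memory = TraceableWorkingMemory()
memory.store("opponent_position", [0.2, 0.9])      # pattern from an opaque vision module
plan_input = memory.recall("opponent_position")     # the planner's focus of attention is visible here
print("\n".join(memory.trace))                      # the flow of execution, concept by concept
```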

Q: Bots seem to be everywhere now, but how much do they really understand? Are they all they’re cracked up to be?

A: Companies keep adding features to their bots, as they would to any app, but we seem to be a long way from having a natural conversation with a computer. Though these systems make clever inferences using a variety of methods and offer a natural way to perform certain tasks, they aren’t really a part of our world. They don’t have mental models of what it’s like to live a human life. Some might say that this understanding, too, will drop out of big data analytics.

However, many leaders in the AI field believe that true artificial intelligence will need to be based on embodied intelligence. This is the school of thought that a thinking machine needs to be physically sensing and even navigating the world. Though senses may seem like overkill for bots that help us shop, they could provide a unifying “format” for learned concepts, as they do in the human mind.

Q: What will be the next AI breakthrough? 

A: We are likely to see extended periods of “me-too” applications, while the latest systems learn all they can learn, followed by surprising leaps. Today’s deep learning is very good at “understanding” patterns in large datasets within a narrow problem area. New neuromorphic designs are on the horizon, however. Researchers at DeepMind have spearheaded the construction of complete computers built entirely out of neural nets, progressing in sophistication from the neural Turing machine to the differentiable neural computer. Through the inclusion of read-write memory, an attention controller and other features, the latter design in particular has demonstrated an ability to derive algorithms and apply them in new contexts. An age of true “intelligence engineering” may just be beginning.
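The memory-plus-attention idea behind these designs can be sketched in a few lines: a controller reads from an external memory matrix by attending over its rows with a softmax, so the read is differentiable and can be learned end to end. The numpy sketch below shows only one content-based read step, not the full architectures published by DeepMind.

```python
# Minimal numpy sketch of the differentiable, content-based read used by
# memory-augmented networks such as the neural Turing machine and the
# differentiable neural computer. One read step only, not the full architecture.
import numpy as np

memory = np.random.randn(8, 4)      # 8 memory slots, each a 4-dimensional vector
read_key = np.random.randn(4)       # the controller emits a key describing what it wants

# Cosine similarity between the key and every memory slot.
similarity = memory @ read_key / (
    np.linalg.norm(memory, axis=1) * np.linalg.norm(read_key) + 1e-8
)

# Softmax turns similarities into soft attention weights over slots,
# so the whole read is differentiable and trainable end to end.
weights = np.exp(similarity) / np.exp(similarity).sum()

read_vector = weights @ memory      # a weighted blend of memory contents
print(read_vector)
```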

Q: What will it take to get to the point where AI is not just supporting the human experience, but actually advancing it? 

A: I wish we could lay out a concrete roadmap for AI like the one the nanotechnology community created for their field in 2007. The breadth of AI research, ranging from biology to mathematics to philosophy, would make this extremely difficult, but we can begin by designing our relationship to AI—that is, how we want AI to fit into our lives.

AI shouldn’t be magic. If we don’t know how a model works, we can’t be sure why it fails, and even when it succeeds we are still excluded. Intelligent products become things we simply like, dislike or distrust.

We know already that we prefer AI systems to reveal their mistakes and invite us to correct them. In other words, we want relationships from artificial intelligence, not products. Looking ahead, we can extend this basic etiquette into the principle of transparency, aiming at systems able to both learn and explain themselves simultaneously. Products reflect a designer’s empathy; future AI systems will practice empathy, moment to moment, if we design them to listen and communicate.

Author
Sheldon Pacotti
Senior Solution Architect

Sheldon is Senior Solution Architect at frog in Austin. Having studied math and English at MIT and Harvard, Sheldon enjoys cross-disciplinary creative projects. He builds award-winning software, writes futurist fiction, creates software architectures for businesses and writes about technology.

Illustrations by Abe Poultridge