Satya Nadella, CEO of Microsoft, was recently interviewed by Ludwig Siegele of The Economist about the future of artificial intelligence (AI) at the DLD conference in Munich, Germany, where he spoke about the need to democratize the technology so that it becomes part of every company and every product. Here is an excerpt transcribed from the video interview:
What is AI?
The way I define AI, in simple terms, is that we are trying to teach machines to learn so that they can do things humans do, and in turn help humans. It's augmenting what we have. We're still in the mainframe era of it.
There has definitely been an amazing renaissance of AI and machine learning. In the last five years, one particular type of AI, the deep neural network, has really helped us, especially with perception: our ability to hear or see. That's all phenomenal, but if you ask whether we are anywhere close to what people reference as artificial general intelligence... No. The ability to do a lot of interesting things with AI? Absolutely.
The next phase to me is: how can we democratize this access? Instead of worshiping the four, five or six companies that have a lot of AI, let's get to where AI is everywhere, in all the companies we work with, where every interface and every human interaction is AI-powered.
What is the current state of AI?
If you're modeling the world, or actually simulating the world, that's the current state of machine learning and AI. But if you can simulate the brain, the judgments it can make and the transfer learning it can exhibit... If you can go from topic to topic, from domain to domain, and learn, then you will get to AGI, or artificial general intelligence. You could say we are on our march toward that.
The fact that we are in those early stages, where we are at least able to recognize free text and keep track of things by modeling essentially what the system knows about me, my world and my work, is the stage we are at.
What do you mean by the democratization of AI?
Sure. Fifty or a hundred years from now, we'll look back at this era and say some new moral philosopher really set the stage for how we should make those decisions. In lieu of that, though, one thing we're doing is saying: as we create AI in our products, we are making a set of design decisions, so just like with the user interface, let's establish a set of guidelines for tasteful AI.
The first one is: let's build AI that augments human capability. Let's create AI that builds more trust in technology through security and privacy considerations. Let's create transparency in this black box. It's a very hard technical problem, but let's strive toward opening up the black box for inspection.
How do we create algorithm accountability? That's another very hard problem, because one could say, "I created an algorithm that learns on its own, so how can I be held accountable?" In reality, we are. How do we make sure that no unconscious bias the designer has somehow makes its way in? Those are hard challenges we are going to tackle along with AI creation.
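As a loose illustration of what auditing for such bias can look like in practice, here is a minimal sketch of one common check, demographic parity: comparing a model's positive-outcome rate across two groups. This is not Microsoft's method; the data, group labels and tolerance threshold are all invented for the example.

```python
# Minimal sketch of an algorithmic-bias audit via demographic parity:
# compare the rate of positive (1) decisions a model makes for two
# demographic groups. All data and the threshold are hypothetical.

def positive_rate(outcomes):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def parity_gap(group_a, group_b):
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical model decisions (1 = approved) for two groups of applicants.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 5 of 8 approved
group_b = [0, 1, 0, 0, 1, 0, 0, 0]  # 2 of 8 approved

gap = parity_gap(group_a, group_b)
print(f"parity gap: {gap:.3f}")

if gap > 0.1:  # invented tolerance for the sketch
    print("warning: possible disparate impact; audit the model")
```

A check like this doesn't open the black box itself, but it holds the system accountable for its outputs, which is often the practical starting point Nadella's question points at.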
In the past we've thought about security, quality and software engineering. I think one of the things we find is that, for all of our progress with AI, the software stack's ability to ensure the things we have historically ensured in software is actually pretty weak. We have to go work on that.