Hallucinations, Disruptions, and Innovations: How AI Redefines Boundaries
Are you ready to navigate the complex landscape where AI challenges reality and reshapes our future? In this episode of "AI Experience," I sit down with Amr Awadallah, a visionary in the realm of artificial intelligence and the founder of Vectara. Amr brings to the table a wealth of knowledge from his groundbreaking work in the industry, including pivotal roles at Google and Yahoo. Join us as we delve into a conversation that transcends mere speculation, where Amr demystifies the phenomenon of AI hallucinations and elucidates their implications for data interpretation and decision-making processes. We'll explore the disruptions AI introduces to traditional industries and how these challenges are not just obstacles but catalysts for unprecedented innovation. This episode is an invitation to peer beyond the boundaries of current AI applications and consider the vast horizons of possibility that lie ahead.
Amr Awadallah is a trailblazing serial entrepreneur, investor, and founder of three companies: Aptivia, Cloudera, and Vectara. Amr also served as VP of Engineering at both Google and Yahoo. In these roles, he was pivotal in some of the companies’ biggest initiatives, including shaping the future of cloud technology at Google and Big Data machine learning at Yahoo. During his time as Global CTO of Cloudera, he developed the concept of "schema-on-read vs. schema-on-write" that transformed how organizations think about utilizing their data in more agile ways. This year, Amr's passion for ML and experience as a tech founder converged in Vectara, a generative AI company competing with the likes of Cohere and OpenAI. Tackling concerns around GenAI misinformation, bias, and copyright infringement, Vectara's newest model, the hallucination evaluation model, addresses accuracy and hallucination in AI frameworks.
Amr Awadallah
Entrepreneur, investor, and founder
Julien Redelsperger: “And I am happy to welcome Amr Awadallah. He is a serial entrepreneur and investor, and he served as a VP of Engineering at both Google and Yahoo. His passion for machine learning and experience as a tech founder converged into Vectara, a generative AI company whose goal is to mitigate the hallucinations and copyright concerns created by generative AI. Thank you for joining me today. How are you, Amr?”
Amr Awadallah: “I'm doing great. It's my pleasure to be here.”
Julien Redelsperger: “My pleasure, too. So to start off, could you please tell me why you think now is a good time to learn more about generative AI? Why does it matter?”
Amr Awadallah: “Yeah, I think all of us need to learn more about generative AI and learn how we can embrace it in our daily lives, just as consumers, but also as knowledge workers in our work as well. So why is that? The reason is very simple. We are now at an inflection point where we finally cracked how humans and software can work together in the same way that humans work with each other, meaning we just tell each other what we expect of each other and it happens. We're getting very, very close to having that become doable for everything around us. If you go back 60, 70 years ago when computers first started, it was very hard to use them. You had to use something called a punch card, literally a card where you punch holes in it to program the different things you want to tell the computer. Obviously, very few humans knew how to talk to the computers in that way. And then the keyboard came around, and the keyboard made it easier. Now we have a keyboard I can type on, and I can program in a language closer to English, and that made it a bit easier. That opened up computing to a few more humans, but it was still very hard to use. And then came the mouse, and the mouse made it even easier. Now we have a windowing environment, I have a menu with all the commands, and it's easier to interact with the software that way. So that made things a bit better, but still not ideal. And then after that came the touchscreens, things like the iPhone, where we can use our fingers to swipe up, swipe down, pinch in, pinch out. I joke and say even my mom now is an expert app user because of the iPhone. But still, that's for the simple tasks, not for the complex tasks. Imagine a complex task: you took a picture of your family, but you left one of your kids out of the picture. To add them back with Photoshop on your iPhone, you have to go read the documentation, maybe look at a couple of YouTube videos. It is a complex task. Now we are finally at the stage where we can just tell the software what we want. You can literally just tell the image processing app, add my kid back in the photo, and the kid just shows up. Or remove my kid from the photo because I'm upset at him right now, and the kid disappears. So you're able to do much more complex things. That's just a simple example, obviously. But the key point is we are now finally at the stage where we'll be able to do very, very complex tasks with software without having to be experts in the software. My mom not only will be able to use apps, she'll be able to build apps. She'll be able to describe an app and say, "I want an app that looks like this. When you click this button, that happens. When you swipe right, this happens. When you swipe left, that happens. The colors should be here like this, here like that." And the app comes out from the other end. So that's why we need to really, really understand what's happening with Gen AI right now.”
Julien Redelsperger: “So generative AI is like a big change that we've been talking about for about, I would say, maybe a year, year and a half since the release of ChatGPT. You have a long history in the tech industry. Did you see ChatGPT coming? Were you surprised when it was released back in November 2022?”
Amr Awadallah: “No, I personally was not, because I am in the industry. And I was working at Google in 2020, a couple of years before ChatGPT came out. Google internally had something called Meena, and Meena, whose more modern version is what Google launched as Bard, already had all of this stuff that ChatGPT showed us a year ago. By the way, ChatGPT is only one year old plus a few weeks. We had seen that within Google. We were able to chat with this thing that is super smart, that can write code, that can solve problems, that can do math. And yes, it had hallucination issues, and Google was very cautious about whether they should launch something like that. So OpenAI was a bit more brave, but that also created some damage, where now there are a lot of people more skeptical of AI because of all the hallucinations or inaccurate information they have seen from it.”
Julien Redelsperger: “So when we use ChatGPT, you know that it sometimes can create fake content, fake information. Those are called hallucinations. It can fabricate information, data, names and dates, and even historical events that actually never happened. Why is that, and what can we do about it?”
Amr Awadallah: “Yeah, it's very simple. We do it too. We humans do the same thing. It all goes back to how much information you can store in your brain, right? Whether that brain is a digital brain like ChatGPT or a human brain like ours, there is a natural limit on how much information we can store. And when we were at school, we had two ways to take exams. There is something called a closed book exam and something called an open book exam. In closed book exams, you're not allowed to bring the book with you to the exam. You have to remember all the chemistry formulas, all the body parts from biology, all the integration methods from calculus. You have to remember all of that in your head. And then you go in to solve the exam, where you now have two things. You have the comprehension aspect: I understood the facts, I understood what they mean. But then there is the retrieval or recollection aspect, where I need to remember how it was exactly said so I solve it in the right way. And when we're solving the exam, there will be some questions where we'll have perfect recollection and we'll be able to solve them without having to make up stuff. But every now and then we'll see a question where, "Well, we think we partially know this. I should still answer it because I don't want to get a zero." And we end up making up stuff as we're answering that question. But that stuff is being made up with a very well-educated guess, right? Because we remember the wisdom we extracted from our learnings, and now we're trying to guess the best answer. Our intention is not to fabricate. Our intention is not to make up stuff. Our intention is to try to answer the question to the best of our guessing ability, maybe to put it that way. And that's what we do in closed book exams. In open book exams, we have our book there with us, and that makes a huge difference in how we solve these exams, because now I'm focused more on understanding the problem very well. And then if I can't remember something, I can open the book, look it up, and make sure I got it right. So that's exactly the same reason why hallucination happens in large language models. To make it very simple, a large language model has a fixed size. Let's say the size of the model is 100 gigabytes. So it can store information worth 100 gigabytes, compressed. You can compress down the information, of course, before you put it in there. Now, by the way, when you hear about the size of the models, you will hear something called parameters, the number of parameters, the 7B in something like Llama 7B. That's really the size of the model; that's how big it is and how much you can squeeze into it. But then there is the training data: how many books did you feed it? Like us, when we're trying to study at school, how many books did you feed our brain? The books are way more. All of these words coming in, we call them tokens. If you're training it on a trillion tokens or 10 trillion tokens, and you're compressing the 10 trillion down to only 10 billion, which is the size of the model, then that means you're compressing the data down a thousand times.
So clearly when you compress so, so much, you're going to lose stuff. You're not going to have all of the original data that was in the books in the model. You're going to have the same thing our brain has: the simplification of it, the digestion of it, the wisdom of it. Some of the words will still make it, but not all of them, because it doesn't have space to store all of it. And that's exactly why, when it's trying to answer a question to the best of its ability, it's trying to recall information, but every now and then there will be gaps because of how much it compressed down the knowledge, and it will make up stuff in the same way that we made up stuff at school. And that's what we call hallucination.”
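A quick back-of-the-envelope version of the compression argument above, using the round token and parameter counts from the conversation (illustrative numbers, not real model specs):

```python
# Rough arithmetic behind the "closed-book exam" analogy: a model that has
# seen far more training text than it has parameters must compress, and
# therefore lose, detail. The figures are the round numbers from the
# conversation, not the specs of any actual model.
training_tokens = 10_000_000_000_000   # 10 trillion tokens of training text
model_parameters = 10_000_000_000      # a 10-billion-parameter model

compression_ratio = training_tokens / model_parameters
print(f"Tokens per parameter: ~{compression_ratio:,.0f}x")  # ~1,000x
```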
Julien Redelsperger: “As a human, if we don't know the answer to a question, we have the wisdom to say, "I'm not sure," or "I'm not entirely sure," or "I don't know the answer to your question." But ChatGPT or generative AI won't say that. It's just going to answer the question without actually knowing…”
Amr Awadallah: “You are right. By the way, they can say it. The AI can give you a score that roughly tells you how confident it is in the output, and they should start adding capabilities like that. One capability I like, for example, is in Google Bard: after it gives an answer, there is a Google button at the bottom of the answer where you can say, "Double-check this answer for me." You click that, and then it highlights in the paragraphs: "We have very high confidence this was a good statement. And this one, we're not sure; this might be made up, so you need to take it with a grain of salt." And I think we need more of that. ChatGPT should tell you when it gives you a response, "I'm 100% confident; this is all very factual and correct," versus, "I'm 90% confident; there is a 10% chance something might be off." So I think ChatGPT should evolve to give us that signal, so we know when we can depend on it entirely and when we need to double-check the response before we paste it in an email or put it in a podcast.”
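ChatGPT doesn't expose a per-answer confidence meter today, but one common proxy for the score Amr describes is the average probability the model assigned to its own output tokens; many LLM APIs can return these log-probabilities. A minimal sketch of that heuristic, with made-up numbers:

```python
import math

def confidence_from_logprobs(token_logprobs: list[float]) -> float:
    """Geometric-mean token probability: a crude proxy for model confidence.

    Many LLM APIs can return the log-probability of each generated token.
    Averaging them and exponentiating gives a rough 0-to-1 signal; low
    values suggest the answer deserves a double-check before you paste it
    into an email.
    """
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

# Hypothetical log-probabilities for two answers (illustrative values only).
confident_answer = [-0.05, -0.10, -0.02, -0.08]  # model rarely hesitated
shaky_answer = [-0.9, -2.1, -1.4, -0.7]          # model guessed a lot

print(f"confident: {confidence_from_logprobs(confident_answer):.2f}")  # ~0.94
print(f"shaky:     {confidence_from_logprobs(shaky_answer):.2f}")      # ~0.28
```

This is only a heuristic: a model can be fluently, confidently wrong, which is why the fact-checking discussed later in the episode is a stronger signal than token probabilities alone.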
Julien Redelsperger: “Do you think it is a good idea today to have people rely on generative AI for some specific information, like medical advice, legal advice, to make decisions? Is it a problem for you, or is it just going to be the future of search engines, for example?”
Amr Awadallah: “We have to be careful about it. First, you have to keep in mind, people relied on Google for this stuff all the time. They would search Google and look at whatever answer was in whatever document Google gave them. And we don't know whether the answer in that document was written by a proper doctor who truly understands the symptoms you have, or by some random person. And then people would look at that and go to their doctor and say, "Oh, I read this on Google," and the doctor would tell them that was not true. The same thing applies to AI as well. We shouldn't take anything that comes out of ChatGPT as correct and act on it, taking that medicine or doing that thing. And they will tell you this. Even ChatGPT will tell you this when it gives you the response: "Oh, this seems like a medical question. I'm giving you a suggestion right now, but you should still take this suggestion and run it by a professional lawyer, or a professional accountant, or a professional doctor." Because they know there is a chance that suggestion might be the wrong one, and they don't want the liability of you following that advice and ending up in trouble. So they disclaim it that way. But at the same time, we have to keep in mind that we have a scarcity of knowledgeable people in the world. We have a scarcity of doctors. We have a scarcity of lawyers. We have a scarcity of accountants, et cetera. So having this intermediate step where I can still find an initial answer until I get the professional answer, because it takes a long time to get to a real doctor, I think is very useful. And by using these tools over time, all of us, OpenAI, ourselves at Vectara, and other companies, are working to make them better and better in terms of accuracy. So I think this is an intermediate problem, but absolutely, in the short term, people should be very cautious about the responses they get back. I agree with you on that point.”
Julien Redelsperger: “As I was saying as an introduction, you are the co-founder of a startup called Vectara, whose mission is, I quote, "to help the world find meaning through search." Could you please tell us more about it? And what do you do actually at Vectara?”
Amr Awadallah: “Yeah, so what we do, in layman's terms, is ChatGPT for your own data. How can we build AI assistants for everything around us that are grounded in the knowledge, information, documents, and data that we have? I'll give you an actual example from one of our customers, a company called SonoSim. What SonoSim does is they have very rich data sets, lots of documents, around how to use ultrasound machines. When you use an ultrasound machine to scan your liver, scan for pregnancy, scan your heart, scan a muscle, you have to calibrate the machine differently, and you have to know which model of machine and which manufacturer, so you can calibrate it correctly. And then you have to aim the scanning device, the scanning gun that you hold, in a different way depending on what you're trying to get at, which disease, and the type of the patient: their age, their ethnicity, whatever. Only the expert radiologists know all of this stuff, right? They know how to configure the machines correctly. The average radiologist always struggled with this. So SonoSim collected all of the manuals of all of the machines, and they collected documents about best practices for how to do this depending on the patient. And then they simply uploaded all of these documents into our system. And now they have a very smart AI assistant, like ChatGPT, that radiologists can use in real time and just ask, "Hey, I'm scanning for pregnancy right now, for a woman of this age, in this situation. What should I do?" And the assistant will reply back and say, "Based on the model of the machine you're using right now, these are the perfect settings to put in place, and this is how you should aim your device." So the average radiologist overnight became an expert radiologist by having an assistant like this in their hands. And that's the pattern.”
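What SonoSim built is an instance of the general pattern usually called retrieval-augmented generation: retrieve the most relevant passages from your own documents, then have the LLM answer from those passages only. Below is a minimal sketch of that retrieve-and-ground step, using a toy corpus and a toy bag-of-words embedding; the document contents, function names, and prompt wording are illustrative assumptions, not SonoSim's data or Vectara's actual API:

```python
from collections import Counter
import numpy as np

# Toy stand-ins for uploaded manuals; real systems index thousands of docs.
DOCS = [
    "Model X-200: for obstetric scans, set gain to 45 dB and depth to 12 cm.",
    "Model X-200: for cardiac scans, use the phased-array probe at 2.5 MHz.",
    "General guidance: recalibrate the transducer before each patient.",
]
VOCAB = sorted({word for doc in DOCS for word in doc.lower().split()})

def embed(text: str) -> np.ndarray:
    """Toy bag-of-words vector; production systems use neural embeddings."""
    counts = Counter(text.lower().split())
    return np.array([counts[w] for w in VOCAB], dtype=float)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embed(query)
    scores = []
    for doc in DOCS:
        d = embed(doc)
        denom = (np.linalg.norm(q) * np.linalg.norm(d)) or 1.0  # avoid /0
        scores.append(float(q @ d) / denom)
    best = np.argsort(scores)[::-1][:k]
    return [DOCS[i] for i in best]

def grounded_prompt(query: str) -> str:
    """Build a prompt that asks the LLM to answer only from retrieved facts."""
    facts = "\n".join(f"- {doc}" for doc in retrieve(query))
    return ("Answer using ONLY the facts below; if they are not enough, "
            "say you don't know.\n" + facts + f"\nQuestion: {query}")

print(grounded_prompt("How do I set up the X-200 for a pregnancy scan?"))
```

The final prompt is what turns the model's "closed-book exam" into an "open-book" one, in the terms of the earlier analogy: the answer is drawn from retrieved facts rather than from the model's compressed memory.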
Julien Redelsperger: “And that works on any device, like a radiologist with an iPhone or an iPad can get access to all this data?”
Amr Awadallah: “Yes, it's an app. Yeah, it's an app that you just put on your phone. Of course, you have to pay them. You have to pay SonoSim; they sell that. But that's what we're going to see in the future. We're going to see these kinds of apps appear for everything around us, for any piece of knowledge that we have, and they will make the average of us become experts. One of the best examples I always like to highlight for this, that all of us use today, is Google Maps. Google Maps has a ton of AI, a ton of AI, working in the background. I get many people telling me, "Oh, I'm afraid of using AI. I would never listen to AI." And I just tell them, "Are you using Google Maps?" And they say, "Yes." "Are you going right when it says go right?" "Yes." "Are you going left when it says go left?" "Yes." "You're listening to AI right now." Right? So I just like to remind them of that. But the reason why I love Google Maps is that it is one of the best canonical examples of how AI took us from being average navigators, where very few of us knew how to get from point A to point B, and only the advanced taxi drivers and limousine drivers knew all the shortcuts, to where all of us became experts. We can go from anywhere to anywhere, factoring in the traffic, taking the shortcuts, et cetera. And that's exactly what's going to happen in the future, and that's what we want to enable at Vectara. We want to enable that ability of taking the knowledge that we have and building these very smart assistants or apps that work with us to make the average of us become experts. This is really the vision of the future.”
Julien Redelsperger: “If I want to use Vectara, what do I do? I just feed your system with lots of data, accurate data, that's my responsibility. And you are making sure it is appropriately managed, filtered, and you get all the answers…”
Amr Awadallah: “And comprehended by the AI system, to give you back the proper responses while solving some of these key issues that we highlighted earlier. How do you minimize the hallucinations, where systems can sometimes make up stuff? You minimize that by having fact checking. It's very simple: you need a fact checker so that once the response comes back, it checks that response against the original facts and makes sure there is nothing made up in it. Just like, by the way, before you publish something in the major media outlets, like the New York Times or whatever: any respectable big media outlet will have the reporter writing the article, and then, separate from the reporter, there is a fact checker checking the reporter, because the reporter can make up stuff, right? So it's kind of the same analogy. We have that built into our system: we check for hallucination and make sure the response is accurate. We suppress copyrighted information that doesn't belong to you from being produced by the model, to avoid you getting in trouble because of that; we have seen now how the New York Times is suing OpenAI over the fact that they trained their large language model with some of its data. We mitigate bias, so we try to get all the points of view around the question. Sometimes when you're asking a question, the answer is clear. For example, who is the president of the United States? Biden. That's a factual response, done. That's a clear answer. But then there are other questions. Should AI be regulated? That's a topic of debate now. So you want to make sure you're not biased towards one side or the other. You want to make sure you have a large language model or an AI engine that takes into account all the points of view while it's making its response. That's another thing our system is able to do as well. And then, last but not least, if you're using it in a business context, meaning within a business organization, the response from these AI assistants should change depending on who is asking. If I'm the CEO of the company, I should be able to see all the data; you should answer all of my questions. If I'm somebody in customer support, no, there will be some things I can see and some things I can't see, so when I ask the AI assistant, it should be careful about what it says back to me. That's called access control, the more technical term for it. But that's another problem that we solve for as well.”
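For the fact-checking step, Vectara has open-sourced a hallucination evaluation model on Hugging Face that scores whether a generated statement is supported by a source passage. Here is a hedged sketch of using such a cross-encoder as the "fact checker" in the loop; the loading call follows the model's original CrossEncoder-style release, and newer versions of the model may expose a different API. The source and answer strings are made up for illustration:

```python
# A sketch of the "fact checker" step: score the factual consistency of a
# generated answer against the source passage it is supposed to be grounded
# in. The model name is Vectara's open-source hallucination evaluation model
# on Hugging Face; the exact loading API may differ between model versions.
from sentence_transformers import CrossEncoder

model = CrossEncoder("vectara/hallucination_evaluation_model")

source = ("Model X-200: for obstetric scans, set gain to 45 dB "
          "and depth to 12 cm.")                                  # retrieved fact
answer = "For a pregnancy scan on the X-200, set gain to 45 dB."  # LLM output

# Scores near 1.0 mean the answer is consistent with the source;
# scores near 0.0 suggest it was made up.
score = model.predict([[source, answer]])[0]

threshold = 0.5  # an application-level choice, not a property of the model
if score < threshold:
    print(f"Possible hallucination (consistency score {score:.2f})")
else:
    print(f"Answer appears grounded (consistency score {score:.2f})")
```

In a full pipeline this check runs after generation, like the newsroom fact checker Amr describes: low-scoring responses can be suppressed, flagged, or regenerated before reaching the user.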
Julien Redelsperger: “When did you create Vectara?”
Amr Awadallah: “We launched our very first product into the market in October of 2022. But that was beta. Our official general availability release was done last April.”
Julien Redelsperger: “So you didn't wait for the hype around generative AI to launch your company?”
Amr Awadallah: “Yeah, we were before. We were before ChatGPT came out.”
Julien Redelsperger: “What's the story behind it? Why did you focus on such a specific market and business need?”
Amr Awadallah: “That's an excellent question. It came out of my experience and my co-founders' experience at Google. I was at Google before founding Vectara, and my co-founders, both of them, were also at Google. As I told you, we saw Meena. We saw the beginning of these large language models that can truly understand our text and our words, whether in English or French or Japanese or Chinese, in the same way that we understand them. That was clearly an inflection point. When we saw that: wow. We've been trying to do this for 70 years and had never figured out how to understand text, and now we finally understood it. Since we witnessed that at Google, the light bulb just came on for us: this is going to be a major, major change in how things will be done in the future. And that was the impetus behind us leaving Google and forming Vectara.”
Julien Redelsperger: “And so hallucinations… you noticed quite quickly that that was going to be a big issue for generative AI…”
Amr Awadallah: “Correct.”
Julien Redelsperger: “Do you remember the first time you noticed some hallucinations and you told yourself like, "Hey, this is a problem I need to fix"?”
Amr Awadallah: “Yeah. I mean, the first thing most people do with these large language models, and I did it with Meena, that engine at Google, a couple of years ago, is ask about themselves, right? So I asked, "Tell me about this guy, Amr Awadallah." And it gave this amazing response: "Amr Awadallah did this and did that, and he got his degree from there." And none of it was true. Like, wow, who is this guy? So that was my first experience with how they can make up stuff, and they can make it up in a way that is so eloquent and so well said that you almost believe it's correct, unless you were an expert, and then you would know it's not. But clearly, that obviously was a problem. First of all, it was very impressive that they can say things so well and properly: perfect grammar, perfect English, everything, and not just English, any other language as well. That was very, very, very impressive. But of course, it was clear that if you're going to use this in a business context, whether that be finance, marketing, customer support, legal, you name it, you just can't have that. You can't have somebody who is, pardon my French, bullshitting you all the time. Right? You need to be factual in what you say. Though we know people in our jobs who sometimes do that, actually, which is very, very interesting. So it was very clear to us that this absolutely was a problem we needed to solve if this technology was going to be adopted, especially in business environments. For consumers, you can get away with it. And by the way, in some use cases, like if you're making a movie script or writing a novel or writing a poem, hallucination is a feature, not a bug. Hallucination is a good thing. You want it to hallucinate, right? When you're trying to make up a new sci-fi novel with a new story, you want it to make up new types of aliens that might come and visit us, or new types of technology that we're not thinking of. So it becomes an advantage, actually, in that case. But in a business context, no, it's obviously not an advantage. You need to be very, very factual.”
Julien Redelsperger: “So this is just finding the right balance based on the business case…”
Amr Awadallah: “Exactly.”
Julien Redelsperger: “When I talk to a lot of people about AI, what they all tell me is: this is transformative. This is changing the world. This is changing society. The question is, how do we adapt to that? Is it going to change the workplace? And how do we prepare future generations to work alongside AI technologies? Do you have insights to share?”
Amr Awadallah: “Yeah, that's an excellent, excellent question. First, I don't know the answer, because that answer requires you to travel into the future and see what we have done and what we have not done. But our best hope, whenever we're faced with a transformational inflection point like this, is to look at our history, look at our past, and see what we can learn from it, if anything. And definitely one of the most impactful and similar inflection points would be the industrial revolution, right? In the industrial revolution, we transformed how we make stuff. We used to make stuff with our hands. To make a piece of cloth like the T-shirt I'm wearing right now, we had to get the cotton from the field, we had to turn the cotton from a plant into threads, we had to put the threads together to make a piece of cloth, and we had to cut the piece of cloth to make the shirt. And now machines can make that in the blink of an eye. Clearly many people back then, if you go back to the industrial revolution, were very worried about it. They were objecting to it and saying, "Oh, we can't replace all of these jobs with machines. We have to keep making stuff with our hands. It's way better quality." But the lesson learned from back then is: efficiency always wins. Efficiency always wins. You cannot stand in the face of efficiency. That has been proven over and over again in history. If I can make a thousand shirts per day using machines, and I can only make a hundred using a hundred humans, the machine will take over, right? And that happened already, whether that be building skyscrapers with cranes or assembling cars on production lines, et cetera. So what do we learn from that wave? Number one, efficiency always wins. And number two, those of us who know how to embrace that efficiency are the ones who keep their jobs. The factory workers back then who knew how to operate the machines, they stayed. The ones who wanted to keep doing stuff with their hands and didn't learn how to operate the machines, they lost their jobs. So my advice to everybody in this wave we're going through, and by the way, this is going to happen way quicker than the industrial revolution, this is moving at a way faster speed because of all the innovations we have done since then: you need to embrace this technology. You need to learn right now. Don't wait. Don't wait for AI to come to you; you need to go to AI. You need to seek knowledge of how to improve your workflow. If I'm doing a podcast like this one we're doing right now, how can I leverage AI to help me come up with really good interview questions to ask, and even maybe listen to my speaker as he's speaking, converting his voice into text and analyzing what a good next question would be in real time, helping me have a much better, more engaging, viral podcast? That's just one example. Every job, every job, AI can help you right now. And if you're not seeking to learn how it can make your job better, my concern is there will be one of your coworkers who will learn how to do that. They will become 10 times more productive than you. That's what will make you lose your job. So my advice is: don't be afraid of AI. Learn how to use AI to your advantage.”
Julien Redelsperger: “Do you think AI is going to target more white-collar jobs compared to blue-collar jobs? People that are creating stuff with their hands would be less impacted by AI than, I don't know, a content marketer, a business consultant, etc.?”
Amr Awadallah: “So yes and no. I would say yes, AI by itself is more focused on us, the knowledge workers. But AI plus robotics, that is also going after the blue-collar workers. We saw the beginning of it last year with, of course, all of the amazing videos that keep coming out of Boston Dynamics. Boston Dynamics is this company building these very, very agile robots, but they're very expensive. They're very high-end. They're only going to be used in very few use cases. And then Tesla decided to jump in, and Tesla is now working on something called Optimus. If you look at how quickly it's learning and adapting: it's a humanoid robot that's learning to walk, that's learning to see what we do. If I cook eggs, how do I cook eggs? It learns from me: how you hold the egg, how you crack it, how long you wait. As robotics also advances, maybe it will take a bit longer, not like five years as for normal AI, but very quickly I can see them becoming, "Oh, you have a robot now that's cleaning your house for you." And that's the intention with Tesla: to make these robots so cheap that you can buy them for your home. At CES, the Consumer Electronics Show, in Vegas I think, a number of big companies like Samsung and LG and others have been announcing products that are moving in that direction of having these very smart assistants that can help with physical tasks and not just mental tasks. So I would say we're all exposed to it. And in fact, for my kids and for the younger generation, my advice is: the number one skill you need to learn for the future is how to learn, is to be curious. Because we had a luxury in our generation: we could keep our jobs for 40 years, 50 years, keep doing the same job. You will not have that luxury in your generation. Your job might be augmented very quickly, and you might have to learn a new job. So the best skill set to have is the skill of comprehension, curiosity, and understanding, and not being afraid of continuously improving.”
Julien Redelsperger: “So what you're saying is soft skills are somewhat more important than hard skills?”
Amr Awadallah: “Yes, exactly.”
Julien Redelsperger: “And what do you think the mindset of employers and companies is as of today? Do they understand the potential and the impacts of generative AI in the workplace?”
Amr Awadallah: “Again, as I said earlier, efficiency always wins. And that's what's triggering all of the companies and all of the businesses around the world, and governments as well, to be in this race for how they're going to adopt AI. Because they know, again, going back to the industrial revolution: if I can make a thousand shirts per day instead of ten shirts per day, I want that. And if I'm in the modern day and I'm a law firm, and my law firm can now handle a million contracts per day versus just a thousand contracts per day by leveraging AI, what am I going to do? I'm going to try to leverage AI as fast as I can, because if I don't, I know that my competition will, and if they figure it out first, I'm dead. It's not just that my business will go down; I'm literally dead. And that happened during the industrial revolution. The countries and the governments and the companies that figured out how to adapt to the industrial revolution, like Germany, like the US, like Korea, like Japan, truly became the leaders of the world economy, because their throughput became way bigger than everybody else's who was still trying to do things the old way. And that's exactly what's happening here, except now for mental tasks, not just physical tasks.”
Julien Redelsperger: “So we are in the middle of that AI, generative AI wave that started a couple of years ago. If we go into the future, what would be your predictions for generative AI? Where are we gonna be in two years, five years, 10 years?”
Amr Awadallah: “So it is very hard to predict the future; if I could, I'd be a super, super wealthy man. But again, studying the past and then making some calculated hallucinations with my brain on the probabilistic outcomes of the future: I am a very optimistic person by nature, so my belief is this technology will help us do a lot more, will help the average person do a lot more, on the creative side as well. What I mean by that: look at you right now, doing this podcast with me. You couldn't have done this 20 years ago. 20 years ago, you would have needed a massive investment. You would have needed a big crew of engineers around you, operating all of the different technologies needed to do this podcast over a long distance with somebody in another country. You wouldn't have been able to do it, right? And now, because of the advancements in technology, it's costing you a few hundred bucks and maybe a few hours of work to record this podcast with me and produce it. And that is now going to happen for many, many things because of generative AI. So imagine, again, imagine you creating an app, knowing nothing about development, nothing about programming, nothing about coding, being able to describe the app: this is what the app should look like. Describe Snapchat, describe what Snapchat is supposed to do for you, and you think that's a great idea, and the app comes out. So now the average creative person is able to create an app without having to be a coder. That opens up a whole new set of creativity; maybe there will be new apps that only people like that are able to think of, that coders could never think of, for example. Another example is creating movies or TV shows. Today, it's easy for you to do a podcast like this. Today, it's still hard for you to make a movie like "The Avengers," right? Or a movie like "Superman" or "E.T." or whatever; name your favorite movie. "The Matrix" is my favorite movie. It's very hard to make a movie like that. It's very expensive, very hard. That is going to change a lot in the next few years. There are companies like Runway ML, for example, that are at the beginning of being able to do that for you, where you just describe in a few words what you're trying to make, and the video comes out with the special effects, with realistic-looking characters, with the cloned voice of whoever you have chosen, assuming they approved for you to use their voice in your movie. And the movie just comes out on the other end. So I see that happening across many, many, many industries, where we as humans will be able to do things we couldn't do before. That's what excites me. Students at school will have their own one-on-one teacher that is the best teacher in the world, right? That mimics the best teacher in the world and educates them that way. And lots of studies have shown that with a private teacher, with one-on-one teaching where the teacher is with you privately, your performance on average will be one standard deviation, if not two standard deviations, better. So imagine a world where all of us have private teachers really paying attention to us and really teaching us well. And imagine the poor parts of the world, or even the rich parts, where the health system is messed up.
You now have a very good AI doctor that truly is reliable, does not hallucinate, and can solve almost 90% of your medical questions, right? So you only need to go to the doctors for the hardest ones, bringing down the cost of healthcare and making healthcare available to every single human on this planet, on their phone. So there are so many optimistic things I'm looking forward to. That said, at the same time, will jobs be displaced en masse? Yes, jobs will be displaced en masse, and that is very, very worrisome. How will we as governments react to that? The most likely scenario that I've seen from many researchers, and again, I'm not an economist myself, is what's referred to as minimum basic income. The companies that know how to leverage AI will become very wealthy, because they will be very efficient. They will have very high profit margins. They will have to pay bigger taxes. And then these taxes will go to the governments, which then pay them back as minimum basic income to the citizens, so that these citizens can buy the services from the AI companies that are making money. Otherwise that loop collapses. By definition, that loop will collapse if you don't have this nice cycle. If you don't have people with money, then who cares if you're making a great product? Nobody's going to buy it. So that's kind of what I'm predicting will happen 10 years from now. One last prediction I'll make for you as well, since you asked me 10 years out, and that's a very long time away: one of the very interesting technologies I'm seeing evolve very quickly right now is technologies like Neuralink, which are chips that we embed in our brain, with sensors that can read our neural network. As we get closer to that, we're going to have an even tighter collaboration between us and the AI. Today, to talk to the AI, we still have to do it through a phone or a laptop or whatever. Imagine if the AI is very close to your head, where when I look at you, I immediately remember, because of the AI in my head: when is the last time I talked to you? What's your name? What are your kids' names? What are your hobbies? Which movie is your favorite? It just comes back, and we can resume the conversation exactly from that point. Or imagine I look at a very hard formula, and I haven't studied algebra at school, but the AI solves it for me and the answer shows up in my brain: oh, that's the right answer for this formula right there. So there is that aspect as well: there might be this very interesting merge between AI and us that evolves us into even higher creatures. That sounds like science fiction now, but believe it or not, the research for that is happening right now.”
Julien Redelsperger: “So we are definitely into "The Matrix."”
Amr Awadallah: “Yes, I think! That's why movies like "The Matrix," which came out like 25 years ago, were very visionary, that they foresaw things like that. But I also want to highlight that I don't want people to be scared of AI. AI is not like Skynet, and it's not like "The Matrix." AI has no will of its own. It's simply doing what we tell it to do. The fear of AI is that some of us might tell it to do bad things, in the same way that somebody can use a car to run over somebody else. That's not what the car was built for, but we humans, unfortunately, some of us are messed up, and we do things like that. So what we're afraid of with AI is not AI in itself. AI would never wake up one day and say, "Wow, these humans are my subservient creatures." No, there's nothing close to that even happening in anything we do, because it's all about intelligence. It's not about ambition. It's not about need or emotion or social drives. There's nothing of that happening within AI systems. We tell it what to do. So the fear is all about rogue governments or rogue individuals telling the AI, "Can you please help me create a new virus that kills people in four weeks and spreads very quietly like COVID did?" That would be the end of the planet. So that is the scary scenario: somebody evil using AI in a bad way. But AI itself is benevolent. It doesn't do bad things without us telling it.”
Julien Redelsperger: “I don't know if you remember, but last year, I believe, there was this open letter from key players in the AI ecosystem saying, "We need to slow down. We need to pause innovation in the AI world." What did you think of it?”
Amr Awadallah: “It was very clear to us in the industry, in Silicon Valley, working in AI, that that letter was being driven by certain large companies, without mentioning names, that wanted the governments to add regulations to prevent the smaller companies from building AI and catching up with them. It was truly about that. They were lobbying for very strict government regulations that would make it very hard for any small company starting up to develop AI in the open source or in collaboration with others, so that they become the only ones who control AI in the future. I could be wrong, but that's how I read that letter, to be honest.”
Julien Redelsperger: “Thank you so much, Amr. At the end of each episode, the guest of the day has to answer a question posed by the previous guest. After that, you'll have the opportunity to ask a question yourself. So here's your question, courtesy of Bodo Hoenen. He's the co-founder of NOLEJ, an ed tech startup based in France that uses AI for education and instructional design. We can listen to his question right now.”
Bodo Hoenen: “What is it from your business, products, and solutions that is uniquely human, that would provide a tremendous amount of value in a world where AI can do pretty much anything?”
Amr Awadallah: “There are three human problems that we need to solve for us to continue to survive, to be honest. The first one is climate change. If we can apply AI in a very significant way towards helping us reverse that trend, which clearly all of us are seeing get worse and worse, that would be one I would focus on very, very heavily. The other two are education, where, as I said earlier, having strong tutors that can educate our kids one-on-one will make a huge difference to their intellect and their skill sets in the future. And then number three is health. Health continues to be a big problem around the world, even in rich countries. So having smarter AI systems that can help us maintain better health; nothing is more uniquely human than that, in my opinion. So that'd be my answer to that question.”
Julien Redelsperger: “Perfect. I'll keep that and make sure Bodo gets the answer. Thank you so much, Amr. So now, what question would you like to pose for our next guest?”
Amr Awadallah: “So that requires some deep thought, because you don't even know who the next guest is going to be. Ideally, the next guest you have is Elon Musk, but even if it's not Elon Musk, you can still pose this question to the person you're interviewing. The question is: is it really worth it to spend all this money and research to go to Mars, when we have this planet with so many problems that have not been solved yet? Can we solve the problems on this planet first and get this planet to be more peaceful, more healed, more unified before we try to go and spend all of this money to move us to another planet? Let's fix this house first. What's your answer to that? That's my question.”
Julien Redelsperger: “Perfect. Thank you so much, Amr. It's been an absolute pleasure speaking with you today. Thank you for joining me. And that wraps up another episode of AI Experience. Thank you for tuning in, and we'll see you next time.”