Making Change

Episode 5: Future of Artificial Intelligence

Clark Nuber PS Season 1 Episode 5

From self-driving cars to virtual assistants, artificial intelligence is revolutionizing our world. In this episode, we explore its incredible potential, ethical implications, and the awe-inspiring innovations that lie ahead. Join our host Matt Sutorius as he speaks with Kevin Merlini, Entrepreneur in Residence at the Allen Institute for AI. 

Matt  0:12  
Welcome to Making Change, the CPA podcast that has nothing to do with accounting and everything to do with innovation. I'm your host, Matt Sutorius, and today we're speaking with Kevin Merlini, Entrepreneur in Residence at the Allen Institute for AI.

Kevin, thanks for being on the show today.

Kevin  0:37  
Thanks, Matt. Excited to be here.

Matt  0:39  
Kevin, tell us a little bit about your role at the Allen Institute and how you got involved with the AI2 incubator. Just give us a little background for our listeners.

Kevin  0:47  
Sure. I'm working with the Allen Institute for AI, which is a fairly world-renowned innovator in large language models and other forms of AI. It was founded by Paul Allen and Oren Etzioni a number of years ago. That's the research arm of the institute, but they also have an incubator that merges the leading research they do with startups working in the field of AI. What the incubator does is provide support and resources, access to researchers, and advising in the early stages of the company. So we're actually starting a company within the incubator, looking at applying AI within the accounting space.

Matt  1:27  
Yeah, that's cool. I like the incubator model generally. I think it works well for super early startups, because you don't want to go spend a ton of money on the type of stuff they can give you. It's sort of a group rate, basically. And you get to work with other founders, so it's a cool model.

Kevin: Yeah, yeah. It's been great. 
 
Matt: What about you personally, like, why artificial intelligence? Has this always been something you've wanted to work in, or is it a recent interest? What do you like about it?

Kevin  1:56  
Yeah, so I think I've always been interested in technology generally. In college I was writing my thesis, which was actually on student debt, but I went down this whole rabbit hole on the impacts of automation from an economic standpoint, and got really interested in the impact of AI and automation on the economy. After school I was working at Amazon, where I started working on applied machine learning projects within the scope of my job, and I participated in some of the stuff we were doing on AI policy, because I've always been interested in the intersection of society and technology. Then I ended up going to Facebook, where I was working on a couple of things, one of which was building models to understand news content. How do you understand the quality of news? Is it original? Is it informative? These are really interesting technical problems, but also messy socio-technical problems. I was also thinking about how you align ranking and recommendation systems with human values. All of that is interwoven with AI, machine learning, and how it interacts with society and people.

Matt  3:13  
I mean, that type of technology has been around for a while now. But it seems like recently, over the past couple of months, everybody's talking about AI. There are front-page news stories about ChatGPT. All this stuff is really in the public discourse in a way it wasn't before. What do you think the reason for that is?

Kevin  3:35  
I've seen the phrase "the iPhone moment" thrown around a lot for ChatGPT. It's really accessible, right? For the first time, someone can go in and easily interact with and leverage these capabilities in a way they couldn't before. People kind of describe it as magic the first time they use it. And of course, what's happening on the language model side is happening at the same time as the generative image stuff like DALL-E, which probably drove it even more because it's just so visible and so clearly viral. AI is, and has been, an overloaded term. Historically it's referred more to different advanced machine learning techniques that maybe predict one thing, whereas the latest stuff going on in AI is much more flexible and much closer to people's sci-fi vision of what AI should be.

Matt  4:28  
One thing I'm really interested in is this sort of ongoing debate of how do we use this in places like schools, where, you know, on the one hand, you might immediately think we have to ban any kind of AI, because it'll just write papers for the students. But then, you know, you take that one step farther. Well, if this technology exists, why not teach them how to use it really well and evaluate the output? Are there other things like that, you know, unforeseen uses or effects that you feel like this sort of technology is going to have that people haven't completely thought through yet?

Kevin  5:01  
The school use case is super interesting to me because it's really valuable, right? On one hand, everyone has their own custom personalized tutor: an infinite number of practice questions, personalized to your textbook, all sorts of things. But outsourcing your thinking is probably not good, so the longer-term education impact is hard to say. I feel like there's a difference between a calculator, where you're outsourcing arithmetic or something, and something that's outsourcing the structural thinking that you do. When you write an essay, it's not just about the thing at the end, it's the process of getting there, right? And so that could be worse. As far as unforeseen effects, one meta point is just that AIs themselves are complex systems, and society is a complex system, so you have all these feedback loops that lead to unpredictable failure. One predictable known unknown is the fact that you have those feedback loops, and that could lead to some unpredictable failure. And then maybe this one's not unpredictable, but I think you're going to see a very similar techlash phase to the one social media went through after 2016. It's already happening to a degree. There are so many parallels with content moderation, governance, copyright, all of these things being rehashed with AI.

Matt  6:18  
Yeah, I read that story about a New York Times reporter who had spent a day with one of the AI chatbots out there. It was like a horror story of the weird things it said, and it fell in love with him at some point in the conversation. And that story was everywhere for like a week. And I was thinking, is this the backlash already? We're barely even using this stuff, and already you're getting this kind of media coverage? How do we deal with that?

Kevin  6:48  
Yeah, I mean, I think the media reacts negatively to a lot of things, as a classic reaction, and it's a shame that it shapes people's perception of these things unfairly negatively. But it's always good to be skeptical and critical of changes that are happening, so I think there are probably positives to it.

Matt  7:10  
It seems like one helpful thing about this technology is that it's very user friendly, at least in all the interfaces I've been exposed to in my job, and even personally playing around with this stuff. The UIs are really user friendly: you go on, you ask a question, and it tells you something. I feel like that will make adoption easier for a lot of people than, you know, the idea of learning to be a coder or a software developer was when I was in college. That's hard, and you have to go to school to learn it. This seems easier. But am I just seeing the back-end result of something people spent a ton of time on, and it's actually really hard to do? What do you think on that?

Kevin  7:53  
Definitely. So much easier. I think that also answers the earlier question about why it's blowing up the way it is. One point of view I've had on this, as someone who's done ML as part of their career: you used to spend all this time building a model, a classifier that does one thing. It says, this is this, or this is this other thing. You have to get all this training data, you build it, you train the model, and it does one thing, and it's like, great. And in order to do that, you would have had to do all of this technical stuff along the way. But now, to do that same task, I could just say, hey, GPT, what is this? Classify this into two buckets. And it will work. So all of these capabilities are accessible: the things that were possible before but hard, anyone can now do immediately, and there are also all of these net-new things that were never possible before.
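The shift Kevin describes, from training a bespoke classifier to just describing the task in a prompt, can be sketched in a few lines. This is a minimal illustration, not any particular product's API: the `stub_llm` function is a hypothetical stand-in so the sketch runs offline; in practice you would send the prompt to a hosted language model.

```python
# Zero-shot classification via a prompt, as Kevin describes: no training data,
# no model fitting -- just describe the task in plain language.

def build_classification_prompt(text: str, labels: list[str]) -> str:
    """Describe the classification task in plain language."""
    label_list = " or ".join(labels)
    return (
        f"Classify the following text as {label_list}. "
        f"Answer with only the label.\n\nText: {text}\nLabel:"
    )

def stub_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call, so this runs offline."""
    # A real implementation would send `prompt` to a language-model API
    # and return its completion.
    return "news" if "reported" in prompt.lower() else "opinion"

prompt = build_classification_prompt(
    "The city council reported a budget surplus on Tuesday.",
    ["news", "opinion"],
)
print(stub_llm(prompt))  # classifies with zero training examples
```

The contrast with the old workflow is the point: the labeled dataset, feature engineering, and training loop all collapse into the prompt string.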

Matt  8:52  
How is this going to impact creative people? What is the role of AI in the arts, music, the more creative fields? Like you mentioned, it's able to generate images: you go on DALL-E and tell it to make a picture in this style with these elements, and it does something super cool that I could never paint or create myself. Will creative people partner with this and make it part of what they use, instead of treating it as a competitor?

Kevin  9:24  
So many interesting things here. I think a real artist plus AI is probably going to be better, ultimately, than, like, me plus AI doing art.

Matt  9:31  
Yeah, mine would not be very good.

Kevin  9:35  
I think it's a new medium. People said similar things about photography, right? If you look at art before photography, it was all very fine art and realism, and then it became more experimental, or just different styles, after that. So I think it changes things. But I wouldn't want to invalidate artists' concerns. For one, there's just something about craft that probably shouldn't be lost, right? And then there are also all the valid concerns about copyright and attribution, all of that playing out. I think your question is also about how people can partner with it. I guess it's still an experimental phase. I saw something on the music side, which is a little bit behind images and videos, but there was this Drake AI cover or something that was fake but sounded very real, and it blew up. And Grimes said, I don't know how this played out, but she said anyone can go use her voice and her likeness and make music, as long as she gets the royalties.

Matt  10:37  
It's mind-boggling to think about stuff like that. One of my favorite authors is Neal Stephenson. He's in the headlines for tech stuff every couple of years because he coined the term "the metaverse" back in the early '90s. His more recent books have prominently featured deepfakes, where clearly AI-generated things have happened in society and people don't know how to respond to them. And I just wonder, is that what it's going to be like 15 years from now, where you hear a new Drake song and you don't know if it's Drake or if it's Grimes remixing AI? I don't know that I can wrap my mind around how much the world is going to change, honestly. It feels like in a lot of ways it'll just be baked into everything we do. Cars will drive better and there will be fewer accidents, and the stuff about business that isn't very much fun will ideally largely be done by computer programs like this. That all sounds pretty positive to me.

No one has gotten a good handle on how to regulate things like social media, which at this point is not a brand new technology. So the governments, or any other regulators, have no idea how to deal with stuff like this, in my opinion. Do you think they should? I mean, should this be the kind of thing that is government regulated? Or can we trust private enterprise with artificial intelligence? Where do you fall on that continuum, and where do you think society will land?

Kevin  12:15  
Man. Yeah, having been involved in some of the social media regulation stuff, or at least having been impacted by it as part of my work: it's not as black and white as everyone likes to think or portrays in different narratives. I think it's tough because there's a need for regulation, but there's also a need to balance that against stifling innovation. And then there are all sorts of other considerations, like strategic geopolitical considerations that are above my pay grade. But like you said, governments can't move fast enough, and it's not just our government or one government. Not all governments are good, based on whoever's value system you want to use. But at the same time, businesses' incentives aren't as aligned as they would like to claim. And I think this is going to be a lot tougher to regulate than social media, because social media is centralized: it takes network effects to build a big social media network that actually has impact on people, and because it's large there are a few nodes of influence and power there. But this technology can be decentralized. The weights of a model are decentralized; anyone can run a model on their laptop and create fake news or whatever bad thing they're doing with it. So I think it's also just harder to regulate. And there's the same problem as with social media, the pattern where, whether it's a tragedy of the commons or something else, the regulation in one country impacts the others, and we don't necessarily have uniform values shared across countries. I think the least common denominator is probably the UN human rights framework, but it's very basic, and it doesn't get into all the other hairy issues that come up and need to be regulated.

Matt  14:12  
Yeah, that's gonna be a tough problem to solve.

What's your favorite portrayal, and maybe what's the most accurate portrayal, of AI in film? Because this has been a thing people put in movies for years, going all the way back to Terminator, probably before then. What's one that we should look at and be like, oh yeah, that could really happen, that's a real thing?

Kevin  14:40  
Man, lots of good ones. I was trying to think of the most realistic portrayal, but the less realistic ones are probably more entertaining, because the realistic version is just somebody sitting in an office like this, right? Yeah, Ex Machina is pretty cool.

Matt  14:57  
Yeah, I like that one. That's the one I was going to pick, too.

Kevin  15:00  
But the one that is maybe most accurate in terms of feasibility, and weirdness, is Her.

Matt  15:08  
I thought you were going to say that. Yeah.

Kevin  15:10  
That's already happening. We didn't talk about it among the weird unforeseen impacts, but there was all this outrage when they took away some of the explicit nature of those chatbots. I forget if it was Character.AI or one of the other providers. People were really upset because they kind of had their virtual partners online, right? And a centralized decision maker, like, lobotomized them, or altered their personality in some way.

Matt  15:37  
It brings up so many second-order problems to think about. What rights does an AI have? What happens when people inevitably, which is already happening, fall in love with AIs and want to marry them? And when those are switched off, is it murder? It's once again stuff way above my pay grade. What does this look like 10 years from now, 20 years from now? We talked to a guy on the podcast a couple of weeks ago who predicted that by the year 2050 we'll have Nobel Prize-winning discoveries on a weekly or daily basis from AIs. Is that realistic? Do you think we're really going to see that? Is that where we're going with this?

Kevin  16:25  
So there are a few factors influencing how this plays out. A lot of people think there will be continued performance gains, and humans in general tend to underestimate exponential growth. So if the gains continue at this predicted rate, and it is truly exponential, then I guess things get really weird really quickly. But I don't know how that plays out. I do think you'll probably see it become ubiquitous, like you talked about earlier, just kind of embedded in everything, for everyone, soon. Ten years is so far out that I have no idea. Some shorter-term things: right now everything is kind of online, in the cloud, with these large models, but I think you'll see shrinking models, models that run on your phone, pretty soon. Apple will probably come out with their own, optimized for their chips, and then it's really fast, it's private, there's no going to a server or anything. On the model ecosystem: right now the best performance is run by OpenAI or one of these large labs, but over time you'll probably get n minus one, so slightly worse but still really good, models that continue to be open source, which will be interesting. And then the other area that's interesting is agents and networks of agents. So AI where you give it a task, and it says, oh, what are the tasks I need to do? It makes a plan and then it goes and does them, because it also has the ability to interact with and use software, right? Those are all things that maybe sort of exist now, but if you ask in five years, they'll exist and they'll work really well. And that would be really interesting.
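The agent pattern Kevin describes, make a plan, then go execute each step, can be sketched as a small loop. Everything here is a toy stand-in: the `plan` and `execute` functions are hard-coded assumptions so the sketch runs on its own, whereas a real agent would ask a language model to produce the plan and would call actual software tools for each step.

```python
# Toy sketch of an agent loop: plan first, then execute each step in order.

def plan(task: str) -> list[str]:
    """Hypothetical planner: a real agent would ask an LLM for these steps."""
    return [f"research: {task}", f"draft: {task}", f"review: {task}"]

def execute(step: str) -> str:
    """Hypothetical executor: a real agent would call software tools here."""
    action, _, subject = step.partition(": ")
    return f"{action} done for '{subject}'"

def run_agent(task: str) -> list[str]:
    """Given one high-level task, plan it, then do every step."""
    results = []
    for step in plan(task):            # "it makes a plan..."
        results.append(execute(step))  # "...and then it goes and does them"
    return results

for line in run_agent("summarize Q3 filings"):
    print(line)
```

The interesting engineering lives in the two stubs: a real planner has to decompose arbitrary tasks, and a real executor has to drive browsers, APIs, or other programs, which is exactly the "interact with and use software" ability Kevin points to.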

Matt  18:07  
I was sitting this morning with my wife having coffee at like 6am, and she said she read this thing where an AI had passed the bar exam, and passed whatever exam you take to become a medical doctor, but keeps failing the CPA exam, the Certified Public Accountant exam. She said the takeaway is that the bar exam and apparently medicine are simple enough, or have a similar enough structure, that AI can understand them, but our accounting rules are so ridiculous that a computer program can't parse them correctly. I don't know if you saw that, or if it's accurate, but...

Kevin  18:46  
I have a slightly different take, which is that those papers looked at applying an off-the-shelf large language model directly against the CPA exam, and it had varying performance on different sections. Sections that were largely knowledge based or reasoning based, it did really well on and can handle. But large language models out of the box aren't good at math. That's not what they're meant to do. That's fine, though; it's easy to mitigate and overcome. You just have the model say, oh, does this involve math? Then let me go ask this other tool that does know how to do math. And when you use those tools, it's actually perfectly capable of handling it.
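The mitigation Kevin describes, detect when a question needs arithmetic and hand that part to a tool that is actually good at math, can be sketched as a simple router. The detection rule and the stub model response below are illustrative assumptions, not any particular product's behavior; a real system would let the model itself decide when to call the tool.

```python
# Route arithmetic to a calculator tool instead of letting the language model
# guess; everything non-math falls through to a stub standing in for the LLM.
import ast
import operator
import re

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator(expr: str) -> float:
    """Safely evaluate a simple arithmetic expression via the AST."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval"))

def answer(question: str) -> str:
    """Does this involve math? Then ask the math tool; otherwise, the model."""
    match = re.search(r"[\d.]+(?:\s*[-+*/]\s*[\d.]+)+", question)
    if match:
        return str(calculator(match.group()))
    return "stub LLM answer"  # stand-in for a real language-model call

print(answer("What is 240 / 3 + 20?"))  # -> 100.0
```

In production systems this routing is usually done by the model itself emitting a tool call rather than by a regex, but the division of labor is the same: the language model handles the reasoning, the calculator handles the arithmetic.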

Matt  19:28  
Good. I was worried that we'd screwed up the accounting standards so much that they were unparsable by any technology. That makes me feel better.

Kevin  19:36  
Yeah, I do think the interesting thing is that, especially in accounting, there are a lot of gray areas of interpretation. So there isn't always one right answer; there are just interpretations. But I think that's what these models are actually good at. It's not about getting the one right thing; it's about laying out the decision space, the potential outcomes and considerations, helping you navigate that, and figuring out the pieces of information you need. I think it's actually equally good at those sorts of things.

Matt  20:08  
But that's why the kind of stuff you're working on, where you can have a conversation about an accounting question, or the accounting rules or standards, is so much more valuable. And that probably extends to lots of other industries, too. When you don't have black and white rules, being able to converse and figure out, what's the right question I need to ask here? What things do I need to consider? It's more of a discussion than just an answer, because if it were just an answer, everybody could Google it, and that would be easy. So I think stuff like that will be really useful in the future.

Kevin  20:44  
Yeah, I'm excited to see how it plays out. And hopefully it is net beneficial for society. I think it can be, in a lot of ways.

Matt  20:52  
Well, I'm excited about it. Thanks to you, Kevin, for being on the podcast today. Great conversation and really appreciate your time.

Kevin  20:58  
Yeah, this was a lot of fun. I'm sure we could talk about this all day.

Matt  21:04  
And that's our show. Thanks to Kevin for speaking with us, and to you for listening in. Join us next month as we discuss the future of investing with Josh Hale, CEO and co-founder of Citizen Myth.