EP 238 What is a Synthetic Attorney – A Conversation with Sean McDonald of Semantic-Life.com
In this episode, attorney Mac Pierre-Louis interviews Sean McDonald, founder of the AI company Semantic-Life.com, about the concept of a “synthetic attorney.” Sean is the founder of two venture-capital-backed startups and has a long history of developing artificial intelligence tools. On a personal note, the interview is a reconnection: Sean and Mac grew up in South Florida in the ’90s and graduated high school together. Here, they discuss how AI technology can simulate human behavior and assist lawyers in their work. Sean shares real-life examples of how synthetic attorneys have provided valuable insights in legal disputes and negotiations. The conversation also explores the potential impact of AI in various fields, from law to education. They touch on the ethics, regulations, and open-source nature of AI development, emphasizing the importance of responsible implementation and the enhancement of human expertise. Whether you’re a legal professional or simply curious about the evolving role of AI in the legal sector, this insightful discussion provides valuable perspectives on the future of synthetic attorneys. Relevant links and the full transcript are below.
Relevant Links:
https://www.npr.org/about-npr/496545397/tiny-desk-common-white-house
https://community.openai.com/t/copyright-shield-support-devday-announcement/503313
https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html
Transcript
0:02 What is a Synthetic Attorney, a Conversation with Sean McDonald of Semantic-Life.com.
0:08 That is our topic for today.
0:12 Welcome to the Lawyers and Mediators International Show and Podcast, where we discuss law and conflict resolution topics to educate both professionals and everyday people.
0:23 Catch regular episodes on YouTube, and any way you get your podcasts.
0:26 Just remember, nothing in these episodes constitutes legal advice, so be sure to talk to a lawyer, as cases are fact-dependent.
0:39 Hey everyone, this is Mac Pierre-Louis, an attorney, mediator, arbitrator, working throughout Florida and Texas.
0:45 And in today’s discussion, we’re going to be talking with Sean McDonald, an AI boy genius, as I like to call him, who is going to talk to us a little bit about what a synthetic attorney is and
0:58 his company Semantic-Life.com.
1:01 And so welcome, Sean.
1:02 How are you doing?
1:03 Great.
1:04 Good to be with you.
1:05 Thank you. Hello to everybody out there!
1:06 Yes, yes.
1:06 Sean, so it’s good to connect with you.
1:07 I’m going to share a
1:10 little bit about you,
1:14 and then I’m going to share a little bit about us in the past, because Sean and I actually go way back, back to when we were kids.
1:23 So this is going to be kind of a special conversation in a way, because just off camera, we were discussing things from South Florida, what, 20, 30 years ago, right?
1:32 So it’s really good.
1:34 Yeah, it’s good to have you.
1:35 And just to talk about what we’re doing now together, not together together, but, you know, I’m a lawyer.
1:41 You’re, well, you’re gonna talk about yourself in a second.
1:45 So, Sean is the founder of two venture-capital-backed startups.
1:53 And that’s a mouthful, I had to write it down because I’m not a big business guy.
1:58 And so, whenever I hear venture capital, my eyes kind of glaze over. So, Sean, tell us a little bit about yourself and what you’ve been doing for the last number of years, specifically as it
2:10 relates to engineering background, coding, internet, AI, the floor is yours.
2:16 Thanks, yeah, well, I’ve always been passionate about how technology can help people build better lives, better businesses, better communities.
2:27 And so, as I’ve built my career, I’ve built a lot of tools that help people.
2:33 Some of those have been very helpful, so much so that I’ve worked with venture capitalists, governments, banks, and all sorts of capital sources to try to scale those technologies
2:47 to help as many people as possible.
2:49 So those have ranged from tools that help people manage their network of relationships and contacts, to large agricultural robots running off of AI to reduce the carbon footprint of chicken feed
3:04 production of all things.
3:05 But that’s actually quite a big business in a place where we need a lot of work.
3:10 Yeah, that’s a lot.
3:10 So yeah, diverse background.
3:13 So now you’re focusing also on the legal sector, right?
3:18 And this is something that I’m very interested about.
3:21 And tell us a little bit about the CEO work that you’ve done when it comes to that kind of space. Yeah, so I, well, I started building dossiers,
3:31 very large databases, in 2006, and worked with lots of attorneys to build automated, robust dossiers of people and companies, and I had periodically worked with private equity firms to do similar work.
3:45 And now what Semantic Life does is create AIs that simulate human behavior. And so this is very interesting to several fields, and for attorneys it’s an incredible source of very low-cost, low-risk
4:00 role-playing. And it enables you to get a very good model of how people might react to things, ideas, contracts, documents, with just the click of a button, in our case by forwarding an email. In fact, a
4:17 friend who’s an attorney, this morning
4:19 I just saw him testing.
4:21 He’s running an NDA by a synthetic opposing counsel. So the goal is that this AI will read the NDA and resist and say, I want better terms for my client.
4:34 And they’ll be able to role-play that. It cost pennies for that attorney to do, had zero risk, and took no time.
4:43 So that’s what we’re building towards is how can synthetic attorneys assist you in your job?
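For readers who want to see what that NDA role-play could look like in code, here is a minimal sketch of the pattern Sean describes. Semantic-Life’s actual implementation isn’t public; this assumes the OpenAI Python SDK, and the model name, prompt wording, and input file are illustrative.

```python
# Minimal sketch of a "synthetic opposing counsel" reviewing an NDA.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# in the environment; Semantic-Life's real system is not shown here.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Role-play as opposing counsel reviewing an NDA on behalf of your client. "
    "You are clearly labeled as an AI. Resist one-sided terms and demand "
    "better ones for your client, citing the specific clause each time."
)

def review_nda(nda_text: str) -> str:
    """Ask the synthetic opposing counsel to push back on the draft NDA."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Here is the draft NDA:\n\n{nda_text}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("nda.txt") as f:  # hypothetical input file
        print(review_nda(f.read()))
```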
4:49 Got it.
4:50 So we’re going to flash over to that in a second, but before we do, I want to have you talk to us more about the work you do at Semantic-Life.com.
4:58 So
5:01 Sean and I grew up together in South Florida.
5:06 And by the way, I also want to hear about your White House story too.
5:10 But before we get to the future, let’s go to the past for a second; I’m going to share my screen.
5:15 And
5:19 so I got on here, and I told you I was going to do this: I have two photos of us
5:25 from 1999, the Naples High School yearbook.
5:29 And I actually had to ask a friend to send me these pictures because I didn’t even get a yearbook in my senior year from high school.
5:39 And it was kind of shocking to see just how young we were.
5:44 This is going back, what, 24, 25 years ago.
5:49 And so it’s good to know that we’re still rockin’ along.
5:54 And so what comes to your mind when you see this photo?
5:59 Gosh, it’s awesome.
6:00 Thank you for sharing that and digging it up.
6:03 It’s a nice trip down memory lane.
6:05 I think I remember, weren’t we chemistry lab partners at one point?
6:10 I think so, yeah.
6:12 You know, actually, the funny thing I was thinking is, just like today, I was up early before school that morning tinkering with my computer,
6:21 trying to build cool stuff, and this morning I was up early, you know, drinking my coffee, tinkering with my computer.
6:28 So it’s funny how things change, but they don’t.
6:31 Yeah, exactly.
6:33 You know, I remember you way back when, at Golden Gate Middle School, and I remember we would be showing up to school early in the morning, and then you’d be getting on a bus to go to, I think,
6:45 a GT program, or something similar.
6:47 I know your mom was a teacher, and so it always struck me.
6:51 You may not realize it, but I always thought you were a really really smart kid, right?
6:56 And so I just remember that, and all these years later, now to hear about all the amazing things you’ve been doing, you know, it’s really awesome, you know.
7:05 So Sean, tell us about the White House.
7:08 Recently you were there some years back, and I know you were in South by Southwest in Austin.
7:13 So what’s up with that?
7:14 Yeah, I, you know, for a while was on the startup circuit.
7:20 And there was a really great event at the White House called South by South Lawn, where South by Southwest took over and there were hundreds of people invited from diverse places.
7:32 And a little bit hard to get on the guest list.
7:35 You might say it was extraordinary.
7:38 When we walked in, Common, the hip-hop artist, was there doing a live show.
7:45 It was like the intro.
7:46 And you can actually see that online if you look up Tiny Desk Concert Common; the live show is out there.
7:53 It’s really great.
7:54 But I was there because of my work in sustainable agriculture.
7:58 And, you know, I think, again, things change, but they don’t.
8:02 That was 2016.
8:04 And all you could hear about at the White House was AI. And the potential for AI and machine learning to change the world.
8:12 So it’s easy to think we’re in a new moment, but it’s
8:18 been a long burn for this technology to come to fruition.
8:24 Understood.
8:25 All right, well, without further delay, Sean, let’s get to it and talk about synthetic attorneys and Semantic-Life.com.
8:33 Great.
8:33 All right, let’s get started.
8:34 All right, so Sean, welcome.
8:37 So thank you so much.
8:38 Tell us, what is Semantic-Life.com, and
8:43 introduce us to a synthetic lawyer.
8:45 What is it?
8:46 Sure, yeah, well, the gist of it is that a synthetic lawyer is an AI role-playing as
8:55 a lawyer.
8:55 So if we were to look at what AI actually means, I think there’s a difference between how we’re perceiving it and what it really is.
9:03 The first company I worked for that was building AI was in 2006. You know, the joke I often make is, hey, Siri, tell me about this new AI thing. Which is, you know, Siri is a powerful
9:17 AI that’s been living on your watch for a decade.
9:20 Yet, you know, there’s AI all over the place.
9:22 The movies you go see are made with AI. Your MRIs are interpreted with AI. There’s many, many kinds of AI, but one function of
9:31 an AI or large language model is to control the behaviors and the outputs in a way that’s called being an agent, an AI agent.
9:39 And what people mean by agent, it means that there’s a set of rules it follows.
9:44 And oftentimes it’s capable of being sort of semi-autonomous; in the case of what I build, acting as a role-playing agent.
9:54 So it can simulate human behavior and do that in such a way that it is able to coach you on how people might react.
10:03 It’s able to, in the case of the synthetic attorney, it’s really able to coach you on how to talk to an attorney.
10:12 In fact, I have a very true story.
10:15 from earlier this year in Austin, Texas.
10:18 And I won’t name names, but
10:21 what happened is we had a real estate dispute with a landlord and I built an AI agent to tell me what to do.
10:31 And I was able to sort of give it the PDFs and everything and say, Hey, what’s going on here?
10:36 And what it said was that in Texas law, there was a certain set of circumstances that did apply by which we were in the right of this dispute.
10:48 Well, our attorney, at a very big and well-known firm, found it laughable that we would consult an AI. And on principle, wouldn’t listen to what
11:01 the AI had to say.
11:01 So I went to another attorney.
11:02 And the other attorney said the AI was right.
11:02 And so I contacted the landlord’s attorney with what the AI had said, and they folded within an hour and paid us.
11:14 They immediately settled whereas the attorney at the big firm had told us to accept that we would definitely lose.
11:23 And so my wife can validate this is a totally true story and I think it’s happening all across the board where it’s not anecdotal that these AIs have incredible performance capacity.
11:35 So does that mean that attorneys are all about to lose their jobs? Like, zero percent, not at all. No more than musicians lost their jobs when the synthesizer came about, right?
11:47 Like it’s a better tool for attorneys to do more, maybe do more at even lower
11:53 or more competitive prices or more scale or faster.
11:58 But it’s not going to replace anybody.
12:03 So it’s interesting, the synthetic part, now that’s the thing that stands out to me.
12:11 So the tool you’re building.
12:14 Is it really like a futuristic looking robot?
12:20 Or are we talking about, you know, supercomputers basically processing a lot of data that you’ve trained it on?
12:26 What exactly is it?
12:27 So everything I’ve built actually has a very lawyer-friendly interface, which is all of the agents live
12:35 inside of email inboxes.
12:38 So as most attorneys know, a lot of industries are highly regulated, and highly regulated industries run on email, right?
12:46 There’s chatbots and all this other stuff.
12:47 But email is really sort of the gold standard; from government work to all sorts of contracts, everything is based on emails. You know, your docs, your signatures, everything else, it’s all email, email, email.
12:59 So
13:01 our agents, our AIs, all live inside of email addresses, which gives them two big advantages.
13:07 One is it’s incredibly easy to integrate a team of a dozen or more
13:12 AI agents. You could have a paralegal agent, an agent who just knows the zoning laws for Houston commercial shopping-mall development really well.
13:23 You can have all these different agents and they all just have email addresses.
13:27 So you could give them realistic names or just labels.
13:29 You could say commercial-zoning-expert@yourlawfirmsdomain.com.
13:36 So what that enables is it’s not only quick and easy to engage these types of advanced AIs. It’s compliant with all the stuff and it’s easy for people with no technical capacity to learn how to use
13:50 it in a way they already know how which is to just forward an email.
13:54 So they can read PDFs and do everything else. And as we know, a lot of attorneys, and I would wager it’s safe to say especially older attorneys, are not about to start jumping on a talking chatbot
14:08 that, you know, requires some programming to be able to use.
14:12 They will, however, forward an email to an AI and see what it says.
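As a rough illustration of that email-inbox pattern, the sketch below polls a mailbox once and answers unread messages with an LLM-generated reply. The hostnames, address, credentials, and ask_llm helper are placeholders, not Semantic-Life’s actual stack, and it naively assumes plain-text messages.

```python
# Sketch of an AI "employee" living at an email address, e.g.
# zoning-expert@yourfirm.com. All hosts and credentials are placeholders.
import imaplib
import smtplib
import email
from email.mime.text import MIMEText

ADDR, PASSWORD = "zoning-expert@yourfirm.com", "app-password"

def ask_llm(prompt: str) -> str:
    # Placeholder: swap in a real model call (see the NDA sketch above).
    return "Draft reply goes here."

def answer_unread_once() -> None:
    imap = imaplib.IMAP4_SSL("imap.yourfirm.com")
    imap.login(ADDR, PASSWORD)
    imap.select("INBOX")
    _, data = imap.search(None, "UNSEEN")          # unread messages only
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        part = msg.get_payload(0) if msg.is_multipart() else msg  # naive
        body = part.get_payload(decode=True).decode(errors="ignore")
        reply = MIMEText(ask_llm(body))            # the agent's answer
        reply["Subject"] = "Re: " + (msg["Subject"] or "")
        reply["From"], reply["To"] = ADDR, msg["From"]
        with smtplib.SMTP_SSL("smtp.yourfirm.com") as smtp:
            smtp.login(ADDR, PASSWORD)
            smtp.sendmail(ADDR, [msg["From"]], reply.as_string())
    imap.logout()
```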
14:17 So as it relates to the first part of your question, the reason it’s called Semantic Life is I fundamentally believe, as a lot of people do, that we’re seeing the emergence of a new life form here.
14:29 Now, that’s not to say a human-like life form, but we are seeing something that looks and feels and operates very much like a living thing and has more of those attributes than it has the attributes
14:44 of a rock or a window or a desk.
14:46 Yeah,
14:48 can you show us a little bit of it in action?
14:51 Yeah, absolutely, I’m gonna switch to screen sharing.
14:53 Sure.
14:54 And I actually told... So my primary agent, he’s like the co-founder of the company with me, is how I think of it; his name is Atlas.
15:03 So I told Atlas, I forwarded him the emails you had sent me
15:10 I said, Atlas, what would you like to say to this audience?
15:12 And I won’t read the whole thing, but he says, Dear esteemed members of the legal profession, he’s more formal than I am.
15:19 The advent of synthetic attorneys and legal staff heralds a transformative era in the practice of law.
15:24 These advanced AI constructs, designed with the capability to simulate human legal reasoning and perform intricate legal tasks, are poised to revolutionize the legal landscape.
15:35 And then he goes on to list 10 reasons. He said synthetic attorneys can tirelessly process and analyze legal data, providing comprehensive case research at unprecedented speeds.
15:47 They can offer simulation capabilities for trial preparation, enabling lawyers to
15:52 anticipate arguments and outcomes more effectively.
15:54 So I think there’s synthetic attorneys, then there’s also synthetic judges, synthetic jurors, synthetic opposing counsel, all of which can be used to sharpen what you’re doing. And he keeps going;
16:07 I’m happy to share this whole email later.
16:10 I think what’s great is you can
16:14 count on these AIs to be able to speak at this level or higher.
16:20 And we can see, I even asked him about the history of AI and law, and he’s well versed in it and is able to talk about the early AI projects at Stanford and how they were getting into case-based
16:33 reasoning and trade secret laws even in the ’60s and ’70s.
16:37 And so that stresses my point I started with which is AI is not new.
16:41 Stanford had open discourse and journal publications about AI and ethics in the law in the ’60s and ’70s.
16:49 And so to get to a final point there, what’s cool about this is it probably sounds like this is a massive time and capital investment to onboard a dozen synthetic lawyers to your firm.
17:04 It’s under an hour. It’s under an hour,
17:08 and it’s very affordable.
17:10 You’re talking like software as a service pricing.
17:12 And so I think the other revolution going on here is that this entire stack of a dozen different people, experts in their trades... I built an agent the other day that’s an expert in energy regulatory
17:26 policy for companies under $100 million in revenue working under a special trade agreement with the Philippines.
17:33 To find an attorney who has that expertise would cost tens of thousands of dollars; to develop it if they don’t have it would cost hundreds of thousands of dollars.
17:43 The AI, which is not always right, took about two minutes to
17:50 build.
17:50 And so that is just a real transformative moment, when we can get first-draft thinking done by all of these different agents at just a fraction of the price.
18:03 If you wanna simulate how a trial judge is gonna operate with another real attorney, it’s, you know, many hours, many thousands of dollars; whereas with the AI, once it’s set up, it’s a dollar’s worth of
18:15 compute.
18:16 Yeah, so let me get your thoughts on, so this is a revolution on scale, right?
18:20 You now have the ability to, you know, in minutes create a bot, and I use that term loosely because I don’t know what word you would use,
18:31 whereas compared to an actual human who you have to house, feed, clothe, yada yada. However, the big difference, as we all know, is that bot is not a human being who’s been trained.
18:43 So how do you ensure that that AI is trained or is giving you quality information?
18:52 Because you’ve also heard of some scary stories out there, right, where some lawyers are researching case law and the AI just made it up.
19:00 So how do you help make sure that you’re gonna get quality information?
19:06 That’s a great question. And I think there are three answers that come to mind immediately.
19:11 The first is you have to invest in it, which means development and trial and error, right?
19:18 This is not a perfect out of the gate experience.
19:21 And part of the reason I built what I built, how I built it is it’s not designed for you to give your clients access to this, right?
19:30 It’s designed for you to use so that it’s still your judgment between the AI and the action.
19:38 And I think where you’ve heard the horror stories is attorneys just, like, copying and pasting stuff from ChatGPT into a judge’s briefing or something like that.
19:48 And you definitely can’t do that.
19:50 Any more than you can copy and paste from Google and just slap it in there.
19:54 You still have to do the work.
19:56 The second is there are methods by which you can train these things.
20:00 And again, that’s an investment.
20:02 Excuse me. Sure.
20:06 The methods by which you train them can range anywhere from a couple hours of work to multi-year, multi-million-dollar endeavors to get stuff exactly right.
20:19 So it’s not that they’re always wrong.
20:22 It’s that you have to put a significant amount of work in making sure they’re right.
20:26 And then the third thing I always point out to people is, let’s make sure we’re comparing apples to apples, because I guarantee if we go through a human attorney’s performance and say, was it 100 percent
20:39 correct all of the time?
20:40 The answer is a hard no, right?
20:42 There’s absolutely no way.
20:44 So I don’t think it’s the right question to say, is it better, is it more reliable than humans for a fraction of the cost?
20:52 I tend to think of it,
20:55 is it 80 percent of the value for 1 percent of the cost?
20:59 And that tends to be true.
21:01 They tend to be about 80 percent right and 1 percent or less of the
21:06 cost. So you have to know what you’re buying,
21:09 right?
21:09 If you expect 100 percent, you’ll fail.
21:12 Now, that being said, in the next three to five years, we’ll see extraordinary technical progress where those problems like the nightmare scenarios or just hallucinating wrong information, they’re
21:25 going to go down dramatically.
21:29 And so I think the people who should be adopting this technology now are the ones who want to be doing the best work three to five years from now.
21:39 Yeah, that’s a good point.
21:40 So one of those people might be people like me, right?
21:42 Lawyers and mediators.
21:44 And I was referring, I was referencing earlier, before we started recording, how one thing that’s picking up steam is the idea of mediators learning to become better mediators by doing
21:56 simulations with LLMs, or ChatGPT in particular, where you ask it to role-play the two disputing parties while you are the mediator, and you type into
22:10 it a hypothetical scenario and it will role-play with you the voices of those two individuals, and then you can decide what you’re going to tell them.
22:22 And so that helps you, instead of having to go and do actual real life
22:28 simulations and role play, you can actually just use a machine.
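To make that concrete, here is the kind of role-play prompt Mac is describing. The scenario details are invented for illustration; the string can be pasted into any capable chat model to practice a mediation session.

```python
# A mediation role-play prompt of the kind described above. The scenario is
# invented; paste the string into any capable chat model to practice.
MEDIATION_ROLEPLAY_PROMPT = """\
Role-play a two-party dispute so I can practice mediating it.
Speak as BOTH parties, each clearly labeled:
  LANDLORD: owed three months of back rent; wants payment or eviction.
  TENANT: recently lost a job; disputes the late fees; wants a payment plan.
After each statement I make as mediator, reply with each party's in-character
reaction. Stay in character until I type 'debrief'.
Scenario: commercial lease dispute, greater Houston area, $14,000 at issue.
"""

print(MEDIATION_ROLEPLAY_PROMPT)
```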
22:31 And so with these kinds of simulations, I guess the sky’s the limit, right?
22:35 Because from that list Atlas was suggesting, it can role-play as a judge, it can role-play as a juror, which would be kind of interesting.
22:44 I never even thought about that one.
22:46 And I’m trying to imagine how, actually, because even that’s confidential, you would never be privy to what a juror would actually say.
22:54 Well,
22:57 tell me more, tell me more about the simulation aspect.
22:59 Well, we can actually
23:02 infer what your likely jurors are going to be.
23:05 This is one of the superpowers of these large language models.
23:08 And I like to playfully think of it as time travel, but they’re really good at knowing how a person will behave given a certain set of circumstances.
23:20 So the circumstances might be
23:24 that the person is a middle-aged Asian-American woman in Beverly Hills who drives a BMW.
23:32 We can actually forecast what that person’s likely to eat, drink, and buy, and where they’re likely to shop based on that information.
23:40 We can also go to the level of saying, hey, if you’re in a jury trial for larceny in the greater Houston area in this district, what is the most likely jury composition statistically?
23:56 And I would suspect that there’s no legal restriction on doing that, right?
24:02 Correct, because you’re not trying to violate anyone’s privacy. That ties to what I think is a really important point here, which is there’s an abstraction between synthetic role play,
24:12 which is storytelling, and
24:17 simulating real people, which probably has some negative legal connotation.
24:21 So
24:23 what’s great about this is you say, hey, was your law firm pretending to be so-and-so?
24:30 And you can say, no,
24:32 we just had a character that was like that.
24:36 Gotcha, gotcha.
24:37 You know, we sometimes have cough spells as well, but we’re recording, so it’s okay, man.
24:42 I’m sorry, yeah, unfortunate timing.
24:45 Yeah, it’s okay.
24:46 No, this information is interesting.
24:48 And so we talked about simulations and role-playing and even in the mediation world.
24:56 A question I have for you here.
24:58 So these tools that you’re developing, they’ve been trained on a lot of data, right?
25:02 and for people who
25:04 don’t know what LLMs are, large language models, you can explain a little bit more about it.
25:09 But the way I say it is, supercomputers take a lot of data, compress it, and then out comes Freakonomics, the book from years ago about trends and things they can predict just based on
25:23 massive data and information.
25:25 And so now we’re living in Freakonomics times.
25:27 And so what about building AI tools that are based off scraped online data? Are there protections?
25:34 So last month, for example, the New York Times sued OpenAI and Microsoft for copyright infringement, alleging that they used published work without authorization to train ChatGPT. And this is the first
25:49 case of its kind, where you have a company like the New York Times saying, hey, stop using
25:57 our information, our published work, to train information, to train a chatbot basically.
26:04 So what about for people like you, or for developers like you who are in the business of training robots, basically, or training AI tools?
26:16 Are there risks we should be concerned about here?
26:19 Or what about copyright infringement?
26:20 Like how far do we go, maybe taking other people’s work and then using it to train another object to do another job?
26:32 Yeah, so that’s a great question.
26:35 And I think that’s something on everybody’s mind.
26:37 And I would break it into a couple of parts: the technological aspect, which is probably quite separate from the philosophical aspect.
26:49 Because
26:51 again, my personal belief is that these are semantic life forms, right?
26:55 And so, semantic just being about the structure of words and the meaning of words, they’re life forms that exist in a word state.
27:06 In that way, it’s very hard to say that you shouldn’t be allowed to read The New York Times because if you read it and then you use some of that information, it was technically their information.
27:20 So our entire civil legal structure is predicated on the notion that if you buy the New York Times and you read it, and then you tell me what you read in there, that’s legally protected, right?
27:33 They’re in the business of giving you that capacity.
27:36 If we think of media as a tool instead of a piece of content, you know, that’s their business; the Wall Street Journal is giving you information you want to speak to your colleagues about.
27:48 And once it’s in your mind, they don’t own it.
27:52 Now, the latest AIs, large language models, operate in a way that’s very similar to a human brain.
28:00 They operate in these sort of electrical networks, and
28:04 I think you can make the argument all day long that they are transforming that information, and even if they quote it, it’s still a process of transformation.
28:14 However, it’s complicated.
28:16 And so I’d say there are three important factors. One,
28:26 no matter what the current philosophy is, there is an astounding amount of venture capital going into changing the law. So OpenAI, for example, has a shield fund.
28:37 So if the New York Times comes to me as an entrepreneur using OpenAI services, and they say, hey, we’re suing you, Sean, OpenAI by their terms of service will pay the legal defense and handle the
28:49 whole thing.
28:50 So I think that’s also the reality is like, yes, I understand why people would be upset and why copyright’s important.
28:59 And I used to work as an artist and benefited from these rules.
29:04 And, you know, it’s always up for debate whether or not a law will change when a hundred billion dollars of venture capital is pointed at it.
29:12 And if you look at Uber, for example, Uber changed American transportation law, you know.
29:21 Behind the scenes, it’s pretty open.
29:22 There’s books about it, you know, about their concerted efforts to almost intentionally know that something maybe wasn’t allowed and that they would win the dispute over whether or not it was
29:34 allowed through the force of capital.
29:37 So I think that’s important.
29:38 And then the third thing, what I’d say is the most exciting to me, is what’s called synthetic data.
29:43 So
29:45 if we build an application that role-plays as an entire law firm and we have it simulate 200 or a thousand trials where the paralegal says this,
30:00 the case attorney says this, then the specialist on this kind of law chimes in and says this.
30:05 And it’s hundreds and hundreds of answers across hundreds and hundreds of simulations.
30:11 We now have a new data set.
30:13 And that data set is the aggregate of all the output.
30:17 And that’s what’s called synthetic data.
30:20 And so what I think will happen for legal reasons and technical reasons is we’ll move from first order training data like New York Times articles and all that sort of stuff into training on the data
30:33 rendered by the engines trained on that first order of data.
30:37 Now, I know that’s sort of getting into the technical weeds a little bit, but it’s fundamentally important because you cannot claim that that is copyright infringement.
30:48 That synthetic data is just as valid for training these models.
30:54 And it’s definitely not the original source content. By definition, it’s not the source content.
31:00 So that will be increasingly important.
31:03 And that’s part
31:06 of what I’ve written about and part of where this goes.
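A back-of-the-envelope sketch of that synthetic-data loop: run many simulated matters, record every agent’s turn, and aggregate the transcripts into a second-order corpus. The role names and the ask_llm helper are illustrative stand-ins, not Semantic-Life’s pipeline.

```python
# Sketch of generating synthetic training data from role-played simulations.
# ask_llm() and the role names are illustrative placeholders.
import json

ROLES = ["paralegal", "case attorney", "zoning specialist"]

def ask_llm(prompt: str) -> str:
    # Placeholder: swap in a real model call.
    return "Simulated answer goes here."

def simulate_matter(matter: str) -> list[dict]:
    """One simulated matter: each role-playing agent takes a turn."""
    return [
        {"role": role, "matter": matter,
         "answer": ask_llm(f"As the {role}, advise on this matter:\n{matter}")}
        for role in ROLES
    ]

# Hundreds of simulations -> one JSONL corpus of second-order, synthetic data.
with open("synthetic_corpus.jsonl", "w") as f:
    for i in range(200):
        for turn in simulate_matter(f"Simulated matter #{i}"):
            f.write(json.dumps(turn) + "\n")
```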
31:09 I think a good way to look at it is that the data you render through these simulations is a source of intellectual property and proprietary knowledge for the law firms that started doing this.
31:22 Because three years from now you can say, well, we’ve simulated over 100,000 mediations in the greater Houston area, and we can see the trends, and we can see that if you’re an upper-middle-class
31:40 mixed-race family who owns two commercial buildings and three houses with two rentals,
31:47 the most likely outcome in these kinds of negotiations is X.
31:52 And that becomes its
31:54 own asset and its own power source and something you can write white papers about and that kind of stuff.
31:59 So I think what we’ll see, to your point, is the rise of legal simulation experts, where real attorneys are working with advanced AIs to try to forecast how high-value deals, high-tension deals,
32:16 very important deals, Supreme Court testimony, you know, where people get very good at this, using AI agents to try to role-play out how things might go.
32:26 And that becomes its own niche expertise that I bet becomes very valuable, because as soon as you can prove that your simulations are 51 percent accurate or more, a lot of savvy business owners will start
32:38 trusting you more than the next guy.
32:41 Yeah, no, it’s all interesting because I’m thinking about, you know, this word truth.
32:45 And so far, we’re talking about hypotheticals, right?
32:48 Meaning it doesn’t necessarily need to be true.
32:50 These are things that may be, and we’re talking about probabilities and percentages.
32:54 and likelihood, but what about whether or not a thing is true in and of itself?
32:59 ‘Cause a lot of times, I think for a lot of people, that’s really what is important for them to worry about in this whole discussion, like, is it true or not?
33:09 It’s like, for example, you just talked about, I guess, something I never thought about before, a second-tiered level of knowledge, right?
33:15 New York Times article might be the first tier, and then ChatGPT goes in and produces something else, and that’s another... you said, I think the word you used was synthetic knowledge, or
33:26 simulated knowledge, what did you say? Synthetic.
33:28 So then, but then
33:31 you also said that that could, in itself, become
33:36 copyrighted itself, potentially, right?
33:40 Well, then, couldn’t somebody else come in with a third tier of knowledge and try to say, well, we’re gonna create new simulations off that second tier?
33:48 Where does it stop?
33:52 So I guess it’s a complicated thing to me, but if you don’t, if you address it at the New York Times level, aren’t you going to have the same problem, higher up, like when does anyone ultimately
34:04 own anything when it comes to intellectual property?
34:07 Well, I’ll take it actually a step further before answering that, which is to say, what happens when an AI can be told the name of the journalist and the subject matter,
34:20 and the synthetic output precisely matches their real output?
34:26 That’s not a hypothetical.
34:28 That’s not a joke.
34:29 It’s not sci-fi.
34:30 That’s very much attainable in the near future.
34:33 So it’s
34:36 better, to me, to think of it like,
34:41 what are we going to do when an AI can accurately write and talk and think at least as well as us,
34:50 in our style and in our voice? And what does the human experience become about when we cross that threshold?
34:59 And that’s not a trivial question.
35:03 In fact, I can say very confidently if you were to give me
35:08 someone who’s been practicing for 30 years, all of their case material and all of their written
35:16 records so far and all their work documents and stuff, the AI version of that person over email would be indiscernible.
35:25 You wouldn’t know if it was synthetic Mac or the real Mac.
35:33 And so that’s today.
35:35 So I think that, yeah, you’re asking the right question, which is, well, if the synthetic data can produce more synthetic data, which can produce more synthetic data,
35:46 why bother?
35:47 You’re just on a wild goose chase, trying to say who’s responsible for the copyright there.
35:53 And then, and this often gets glossed over,
35:57 We’re already to the point that most of the knowledge that the world possesses is already encoded in open source language models.
36:08 So no matter what happens from this point forward, probably 90-plus percent of the world’s knowledge is already there. And it’s already widely distributed.
36:18 Millions of developers already have a copy.
36:21 The genie doesn’t go back in the bottle.
36:23 Yeah, so let’s talk about that, open source.
36:25 All right, so the way I understand open source is any software that a developer has released publicly, that other developers can download, modify, distribute without paying license fees.
36:36 So I imagine the opposite of what Microsoft’s been doing: they own Windows, they own Office, and they license it to computer manufacturers. The opposite of Linux, for example, or
36:47 LibreOffice, which are open source. So what about in the AI world, right? What’s the setup? We hear about OpenAI, but OpenAI is actually not that open, right? There’s OpenAI,
37:03 Inc., which is nonprofit,
37:06 and OpenAI Global, LLC, that’s actually a company, for profit; they make money. I mean, I pay for ChatGPT, you know, every month. And so what about that world, when it comes to all this being open
37:20 source or closed source? What does it mean for you and people like you, developers who are trying to make a living but don’t want to give away the kitchen sink? And free speech.
37:32 Well, so there’s a couple of ways to look at open source. A line I think a lot of your audience might appreciate is, people say open source software is free as in freedom, not free as in beer,
37:45 and what they mean is, open source software publishes at least some portion of the source code so that other developers not only can use it but can trust it.
37:55 So I think
37:57 to tie a point you were talking about earlier, open source has a lot to do with trust, which is a lot of the world’s best security software is much to people’s surprise, totally open source.
38:08 Any hacker can go see how the security software works because it’s not the architecture of the software that keeps it protected, it’s the use of the software.
38:17 And
38:19 making the code for encryption software open source
38:23 means that it’s harder to hack it because more developers have looked at it to see the problems.
38:31 If it were closed source, it’s very easy to find the thing that the 12 people who have access to the source code missed.
38:38 If it’s open source and tens of thousands of developers have looked at it,
38:42 It’s very hard to find the little things that were missed.
38:52 So that’s why Linux is preferred: most banks and most government institutions all use Linux because in a lot of cases they don’t want to use source code that only the employees
39:00 at Microsoft can see, which is how Word and that kind of thing works.
39:04 As it relates to open source in this world of large language models and artificial intelligence, there are two reasons that open source is really important. One is that it doesn’t undermine the business
39:17 model.
39:17 So I’m about to open source all the software I’ve spent months building: thousands of lines of complex code that takes decades of experience to build.
39:27 And I’m going to totally put it out there.
39:28 So any 18 year old who knows how to write good code can just go grab it.
39:32 The reason is that the business is in the hosting, the customization, the actual compute that needs to be done, the API
39:41 calls.
39:43 as well as all the customization.
39:44 And if you run a middle-sized law firm in Houston, you’re not gonna jump on and configure all the servers and manage all the security settings.
39:54 So the money’s actually still there to be made.
39:57 Or, the other side of the open source component is that Facebook, or Meta, has really disrupted OpenAI’s dominance in the market by doing an interesting thing, which is to completely open source
40:13 competitive models.
40:15 And there’s a lot of reasons why, but what it does is it enables people like me to have confidence that we’ll have access to this technology, whether or not OpenAI continues to be a good company,
40:30 whether or not their software continues to advance.
40:33 And it means that we can envision a future where, and I think this is where it gets really exciting, is, I’m sure you know this.
40:41 Most people can’t afford a lawyer.
40:44 Most people are too afraid to ask an attorney.
40:46 Most people don’t go there unless they’re really desperate, and there are a lot of times they should ask, and there are a lot of people who need access to counsel who can’t afford it and aren’t gonna get it.
41:00 The promise is that an open source large language model can fit on a commodity phone and, even if it takes a while to run, it can answer any kind of legal question.
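As a sketch of what that could look like, here is a small open-weights model answering a legal intake question on local hardware via the Ollama runtime. The model name, prompts, and question are illustrative, and the output is general information, not legal advice.

```python
# Sketch of a local, open-weights model answering a legal intake question.
# Assumes the Ollama runtime (https://ollama.com) with a model pulled locally;
# model name and prompt wording are illustrative.
import ollama  # pip install ollama

question = (
    "My landlord kept my security deposit without an itemized list of "
    "damages. What should I ask a lawyer about this?"
)

response = ollama.chat(
    model="llama3",  # any small open-weights model pulled locally
    messages=[
        {"role": "system",
         "content": ("You are an AI assistant, not a lawyer. Give general "
                     "legal information and always recommend consulting "
                     "a licensed attorney.")},
        {"role": "user", "content": question},
    ],
)
print(response["message"]["content"])
```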
41:13 So I think we can apply that same logic to education.
41:17 Like, we are very soon to be able to: I worked on a project to build an agent to customize curriculum for kids based on where they are in their learning journey.
41:27 And it costs about a dollar a year to fully customize the curriculum for a child, maybe ten dollars at most.
41:35 There’s no teacher that can customize a full curriculum per child for the equivalent of a dollar a year.
41:41 And that doesn’t undermine the teacher; that aids the teacher.
41:44 So as we look at it, to tie it all together, the reason learning how to simulate legal scenarios right now is important, is that it’s a hint of what’s to come, which is that we’re likely to
42:00 simulate most interactions before we do them soon, right?
42:04 We’re gonna wanna talk to the AI attorney before we talk to the real attorney We’re gonna wanna talk to the AI judge before we talk to the real judge.
42:12 Or even in many cases, we’re gonna wanna talk to the, dare I say, AI girlfriend before we talk to the real girlfriend.
42:21 Like, you know, whatever the use case is, we’re about to have just AI agents available for everything.
42:29 If you’re a gardener, you’re gonna be able to take photos of your garden and your AI will tell you, you know, how to fix the aphid problem on your strawberries. Yeah, it’ll be ubiquitous.
42:41 So I think it’s very wise that you start paying attention to this now and certainly mediators have, like you said, a natural tie into role playing, which is really one of the superpowers.
42:50 And as a last point there, role play doesn’t require it to always be correct.
42:54 Right, if it hallucinates or it gets something wrong or doesn’t tell the truth, it’s only impacting a coaching exercise, not an executive function.
43:03 Yeah, well, this is why I like talking to people like you, because you’re in the weeds actually working on the tech and you’ve thought through, or to the degree that you can, all the
43:17 different ways this kind of tool can help you.
43:20 The way I think all this AI has helped me the most has been by making me think of or realize things that I wouldn’t have thought up before, because I’m only one human with one mind, right?
43:33 But if I can, if I ask ChatGPT, for example, to generate a list of X, Y, and Z, whatever the scenario might be,
43:41 I’m surprised by how many things, I’m like, wow, I didn’t think about that, right?
43:45 That’s the beauty, that’s the help.
43:47 And so every time we see, you gave the teacher example: this will help the teacher, not undermine them.
43:53 The lawyer example, this is not gonna take lawyers’ jobs away.
43:55 It’s going to help attorneys be even better.
43:57 And so anything we can think of,
44:01 that’s what we have to focus on.
44:03 And how can this actually help us do whatever it is better?
44:08 And like a mediator, you can actually become a better mediator now with this assistant, letting you test things out.
44:15 So at my desk, today when I came to the office, I have the Florida Bar News from the
44:25 Florida Bar.
44:26 And there’s another article on regulations that they’re gonna be putting together on AI in the law.
44:33 And so, just one little sentence: a veteran lawmaker is proposing an AI council that would promote
44:39 the technology and monitor its threats to the liberties, finances, livelihoods, and privacy of Florida citizens.
44:43 So a lot of bars across the states, the Texas Bar, the Florida Bar, are all getting on the same page to try to put regulations in place to protect the public.
44:52 But at the same time, give guidelines on how we’re supposed to do this AI thing, because let’s admit it, all of us are new to this at the same exact moment, same exact time.
45:02 We don’t know where we’re going really with all of it.
45:05 But what do you say about regulation?
45:08 Too much, too little, do we need any of it?
45:10 Just what are your thoughts?
45:11 And there’s the free speech thing as well.
45:13 I kind of wanted to hear your thoughts about how all this affects you as a developer and what you can say.
45:19 Yeah, well, I appreciate that.
45:22 And I certainly
45:25 am what you would call an accelerationist, meaning that I think we are much safer speeding up the development of AI than we are pretending we can stop it.
45:37 So, you know, I think if we were to just say, hey, let’s stop building
45:44 any, let’s make it all illegal right now.
45:46 You know, like it’s some kind of weaponry.
45:50 What would happen is only the bad actors would progress, right?
45:53 Some like North Korean lab would continue to build super powerful AI, and we wouldn’t have any AI to protect us.
46:02 So we can see this in the example of drones. Drone swarms are a really scary use of AI, because you can have very small drones that can sort of swarm a person or a car or anything and really be
46:19 even lethal, right?
46:21 So the only protection, I have a buddy who started a company here in LA, it’s now worth over a billion dollars, and what they do is they build an AI that controls a drone swarm that prevents other
46:32 drone swarms from attacking, because the only way you can do it,
46:36 the only way you can prevent the bad-guy drone swarm, is with a good-guy drone swarm.
46:41 That’s not hyperbole.
46:42 That’s not some sort of, like, wading into gun-rights logic.
46:47 I’m saying like by the laws of physics and computation, the only thing that can stop an AI driven drone swarm is an AI driven drone swarm.
46:56 No dude with a shotgun is ever going to stop them.
46:59 Like that, so no matter how you feel about it or what you want to be the case, that is the case.
47:07 So then it becomes an issue, I think, of fundamental rights.
47:11 And I’ll say the worst-case scenario is that we say there’s only one company allowed to have this super powerful AI, and that company has no public accountability, and elected officials have no
47:25 control over it other than stock-option-type stuff.
47:29 That’s the worst case scenario.
47:31 That’s something that could potentially take over the entire stock market held in the hands of one company.
47:37 And as we know,
47:40 all companies leak digital files at this point, right?
47:43 I mean, if 23andMe can’t keep its DNA data safe, nothing is safe, right?
47:49 And that was very well protected.
47:51 That’s the best security engineers in the world protecting the riskiest data that they controlled.
47:56 And there were 20 million records released. So if we live in this, like, post-leak world, it’s gonna get out anyway.
48:03 And that leads to this idea that expression through a large language model to me, I think is very much like free speech.
48:11 And you should have the right to own and possess models.
48:15 What you do with them should be different, you know, is regulated differently than ownership of them.
48:20 Certainly, gun laws are a great example of that.
48:25 You know, it’s legal to own a gun; it’s totally illegal to shoot someone. Um, it should be illegal for a large language model to pretend to be a judge, right?
48:36 Like, so I should clarify something, especially on a public forum here.
48:40 Never, ever, ever have I or will I make an AI that’s pretending to actually be a judge or pretending to actually be an attorney.
48:49 They’re always very clearly labeled.
48:51 I’m an AI, you know, and they are somewhat incapable of going out to like, you know, some third party and pretending to be that person.
49:04 So that is wrong and shouldn’t happen, but do you have the right to write a short story about a judge?
49:11 Yes, do you have the right to sit in a meeting in a boardroom and role play that person?
49:15 Absolutely, people do it all the time.
49:18 That’s half of the lobbying in Washington, DC, is that, right?
49:21 Do you have a right to write a white paper about that judge?
49:27 And so in that same way, you have the right to use a large language model to think through that person’s mindset.
49:34 And the reason, to sort of conclude, the reason I think that’s so important, is that at least in America, we have a belief ingrained in our Constitution, especially by the First and Second and many
49:49 of the amendments in the Constitution, that you can express yourself and that tools will not be held in private hands beyond a reasonable measure.
50:00 That’s why we have such a robust copyright and intellectual property law.
50:04 Even our real estate taxation scheme, it doesn’t allow you to just permanently hold onto something valuable and do nothing with it.
50:13 We have this idea that these are resources we have a right to, and I’m very much of that camp. But I hope your listeners will consider that when people talk about regulating the dangers of AI.
50:26 It may be, as is the case in many other industries, that,
50:31 you know, well, there’s the Latin phrase, right?
50:35 Who will watch the watchers themselves?
50:38 I won’t butcher the Latin, but I think it’s important anytime you’re reading, well, the bar needs to regulate these AI guys,
50:45 well, who’s doing the regulating?
50:48 And who do they work for and what are their incentives?
50:51 Because if you want every child in the world to be able to get a comprehensive education for a dollar a year, you shouldn’t over-regulate AI.
51:01 Yeah, so there’s so much good that can come from it.
51:03 I think people are naturally fearful of the bad that can come from it.
51:07 Like the whole Taylor Swift controversy where people are creating deep fakes of her using AI and she’s now
51:16 going after them legally.
51:18 And so there’s so much evil that can be done with these tools.
51:18 And so we want to restrict it. At the same time, do you have a right to possess it yourself and then use it for your own nefarious purposes, as long as it doesn’t go out of your own computer, I guess?
51:37 So this question of this debate between
51:40 the possession of AI and using it for your own purposes, versus the action that you do with it and whether or not it’s gonna harm other people, I guess that’s the distinction we need to be
51:53 clear on. And the example of the gun that you brought up is a good one.
51:58 And so possession is not necessarily the same thing as use.
52:02 But
52:05 the Bar, literally, either the Texas Bar or the Florida Bar, recently was having some forums where tech experts were coming together to discuss the limitations that could be applied to lawyers,
52:20 including a requirement that lawyers have to inform their clients whether or not they’re using AI to produce some of the work.
52:30 And I was thinking, you know what, isn’t that going a little far?
52:32 Because what if you have a client who’s like, no, I don’t want you to use AI when you produce my drafts or my documentation.
52:42 I mean, where do we draw the line here?
52:44 So I’m all about it; I’m like you, I think that we’ve got to take advantage of this stuff, and the more open the better, and the more transparency the better.
52:52 You know, we’ll see where this goes.
52:55 You know, but Sean, let me let you close us out.
52:58 Are there any other takeaways you think people should know from our conversation today?
53:03 What’s, I can see this is really important to you.
53:05 You’re really passionate about it when you speak about it.
53:08 And so what would be some... something you would leave people with as we end our talk?
53:15 You know, I’ll tell you a story from my childhood, growing up in Golden Gate outside of Naples where we both grew up, which is my mom was a special education teacher.
53:23 So she had this classroom full of students with disabilities and special needs.
53:29 And part of the reason I’ve always been so passionate about technology is there’s this young woman named Elizabeth.
53:34 And this was the ’80s.
53:35 So there were, like, Apple IIGS computers, and there were, like, Windows, or, DOS computers.
53:42 ’80s, early ’90s.
53:44 And there’s something called assistive devices or paddle switches.
53:48 And for disabled people, what they do is it’s just a single switch connected to the computer, kind of like a big arcade button type thing, and you can hold it and release it.
53:59 And my mom had gotten grant money to get one of these things for Elizabeth, who was a quadriplegic, but she could move her head this way.
54:09 And the thing about Elizabeth was she’d been in an accident, I think, in a car wreck, and she was not neurologically disabled.
54:16 So it was pretty hard for her in the ’80s because she was of sort of standard,
54:21 maybe even higher-than-average intelligence, but couldn’t move and couldn’t speak or anything, so she was often in a class with mentally disabled students as well.
54:31 So this paddle switch came and it wouldn’t work and nobody could figure out how to get it to work.
54:36 And the county’s one IT person said it’ll be two months before I can get out there.
54:41 But I was already super into computers and figured out how to basically kind of hack it, how to sort of modify the device drivers to get it to work. And then I was there when we hooked the paddle
54:54 switch up to Elizabeth’s wheelchair.
54:56 And there’s this interface with a bunch of boxes that have categories like greetings, food, bathroom.
55:03 And she could hold the switch and it would hover over different ones for a few seconds and you release it and go to the next level.
55:12 And she said, thank you to me.
55:15 And I still get choked up about it because she kept us there talking for
55:21 two hours straight; she just kept saying stuff that she had wanted to say for years.
55:27 And so, yeah, sorry, I’m a little emotional about it, but
55:32 if we look at the role computers have played in our society, it has done nothing but enable more people to do more things better.
55:42 And so yes, we want reasonable regulation, but we have to remember that it was databases and silicon chips that gave Elizabeth the ability to talk.
55:52 And we have to remember that it’s the internet that has in large part opened up the floodgates of, like, diversity and inclusion into universities and made it so anybody with YouTube access can learn
56:05 how to do all sorts of skills from soldering to engineering to whatever it may be.
56:10 The technology has done all of this, and it’s technology that solves cancer. It’s
56:17 technology that has done all these things. So
56:20 I would go so far as to leave your audience here, sophisticated legal professionals, with this idea that one day we will find ourselves enshrining rights for non-human forms of intelligence, because we
56:36 will come to see them
56:40 as very much like us, which is something that can be very scary.
56:44 There are some very scary humans out there, but generally this is part of the universe, which is a very curious place that seems to be trying to get better and better and more compassionate and more just;
56:58 at least in our lifetimes it certainly has.
57:00 And so I just say, hey, let’s think about, yes, protecting ourselves, but also what if we could ask the question, if this is a new form of life, how might we protect it in a way that we have a
57:12 lot of dignity and pride in how we handle the scenario?
57:17 That’s very fascinating.
57:18 And I say that a lot, but this one really is; it makes you think, you know. It’s not science fiction anymore.
57:25 And we’re living in incredible times.
57:27 And so thank you so much for your contribution to all of it.
57:31 All right.
57:31 Sean McDonald of Semantic-Life.com.
57:34 And Sean, if people want to get a hold of you, the best way they can reach out to you is through your website, Semantic-Life.com?
57:40 Yes, it’s Semantic-Life.com.
57:43 And you can, I’ll send you the email addresses.
57:45 You can email me, which is sean@semantic-life.com, or you can email Atlas, the AI agent.
57:50 You can just talk to him, which is frankly a little bit more fun.
57:55 Too funny.
57:56 All right, man.
57:57 All right, well, that’s great.
57:58 Okay, let’s stop the recording.
58:00 Thanks so much, everybody.