About Bryan Ilg and Michael Paik
In this episode of the Managing Partners podcast, Kevin Daisey hosts Bryan Ilg and Michael Paik from Babl AI to discuss the impact of artificial intelligence on the legal profession. They explore the importance of AI governance, compliance, and the unique challenges lawyers face when implementing AI technologies. The conversation highlights Babl’s educational offerings aimed at helping legal professionals navigate the complexities of AI, as well as practical applications and risk management strategies for law firms. The episode concludes with insights on the future of AI in legal practice and the necessity for lawyers to adapt to this evolving landscape.
Takeaways:
- AI is becoming ubiquitous in various sectors, including law.
- Babl AI focuses on governance and compliance in AI usage.
- Lawyers need to understand the risks associated with AI.
- AI can enhance efficiency in legal workflows.
- Vendor management is crucial when adopting AI technologies.
- Legal professionals must be educated on AI governance frameworks.
- AI can help lawyers manage client relationships more effectively.
- Understanding AI’s stochastic nature is essential for legal practice.
- Babl offers specialized courses for legal professionals on AI.
- The future of law will increasingly involve AI technologies.
Episode Transcript:
Kevin Daisey (00:31) All right, what's up everyone? We're recording another episode of the Managing Partners Podcast. I got a unique crew on here with me today. I've known Bryan for a little while, actually through some other business. And he has Michael on here today. And we're gonna talk about AI governance, which is not a topic we've covered. AI is a hot topic, obviously. We talk about it all the time. I'm actually giving a talk tomorrow on AI search. So AI is just everywhere right now. And so this is something that might be new to you. And I'm excited to have them kind of talk about what they do and the importance of it. So they're with Babl. That's B-A-B-L. Babl AI. So you can go to babl.ai to check them out. But I'm gonna have them kind of talk about what Babl is and their backgrounds, and we're going to dive into this governance talk, and I'm interested to see where we go with it. So, Bryan, Michael, welcome to the show. Bryan Ilg (01:25) Thanks for having us, Kevin. Excited to be here, for sure. And yeah, excited to talk a little bit about what Babl does, AI governance, and also how it impacts the legal space, the compliance space. That's a big, big topic that we deal with. And Michael's our general counsel and has just recently created an AI governance certificate for legal professionals, which is aimed at really opening the eyes of legal professionals about all the unique fun stuff that AI creates for lawyers to solve and figure out, you know, with respect to the unique risks and challenges of AI. So we're excited to talk a little bit about that today. Kevin Daisey (02:09) Yeah. Um, and Michael, I'll let you go in just a second. But you all also have, for our listeners, which I really appreciate... uh, if you go to babl.ai, you guys have a discount code for anyone listening to this episode, or if you hear it in the future. This code is only given to you all, so, uh, I'll let you tell them about that.
Then, uh, we'll have Michael kind of give us his background, because it's, uh, extensive, and I can't wait to get talking to him. So. Array Digital (02:43) Thank you for tuning into the show today. I have taken things to the next level and I've started the Managing Partners Mastermind. We're a peer group of owners looking for connection, clarity, and growth strategies. So if you're looking to grow your law firm and not do it alone, please consider joining the group. Spots are limited, so I ask that anyone reach out to me directly through LinkedIn and we can set up a one-on-one call to make sure it's a fit. Now back to the show. Bryan Ilg (03:13) Yeah, perfect. Yeah. So if you go to babl.ai, B-A-B-L.ai, you can go to our courses and use the coupon code LAW30 for a 30% discount on any of the education that we have on the site. Like I said, we have the legal professional certificate, but we have lots of other things with AI auditing and all those things. And maybe it's good to just talk a little bit about what Babl is before I kick it over to Michael. I'm the Chief Sales Officer, so helping get our message out. We are The Brown Algorithmic Bias Lab. So we started as a research firm back in 2018, studying the best way to map, measure, manage, and govern AI. Some of our research contributed to the NIST AI Risk Management Framework. We were one of the initial consortium members there. We pride ourselves on our research. We have an awesome research team that's constantly researching edge cases for AI use. And we've taken that research into practice and we actually are out providing audit assurance for AI systems. We've done lots in the HR space.
We've done lots with kind of edge cases, autonomous vehicles, facial recognition, really any way that AI is, you know, being deployed. We have a systematic approach to break it down, spot and identify risks within the governance, within the testing, any sort of bias, what's not being looked at, where does the desired outcome of that AI system get off track? So we can kind of start you with this point in time and you want to get to this point in time, and we can help you measure and manage your way towards that. And that's our special unique gift. And we're hoping to explain what lawyers should really be aware of within that process: the risks that businesses face by implementing AI into core, you know, profit models, revenue streams, business processes, business decisioning and judgments, which were normally left up to humans. Now we're codifying them, and what risks those bring into the world, and how governance really impacts that. That's the best I could do to set you up, Michael. Kevin Daisey (05:18) Yeah. Well, I'll just add before Michael goes, we're never going to let Michael talk, okay? He's just the guy behind the curtain. He's like the Wizard of Oz over there. But... Bryan Ilg (05:18) Go for it. Michael Paik (05:20) Thanks, Frank. Bryan Ilg (05:24) That's the ball, Michael. Michael Paik (05:23) Ha ha. The time is being charged though, so... Bryan Ilg (05:31) Yes, there we go. Kevin Daisey (05:32) Well, you know, I go to legal conferences. I do this podcast. I have lawyer clients all around the country, and all lawyers and firms are being just pushed and told: use AI, implement it here, there, everywhere. And so it's, you know, it's just being pushed quite often. Some companies just started, some are new. Some of these lawyers are doing it themselves using, you know, OpenAI or ChatGPT. And so, you know, I just lay that out there before we dive in.
So, Michael, since you're charging me by the hour, let's go ahead and have you introduce yourself. Michael Paik (06:00) Hello. So I'm Michael Paik. I'm the General Counsel for Babl AI, and I'm actually also an AI auditor. And that's how I actually got into the firm. I took the courses and qualified, and I liked these guys so much I joined the company. So I've been with Babl a little bit under a year, but I've been practicing for over 25 years. And my background is, you know, I started off in New York on Wall Street, kind of a corporate practice, then went to Silicon Valley. And then about 20 years ago I returned to Korea, where I'm residing now. And at the time there wasn't that much work in venture capital, which is what I was doing in Silicon Valley. So I kind of pivoted into building out risk management systems for large conglomerates, industrials. So tires, steel, shipbuilding, large companies with many public companies in their system operating worldwide. And similar to some of these startups, their compliance infrastructures were inadequate. That's what I focused my time on. And then a couple of years ago, as ChatGPT rolled out, I started using it for myself and then building AI apps and then getting interested in kind of the compliance around AI usage. And that's how I found Babl and ended up joining them. My previous experience with AI is from 34 years ago: I studied linear programming, because before I went to law school I did operations research. And, you know, in practice I had an expert systems client, and then I was on the board of a computer vision company for many years. So I had some experience with it, but, you know, it really kind of picked up a couple of years ago, just like with everybody else. Kevin Daisey (07:39) Yeah, yeah, for sure. I mean, we know it's been around for a long time. I just feel like most people think it just came around in the last couple of years. So ChatGPT has really blown things up.
Bryan Ilg (07:53) Yeah, I think. Michael Paik (07:54) Yeah, and the algorithms are everywhere now with social media and shopping, Amazon. It's pervasive. It's just that we didn't think about it in this way pretty much until ChatGPT crystallized it for many of us. Kevin Daisey (08:08) Bryan, did you have something to add to that? Sorry. Bryan Ilg (08:08) Yeah, definitely. I was going to say, yeah, I do a lot of reading in this space, but yeah, most people are like, AI has been here since, like, you know, ChatGPT. But I think back in like the 1930s is when it was mathematically proven that you could have automated decisioning through a machine. So it's been around for quite some time, at least at its most elementary level. But yeah, how it's being used in business today, since ChatGPT, is really disruptive to everything. So, you know, most people aren't really aware of all of those risks or compliance challenges and those types of things. So it's certainly created a whole bunch of new things. And we see all the regulations that are constantly, you know, being discussed and voted on. And, you know, is it this country, this state, this region? And that's why we talk with a lot of lawyers and legal professionals about how it impacts them. Because we're talking about AI: what is it, and how do you break it down? How do you even evaluate that from a legal compliance perspective? So that's kind of the angle specific to law firms. We bring in a lot of different people, because governance, I think, generally is best with a diverse group. So legal should be a part of every single organization's, you know, governance team. They should be weighing in and helping there. And that's where, if you understand how AI governance works generally, and some of the frameworks, which is what we teach in the program.
Then, you know, you can start to think about it from a legal perspective, which is where Michael comes in and really adds just a ton of value there in that certification. But yeah, we've been keynoting at different events, like the DC Bar Association; we've keynoted that a couple of years running. Our CEO is there, and he works with regulators globally. So, you know, we're constantly talking about regulation. Kevin Daisey (09:47) Okay. Michael Paik (09:56) And for lawyers in particular, I've been practicing long enough that I actually remember when email got rolled out for lawyers to use. We used to do paper distributions, mailing everything, hard copies. And back in maybe '98, '99, larger law firms started using email and then approving it for distribution. At that time, everybody was worried about confidentiality and what could go wrong. And I think we're at that stage now with AI. Bryan Ilg (09:56) Fun stuff. Michael Paik (10:23) And there is a lot to worry about. And that's kind of why we created this course for lawyers in particular, to kind of address their special needs and to help them get a better understanding when they're working with clients who are deploying AI. There's a role for lawyers, both at law firms and definitely in-house, in leading the charge on kind of managing these risks. Part of what we're doing is trying to get attorneys comfortable with AI and confident about opining on AI in their professional capacity, but also competent in using AI for their own work. And I know you have a diverse audience here, but whether you're a solo practitioner or a law firm partner or you're in-house, there are many uses of AI that, you know, remain to be explored beyond the headlines of miscited cases in litigation. That's really, as far as I'm concerned, actually kind of a no-brainer. You know, you've got to prove your stuff.
But the vast majority of use cases for AI for law firms, whether you're a solo, a law firm, or in-house, I think are in knowledge management and operations, in managing workflows. Kevin Daisey (11:31) Yeah. Michael Paik (11:43) Basically managing your business and creating kind of slack and efficiencies in your time so that you can bill. And if you're in-house, then you can improve the risk management systems and maybe just go home early. So, I appreciate the articles highlighting these bad cases of made-up citations in litigation. Kevin Daisey (11:45) Mm-hmm. Michael Paik (12:05) But I think we don't need to really belabor it here. I think most of your audience is well aware of the risks, and it's a professional obligation to manage your work product. And that, you can't skip by using AI. Kevin Daisey (12:08) Yeah, yeah. No, yeah, 100%. Yeah, like hallucinations and people using it for that kind of stuff. I think that's, yeah. But maybe that scares some people from utilizing it for other things like client communication and, you know, those kinds of efficiencies that you can create. You know, again, I do have a diverse audience, so I appreciate that. So say you have a PI firm listening: they have intake, they have, you know, trying to bring in potential leads, trying to turn them into consultations and then turn them into a client that they can set up on contingency. You know all the work that's involved there, right? Communication, follow-ups, client journey and processes, you know, steps throughout the case history and updating the client. There's so much application for AI in those areas. And there's a lot of companies taking advantage of that too. And then I see a lot of lawyers, I think I mentioned this before recording, lawyers creating their own AI products based on their own need. Or, you know, they say, hey, I see a hole here. I see something we could fill. And so I'm seeing all kinds of creative things that are happening there too.
But yeah, not just using it to, you know, do the casework for you. I think there's obviously systems out there and companies that are doing that, that are saying, you know, it's safe to use and so on. But I think, you know, people are still more concerned about that piece of it. But just using it in your day-to-day operations and processes, I feel like that's been more adopted. But I would assume there's still risks associated with that and things that they may not know. Bryan Ilg (13:36) Yeah. Michael Paik (13:46) Well, you know, fundamentally, and this is, you know, I think well understood, but not always remembered: this technology, particularly large language models, is stochastic. And basically that means there's a probability distribution associated with its output. So the easiest way that I've found to explain it is there's actually a cartoon called Adventure Time. Bryan Ilg (13:48) 100% right. Yeah. Michael Paik (14:10) There was a monster in this cartoon called a demon cat who had approximate knowledge of many things. And so this is what you're dealing with. You're dealing with a tool that has approximate knowledge of a lot of things, probably. So the trick is to corral that cat and to get the output that you want for your particular workflow. But that demon cat will raise its head when you least expect it. So keeping that in mind in your workflows is very helpful. You know, obviously lawyers are meticulous and we review our work, but, you know, there are a lot of time constraints, and it's easy to sometimes, you know, overlook things in the midst of a lot of workflow. But this demon cat, it's going to pop up. And that's the key to understanding, I think, you know, how AI works and then what you can do to manage it. And that's where these systems come in, right? Whether it's governance, risk management, and really guardrails around your work processes. Kevin Daisey (15:09) That's a good analogy. I like that. Yeah.
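[Editor's note: Michael's "demon cat" point, that an LLM samples each output from a probability distribution rather than looking up a fact, can be sketched in a few lines of Python. The vocabulary and weights below are invented for illustration; they are not from any real model.]

```python
import random

# Toy stand-in for what a language model computes at each step: a
# probability distribution over next tokens, which it then samples.
# The entries and weights here are made up for illustration only.
NEXT_TOKEN_PROBS = {
    "the plaintiff": 0.55,
    "the defendant": 0.30,
    "a third party": 0.10,
    "the demon cat": 0.05,  # low probability, but it still surfaces sometimes
}

def sample_next_token(rng: random.Random) -> str:
    """Draw one token from the distribution: the 'stochastic' step."""
    tokens = list(NEXT_TOKEN_PROBS)
    weights = list(NEXT_TOKEN_PROBS.values())
    return rng.choices(tokens, weights=weights, k=1)[0]

# Same "prompt", many runs: the likely answer dominates, but the rare
# output shows up often enough that review can never be skipped.
rng = random.Random(0)
outputs = [sample_next_token(rng) for _ in range(1000)]
print(outputs.count("the plaintiff"))
print("the demon cat" in outputs)
```

Run it without the fixed seed and the counts shift from run to run, which is exactly the point: identical inputs do not guarantee identical outputs.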
I feel like, you know, I know lawyers that kind of use it every day, all day, at least for personal and maybe some business. And then I know some that are just like completely: I don't even know how to use it. I don't use it. I don't want to. I'm scared of it. So, you know, I'd like to hope my audience is on the side of using it to their advantage. And I just go back to the fact that, you know, there's all these companies out there that offer products built on AI. And, you know, if I have three AI companies that I'm hiring for this and that, plugging in here and there, what's the risk to the firm? You know, the understanding that they should have of those products, and what they're doing, and what they could do, or the potential risk with communicating, say, to a client, or things that maybe appear harmless or controlled. But what should a law firm owner really be going: okay, how am I assessing these products? How do I make sure I understand what my risk might be? Michael Paik (16:09) Yeah, that's a really good question. I think, to get into the nitty gritty of it, whether you're in-house and you're doing vendor management, or you're buying software or services for your law firm, or you're subscribing to this stuff as a solo, there are certain questions that you need to ask. Let's start with the demon cat. So all of these products are wrapped on top of large language models, whether they're put up by OpenAI or Claude or otherwise. And all of those large language models have a version of this demon cat. And if you look at the underlying kind of agreements, because we're talking to attorneys here, if you look at the agreements that underlie those offerings by these large language model providers, they very clearly disclaim accuracy. So that's in writing, it's in the contract, and that's what your vendor is wrapping their services on.
Michael Paik (16:58) You know, since the model provider has disclaimed it, where does it go? Is it, you know, the service provider that's going to hold that risk, or are they going to most likely disclaim it again to you? So, you know, you have to read your agreement and understand the kind of parameters of what's being offered, and what their warranties are, and what you can expect in terms of accuracy. A lot of the legal kind of LLMs or legal kind of services take great pains to talk a lot about: we've reduced hallucinations, we will abide by confidentiality, there's no training on your data. And that's important. But still, that doesn't get rid of the demon cat. And so we need to think about what that means for your workflow, how you use it, whether you need to buy additional insurance to cover disasters that may come about because of your use of this particular service. So buyer beware. And so in terms of vendor management, the contracting process is important, as is your due diligence. And we advise both in-house teams and others on this. But it really starts on a meta level, not just at that one contract. So this is what Bryan was talking about with regard to governance. What are you trying to do with AI? And what is your kind of entity or law firm's approach to managing the risks and opportunities related to AI? And the recommendation we make is you need management systems around this, whether it's something from ISO, which is 42001, which is kind of the standard that's coming about, or the NIST AI Risk Management Framework, or, if you're active in the EU, you have much more onerous conditions because of the EU AI Act that you need to be looking at. It depends where you are, but you do need a risk management system. So that's what lies between kind of the governance, your top-level, board-level, owner-level decision on whether and how to use AI and how to manage it.
And then the managers need to come up with a management system. And part of that then is the vendor management step, when you bring in services from outside. Does that make sense? Kevin Daisey (19:03) Yeah. Yeah. That's awesome. I haven't heard ISO in a little while. I used to. Michael Paik (19:06) Yeah. Bryan Ilg (19:07) Yeah, it's back in full force. That's going to be the thing. But I was going to say, just on procurement specifically, as Michael pointed out, there's a lot of, you know, terms and conditions that need to be evaluated, and who owns the liability. You know, are we materially changing how the tool from the vendor is being used within our workflow? That's something that I think every business needs to be hyper aware of. Am I assuming liability risk for this HR decisioning because I've changed how that works? And there are class action lawsuits out on some of these topics, and you can see some of these risks play out in real time across different domains. So I'm not sure if they're settled, but they're active, and they don't look great from a brand perspective. Procurement is a weak spot, I think, for every organization, because they don't know the risks. And I think the risks live in the terms and conditions, and thinking through that kind of ownership of risk is a huge challenge for organizations to get their head around. And I think that's an opportunity for any lawyer. If they really get into the weeds and understand the emerging laws, they can probably add that as an item on the line card of services provided. So much, I think, is going to move into the digital realm for lawyers, and, you know, AI is really the catalyst event. And, you know, you get into finance: does a future state with code come into law? So, you know, just throwing that out there, but certainly areas to grow into new markets for every industry are being created. And it's just another iteration of, you know, the domain.
So it's an interesting time, but I'll throw that one out there. Michael Paik (20:46) That said, AI is just another new thing, like email. We still have product liability. We still have other laws that apply. And that's the domain of the attorneys. The point is that this new age of AI impacts all of this, but it's eminently understandable. It's just that you need to take a little time to understand the technology, how it impacts you in your own work, your organization, and then your clients. Michael Paik (21:12) And it impacts them differently. And maybe we can just briefly go through them, because I'd like to give as much value as possible on this call. Going back to what you said about lawyers making their own workflows, I mentioned that I did that as well. I made no-code apps for myself to manage risk management and compliance. Kevin Daisey (21:19) Sure. Yeah, go ahead. I'm being billed for your time. I might as well use it. Michael Paik (21:34) And that's very helpful. I mean, obviously, the things that you need to be aware of: don't upload client information, make sure that things are anonymized. You have to manage it. You have to review the app. But all of that stuff's obvious. But in addition, there's this kind of probabilistic aspect to it. First of all, it's definitely not like a Google search, right? Where you look for information and you get it via this web crawl that's indexed. What you're really looking at is a large zip file of the internet from about six months ago that's been kind of trained to talk to you nicely through reinforcement learning with human feedback. And this chat function kind of lulls you into believing that you're working with something that has actual intelligence. It's not. It's a token tumbler. It's a random process that has been trained with lots and lots of money and a lot of math so that it sounds right. But the output needs to be reviewed.
So when you're making your own apps, if you understand these limitations and you use it for things like transcribing intake interviews, maybe organizing your office expenses, maybe, you know, doing an analysis of your time sheets over the past month or the quarter, there are things that you should be aware of with regard to the limitations of LLMs when you're doing this. It ain't Excel, right? So there's an aspect to this that should be well understood according to the use case. And then again, if you're making an application for others as a service, you are in a good place if you know the domain, the workflow, very well. That's the key, right? That's the context. So what's happening now, as you've probably seen in the media, is that it's no longer necessary to be a coder. Natural language is sufficient. And you can do this graphically with kind of Lego boxes. But the prompting and the context for creating these applications is key. And the benefit of having the domain expertise is that you know your stuff. And so you can see if something's gone awry. And that kind of judgment is something that some 25-year-old coder is not going to provide for you without that domain expertise and oversight. So bringing it all the way back around to the management systems, it's really applying that domain expertise and oversight to the risks of using AI so that you can get Kevin Daisey (23:40) Mm. Hmm. Michael Paik (23:59) to your objective, eyes wide open. Kevin Daisey (24:01) I've never heard it explained that way. So I appreciate that. Yeah, I think a lot of people at this point, especially younger people, but even people around me, employees, my business partner, most people truly kind of believe or, you know, take it at its word when it spits out a result, you know? You know, I asked ChatGPT and this is what it said. So, you know. And I've found myself in that. Yeah, I've found myself in that. Like, let Michael Paik (24:22) It's very believable.
Bryan Ilg (24:24) Yeah. Kevin Daisey (24:26) me look this up. Here. Here's the answer. You don't say, hey, I think this might be the answer. You say, here is the answer. Bryan Ilg (24:27) My favorite is how... yeah, my favorite is how good it makes me feel about myself with my crazy ideas. Like, that is super smart. You're like, you're the smartest. So, yeah, definitely like that, but... Kevin Daisey (24:41) Well, I could say, you know, what's the number one marketing agency for SEO in the world? And it says my company. Well, it has a history of me talking about my company all in my GPT. So it's obviously going to say mine. Bryan Ilg (24:47) Yeah. Sometimes it's true, Kevin. Michael Paik (24:51) Well... Bryan Ilg (24:57) Yeah, yeah, that's the box. Michael Paik (24:57) So for the attorneys in particular, it's very important that you go in and, first of all, get the paid version, right? So this stuff's not being trained on. Notwithstanding the case in New York that still has this kind of up in the air with regard to OpenAI, and we have the cases on the West Coast on copyright and so on, all of this will eventually settle. But get the paid version. That's a first step. And then you need to customize your instructions. You have to tell it who you are, what you expect, just like you have a new hire. So when you're onboarding a new hire, you let them know what your expectations are. And this includes things like, you know, IRAC: issue, rule, application, conclusion. This is kind of the way I think about things, and this is how I'd like your output. And depending on your practice, you'll have different kinds of constraints and parameters that you want to put on the AI. Kevin Daisey (25:23) 100%. Michael Paik (25:49) But you can do that in custom instructions. And that's not programming. That's just kind of letting it know who you are. And beyond that, you can start making workflows for yourself.
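[Editor's note: mechanically, "custom instructions" amount to a standing instruction that rides along with every new conversation, much like the onboarding brief Michael describes. The helper and wording below are a hypothetical sketch in plain Python, not any vendor's actual API.]

```python
# Hypothetical standing instructions, phrased the way an attorney might
# onboard a new hire. The text and function names are illustrative.
CUSTOM_INSTRUCTIONS = (
    "You assist a U.S. litigation attorney. Structure legal analysis "
    "using IRAC: Issue, Rule, Application, Conclusion. Never invent "
    "citations; flag anything uncertain for human review. Do not "
    "include client-identifying information in any output."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the standing instructions to each new conversation."""
    return [
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Summarize this intake interview transcript.")
print(messages[0]["role"])  # the instructions ride along with every chat
```

The point is that the instructions are set once and applied everywhere, so each individual prompt no longer has to restate who you are or how you want the output framed.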
So if we start with the individual, you make custom GPTs for yourself that kind of hard-wire additional instructions for a particular workflow, whether it's client intake or a particular type of monthly newsletter that you send your clients. You kind of set up a workflow for that via a custom GPT in OpenAI, for example. You can do it similarly with other applications. My favorite easy one for this is every time you take a CLE, take the transcript, upload it along with the slides into either a custom GPT or, for this one, you can use something that's free like NotebookLM. And you have a little mini paralegal that has this material that you can chat with. And in the case of NotebookLM, it's good at creating little podcasts for you regarding that subject matter. So you took a CLE on space law, and then you want to get kind of a resource for this. It's like a little mini librarian. But those kinds of personal uses require customization and kind of awareness of what's possible. And then that kind of builds out further for the law firm, and then for in-house teams at the client. Kevin Daisey (26:50) That's pretty cool. Michael Paik (27:03) But basically you're building little management systems for a particular application, whether it's something as easy as a CLE or a monthly newsletter. That management system starts with that custom instruction and then awareness of dealing with that demon cat. Kevin Daisey (27:19) I love that. And I know for, like, my company, we have a paid version of ChatGPT for every employee. So every single person that has it is encouraged to use it, unless they're not going to for some reason. But yeah, everyone has it, and we encourage them to try to use it within their position to find more performance or efficiencies. And so I know that we do that here. I don't know. Michael Paik (27:39) So you're kind of leaning forward into the technology. But the follow... Kevin Daisey (27:42) Absolutely. Yeah. I mean, so we're trying to grow.
And so it's, yeah: if you manage this department or you're using ChatGPT, how can you leverage it? How can we become more efficient? And we want everyone on the team to be thinking that way. If they're passionate about what they're doing, they should be doing that. Michael Paik (27:58) So the governance and management steps would be: do you have a policy in terms of how it can be used? Do you have guidance on what not to use it for, what not to do with it? Don't upload certain types of documents. So that's the kind of governance and management system that we're talking about. If you have nothing, if you have no governance or management system, and say you don't even say anything about AI usage at your company... Kevin Daisey (28:11) To some degree. Michael Paik (28:26) people are still using AI in the course of their employment. And you're still open to these risks because you haven't addressed them. So we call it shadow AI. But addressing AI usage, first of all, and then creating a rule for the organization, and then training people on it, and then whatever in your particular organization you want to do in terms of boundary conditions, that's basically what we're talking about. Kevin Daisey (28:28) What? Michael Paik (28:51) And then when you start to reach out in terms of vendors and third parties, you need to have a system to manage that as well. Bryan Ilg (28:57) Yeah. Kevin Daisey (28:57) That's a good point. I mean, I know we do. We've encouraged it. We have some guidelines that we put in place, and then we do, like, team training. So we have an all-hands meeting this week, we have guest speakers on AI, and I'm giving a talk on AI search and stuff like that. But I'll say we are doing some of that ourselves. But yeah, any law firm listening: what is your team doing? Are they using it? If you don't know they're using it, they're probably using it while they're in the office on your machines. I guarantee it. So yeah, is there a policy in place?
Do you even know who's using what and when? And you can still be open to that. That's very interesting. So I want to tie this back around to Babl. And, you know, if I was a lawyer right now and I'm like, let me go check out Babl, what's that experience like? What's the process like? What are they getting? Maybe you kind of hit that real quick. Bryan Ilg (29:48) Yeah. So, for our education, so we have really three main product categories. We have AI education, we have AI advisory, and then AI audit. So for the education, obviously we talked a lot about the governance certificate for legal professionals. It's a four-course certificate program which, you know, breaks down just generally how does AI impact business? What are the unique risks that are presented in business? How should we be approaching AI investment? There's a value framework in there for bringing projects into existence. We talk about AI governance frameworks, the NIST AI RMF, ISO 42001, and the EU AI Act, and just kind of how those all work. And as Michael pointed out, AI management systems are incredibly important for not just, like, getting it off the ground, but for continuous improvement iterations as the technology grows and becomes more impactful and more extreme. And then the fourth is the domain-specific course for legal professionals. So all of that's capped off with a capstone project and then we issue the certification. So that really helps round out education. Kind of our flagship product there: we have a whole AI auditing certificate program, as Michael mentioned; that's how he found Babl a while ago. And that is, you know, all authored by our CEO, who is a PhD astrophysicist and leads our research and all that fun stuff, and he keynotes in the legal space. So he talks a lot more about risk, the assurance process, ML and AI generally.
I think Michael said that was the hardest one; getting all the mathematical concepts sorted out. But yeah, we have all that. And then we're helping organizations all the time with these problems through direct advisory. And then we also have our own AI system model card kind of audit, where below the global AI management system, for each individual AI system, we can evaluate and inspect the model to make sure that it's achieving what it's supposed to be doing within that probabilistic range, what makes sense for the algorithm, right? For the system. So if you work with Babl, you're generally working directly with us. We do scoping; every AI has a different context and we bring that into consideration. But it kind of flows into whether we're providing assurance, audit, or advisory, or you're learning yourself and putting yourself in a position to, you know, find your lane with AI into the future as it's kind of uncertain. But these are core foundational concepts. The technology is kind of interchangeable in how we approach things. We started with those foundational principles, and, you know, these are just good skills and knowledge to possess as the technology grows and changes every week. You know, we've been doing this work for years now. So that's kind of the Babl hard pitch. If you are interested in the education, use the code LAW30 at babl.ai and you're able to get a discount across any of the educational products that we have, including the certificate for legal professionals. Kevin Daisey (32:40) Yeah. Michael Paik (32:41) Yeah. If I could just add one thing about the course for lawyers in particular, you know, we spend a lot of time on management systems for in-house and then also how lawyers can manage their own workflows for personal use and professional use.
But, you know, for this audience, I'd like to reiterate for law firms, whether you're solo, or you're a partner, or you're working with other partners in a law firm context, Kevin Daisey (32:54) Love it. Michael Paik (33:19) the finder, minder, grinder functions. They'll get that very clearly. But the origination of clients can be done much more easily and effectively using AI, right? Just getting intelligence on potential clients, creating a client file, remembering stuff, simply using Notebook LM for non-confidential information, using Perplexity with EDGAR, Kevin Daisey (33:23) Yeah. Michael Paik (33:42) so that you can understand public companies and filings better, and all the information that comes with that: material contracts, industry analysis. All of that stuff is a lot easier now with AI. But for the grinder, so the operations functions related to the firm: every minute that lawyers are spending on non-billable matters is money out of your pocket. So if you can streamline your operations and get more... Kevin Daisey (34:02) Mm-hmm. Michael Paik (34:06) and better out of the systems that you have, whether that's you yourself doing it or you're working with paralegals and an office manager, enable them to do more and to do it better and more effectively in the same time. I'm not talking about downsizing. I'm just talking about allowing them to do better and be happier. And maybe that comes into compensation as well, because that makes a difference. But for the lawyers who are also billing, all that bullshit time that you're spending on making PowerPoints, presentations for clients, reports, stuff for some presentation at the local bar association, you can whip that stuff up in five minutes, which gives you another 55 minutes to actually make some money. So this is important. And then on practice, this is a little sensitive: for litigation, we've already talked about what to be careful of.
But for other practices, there is a lot of work that can be made more efficient. And the question is how to address that, because many law firms are built on a leveraged model. So efficiency is not necessarily the core value of how these business models have developed over time. The world has changed and it's gotten a lot more competitive. Your clients are demanding efficiencies. You can now offer them efficiencies with flat fees or alternative fee arrangements by deploying better technologies in your firm and improving your workflow. So I think that's it from a law firm and provider perspective. Understanding the in-house implications is actually very important also for service-provider lawyers, because this is how the market has changed. I think if they're not already feeling it, they will feel the pressure to justify billable hours and their work process and why they're not using AI. And hallucinations and confidentiality concerns, that will only get you so far. You need to address this head-on. And in order to do that, you will have to understand it better so that you can at least respond to these questions from your clients. Kevin Daisey (36:04) Oh yeah. Great points. A hundred percent agree. And it sounds like, yeah, if you're a lawyer listening, you're using AI, you know, vendors with AI, hopefully you're using all those things. Getting educated and understanding what you have, what your risks are, putting things in place, just like they just exposed me potentially. You know, we give ChatGPT to every employee and we give them a little bit of, eh, you should probably use it to do this. We don't really have a system in place to manage that and to grow that and to expand on it. So, yeah, it's all very interesting stuff and I appreciate you guys coming on to share. Really cool company, you're doing great things, and I appreciate you sharing with us on here today. Go to babl.ai. It's B-A-B-L dot A-I, and coupon code, what, 30?
Bryan Ilg (36:52) LAW30, Kevin Daisey (36:52) Lord... LAW30. Bryan Ilg (36:54) L-A-W 30, all caps. Kevin Daisey (36:57) Excellent. Michael Paik (36:57) 30 in numbers. Bryan Ilg (36:59) Yeah. Three zero. There we go. Yeah. If one thing doesn't work, try another, right? That's the lawyer way. Trial and error. Kevin Daisey (37:01) Good point. LAW30. I think they'll figure it out. These lawyers are smart. They got this. Yeah. So guys, anything else you'd like to add before we wrap up, anything else you want to share with us? Michael Paik (37:18) Okay, just on the confidence part: all lawyers, by definition, by training, we're all wordsmiths, right? This is how the AIs, the large language models, and I'm not talking about the vision models and diffusion models, the large language models work by word association. You are very much like the large language model. And then, you know, the interaction should be natural and a good fit for the way we have been trained. But we just need to keep our eyes wide open and treat the models and the tools as very, very capable and smart paralegals or associates that are very bright but have no clue. And every morning when they come to work, they forget everything that they learned yesterday. So you need to be aware of those limitations. But there's a lot you can get done with these tools to save you time, make you money, and maybe just get you home. Kevin Daisey (38:11) I like that. Well, I appreciate it. Bryan, anything else you got? Bryan Ilg (38:12) They're good. I was just going to say I appreciate you having us on. We'd love to, you know, if there's stuff that we talked about that you want to talk more about, we're always open for a conversation. You can find our contact information there too. And happy to talk about things. We do offer partnership programs. I'm a channel guy. So if there's competencies you want to bring in, you know, we're happy to discuss. We do have legal
partnerships with data privacy folks or things along those lines to help bring in some of our services. So, if that's interesting at all, we'd love to talk more and help you grow your business. I mean, that's what we're here to do. Kevin Daisey (38:54) I appreciate it. I love it. That's what the podcast is all about: growing your businesses. And I appreciate you guys coming on the show and sharing all this today. So check them out, also on LinkedIn. I know these guys are both on LinkedIn. I'm always on LinkedIn. If you want a direct connection, let me know; if not, look them up and connect with them there too. So I encourage that. Well, everyone, thank you so much for tuning into another show. As always, I appreciate your loyalty to the show and Bryan Ilg (38:56) Yeah. Kevin Daisey (39:17) Guys, thank you so much. We'll talk to you soon.
About The Host: Kevin Daisey
Kevin Daisey is both the co-founder and Chief Marketing Officer of Array Digital, with a legacy in the digital marketplace spanning over two decades. Kevin’s extensive experience in website design and digital marketing makes him a valuable strategic partner for law firms. He doesn’t just create digital presences; he develops online growth strategies that help law firms establish and lead in their respective fields.
The Managing Partners Newsletter
If you like The Managing Partners Podcast then you’ll love The Managing Partners Newsletter.
Every week we’ll email you the latest podcast episodes, legal and business books we recommend, some news, and something to make you smile.