EP 241 Effective Use of AI and Decision Science in Mediation with Robert Bergman

In Episode 241, Robert Bergman of NextLevelMediation.com joined Mac Pierre-Louis of MacPierreLouis.com and Thomas G. Giglione of WeAgreeMediators.com to discuss how Next Level Mediation can help neutrals use AI to guide parties in conflict toward the best solutions for their disputes. As a leader in the field of AI and conflict resolution, with 60 years of experience, Robert shares answers to these important questions:
What are some benefits of using AI to help mediators and parties in resolving disputes?
What are some challenges or risks of using AI in mediation? How can they be addressed or avoided?
What is the paradox of AI regulation? Why is it hard to make rules for AI?
What are some examples of how AI can be used for good or evil? How can we make sure that AI is used ethically and responsibly?
How has AI changed the role of mediators and neutrals in conflict resolution?
Do you think AI can replace human mediators and neutrals completely? Why or why not? What are some skills or qualities that human mediators and neutrals have that AI does not?

[Embedded video: Episode 241, Effective Use of AI and Decision Science in Mediation]

As a bonus, Robert recorded his thoughts on his article What is the Post Babel Era, discussing the role of social media and AI in escalating modern conflicts: social media enables reputation damage and bias reinforcement, while AI exacerbates the situation through the creation of deepfakes and the imitation of people's online presence. A full readable transcript of this video appears below the links at the bottom.

[Embedded video: The Weaponization of Social Media and its Impact on Conflict]

Relevant Links:

Implementing AI in Mediation Article: https://nextlevelmediation.com/publications/implementing-ai

Paradox of AI Regulation Article: https://nextlevelmediation.com/publications/paradox-of-ai-regulation

Will AI Replace Mediators and Neutrals Article: https://nextlevelmediation.com/publications/will-ai-replace-mediators-and-neutrals

ICODR: International Council for Online Dispute Resolution

YouTube video: Member Meeting with Colin Rule, February 2023: Bob Bergman of NextLevelMediation.com

YouTube video: ICODR Member Meeting with Colin Rule, July 2023: Support Bot Design with Bob Bergman of NextLevelMediation


Some Topics Discussed in Effective Use of AI and Decision Science in Mediation:

Effective Use of AI in Mediation

The use of AI in mediation can enhance efficiency by processing and analyzing large amounts of data quickly, identifying patterns and solutions that may not be obvious to humans. It can assist mediators in generating questionnaires, understanding people's priorities, and suggesting creative solutions to disputes. Challenges include ethical concerns such as confidentiality and privacy, as well as emotional complexity and dependency on technology.

AI’s Impact on Mediation and Neutrals

AI can make processes more efficient and improve job performance, but it cannot replace human qualities like empathy and ethical judgment. The historical technophobia in the legal profession has led to a lag in adopting technology, but platforms like Next Level Mediation offer support and training for integration. Integrating AI tools requires taking time to understand them and investing effort in learning how to use them effectively.

Use of the NextLevelMediation.com Platform

The platform utilizes large language models for generating questionnaires, analyzing answers, understanding client priorities, and facilitating online negotiation. It offers free classes on system usage and decision science with a 30-day trial period for users to explore its features. Users can access the platform globally but may encounter limitations in language support for navigation menus.
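The platform does not publish its implementation, but the general pattern of using a large language model to draft an intake questionnaire can be sketched as follows. This is a minimal illustrative sketch assuming an OpenAI-style chat API; the model name, prompt wording, and helper function are hypothetical and are not Next Level Mediation's actual code.

```python
# Illustrative sketch only: drafting a mediation intake questionnaire with an LLM.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# this is NOT Next Level Mediation's actual implementation.
from openai import OpenAI

client = OpenAI()

def draft_questionnaire(dispute_summary: str, num_questions: int = 8) -> str:
    """Ask the model for neutral, open-ended intake questions for both parties."""
    prompt = (
        "You are assisting a mediator. Based on the dispute summary below, "
        f"draft {num_questions} neutral, open-ended questions that help each "
        "party clarify their interests, priorities, and acceptable outcomes.\n\n"
        f"Dispute summary: {dispute_summary}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_questionnaire("Two business partners dispute how to divide "
                              "assets after dissolving their company."))
```

In practice a mediator would review and edit the generated questions before sending them to the parties, keeping a human in the loop.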

Paradoxes and Regulation of AI

The paradoxes surrounding the regulation of AI involve balancing over-regulation against innovation while fostering global agreement amid political deglobalization. Elon Musk's concerns about open-source development versus for-profit involvement by companies like Microsoft raise questions about responsible use and ethics, particularly for national security applications.

Implementing AI in Mediation: Benefits and Risks

Benefits include increased efficiency, reduced time and cost of mediation, the ability to handle more disputes effectively, and skills augmentation for mediators. Risks involve ethical concerns related to confidentiality, privacy, and misuse of sensitive data; the system is designed with a human always in the loop to address these risks.

Considerations for Jurisdictional Information

The Next Level Mediation platform does not handle jurisdictional legal information due to the complexity of varying laws across different jurisdictions. It incorporates risk analysis to help users make rational decisions based on their priorities, settlement options, and litigation risks.
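The episode does not walk through the arithmetic, but litigation risk analysis of this kind is often framed as an expected-value comparison between a settlement offer and the likely outcome of going to trial. The figures and function below are hypothetical, purely to illustrate the idea rather than to describe the platform's actual model.

```python
# Hypothetical expected-value comparison between settling and litigating.
# All figures are made up for illustration; a real analysis would use the
# parties' own estimates of probabilities, awards, and costs.

def expected_litigation_value(p_win: float, award_if_win: float,
                              legal_costs: float) -> float:
    """Expected net recovery from going to trial."""
    return p_win * award_if_win - legal_costs

settlement_offer = 60_000   # amount on the table today
p_win = 0.55                # estimated chance of prevailing at trial
award_if_win = 150_000      # estimated judgment if the party wins
legal_costs = 40_000        # estimated cost of litigating to judgment

ev_trial = expected_litigation_value(p_win, award_if_win, legal_costs)
print(f"Expected value of litigating: ${ev_trial:,.0f}")
print(f"Settlement offer:             ${settlement_offer:,.0f}")
print("Rational choice on expected value alone:",
      "settle" if settlement_offer >= ev_trial else "litigate")
```

A fuller analysis would also weight non-monetary priorities, such as time, relationships, and confidentiality, identified during the intake questionnaire.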


TRANSCRIPT of The Weaponization of Social Media and its Impact on Conflict

00:00:
Hi Robert. So you wrote an article a couple of years back on your website nextlevelmediation.com, and it's titled Dispute Resolution in the Post Babel Era. It's a pretty interesting take on what's going on in our society, especially because of social media, how people are interconnecting, and how it produces conflict. So exactly how has social media weaponized the Internet, and to what degree does that contribute to conflict in our modern time?

00:36:
Well, as you might recall, Facebook in 2009 invented the concept of likes, and Twitter, now called X, invented retweets. And so what that allowed people to do was point to anyone on the internet and essentially ruin their reputation by making comments about things they've said, things they're doing, and so forth. And unfortunately there was no recourse for anyone, so that started this kind of conflict between people. The other problem is that it's kind of made people want confirmation bias, meaning that when they post things they want to see the number of likes. If they don't get the number of likes, they get depressed, and this is one of the bad side effects of social media, which is causing essentially more conflict. I think the other problem is that it's reinforced the concept of multiple identities, so people no longer have one identity. They have work identities, religious identities. They have half a dozen gender identities, born or found. They have virtual identities, political identities, avatars. And now people are trying to brand themselves. Part of this problem is that before, people just used their own sense of critical thinking to think about a dispute. Now they ask a social media influencer who probably has no knowledge of what the real dispute is about. And this unfortunately creates more tension and makes conflicts a lot more difficult to resolve.

02:39:
Yeah. Thomas, do you have anything to add on that?

02:41:
How does AI make that situation worse?

02:46:
OK, so what AI does in this case is allow people, for instance, to create things like deepfakes, which impersonate other people and things they may or may not have said. It also allows AI to mimic people's emails, their websites, and their brands, and that can lead to more conflict as well. So that's kind of the bad side of AI and how it can lead not only to local and national conflicts, but to worldwide conflicts. And I think that's a real risk.

Lawyer, mediator, and arbitrator practicing family law, passionate about helping people resolve their conflicts and disputes through mediation. MacPierreLouis.com