Interview with Robert Bergman, CEO of NextLevel Mediation

By Leslie King O’Neal



What is NextLevel Mediation?
Artificial Intelligence (“AI”) dominates the news. Almost daily, new AI tools and applications are announced. But the ADR community wonders how AI tools can benefit mediators and lawyers in dispute resolution. In this interview, Robert Bergman, CEO of NextLevel Mediation (“NLM”), explains how AI tools and decision science can assist lawyers, mediators, and their clients with decision making and risk analysis, helping them resolve disputes more efficiently and cost-effectively.[i]
Robert Bergman (“RB”): NLM is a cloud-based software system composed of a set of integrated tools for various stages of mediation and arbitration. These include AI tools to assist in client priority analysis, designing questionnaires, analyzing questionnaire results, document research, online negotiation, and a Risk Analysis Tool to create a visual representation of potential decision outcomes.
Who Can Access the AI Tools?
RB: Only the neutral can access the AI, decision science, and game-theoretic tools, and the neutral ensures the tools are used responsibly. It is a browser-based system that works on any device (laptop, mobile phone, etc.). All common document types (Excel, PDF, Word) can be loaded into the system for AI document research and analysis. Lawyers and mediators can use these tools to help clients understand and manage risk and to assess the probability and severity of potential risks. This helps clients make informed decisions regarding settlement.
How Do the Tools Improve Mediation?
RB: Each discipline contributes to NLM by enhancing decision-making, modeling strategic interactions, and augmenting human mediators’ skills.
Decision Science[ii] uses quantitative techniques to inform individual and group decision-making. It encompasses decision analysis, risk analysis, cost-benefit analysis, simulation modeling, and more. Its goal is to provide a framework for understanding personal priorities and improving decision-making within groups and organizations. The underlying assumption is that dispute resolution is a multi-criteria decision between two or more parties with different or opposing agendas. NLM applies decision science through methodologies like the Analytic Hierarchy Process (AHP), which breaks a complex decision into smaller parts and lets parties make pairwise comparisons, giving them a clear rationale for their decision. This is particularly useful in resolving disputes involving multiple criteria and parties.
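The pairwise-comparison step at the heart of AHP can be sketched briefly. The following is a minimal illustration, not NLM’s implementation: the criteria names and comparison values are hypothetical, and it uses the geometric-mean method (one standard way to derive AHP priority weights from a comparison matrix).

```python
import math

def ahp_weights(matrix):
    """Derive priority weights from an AHP pairwise comparison matrix
    using the row geometric-mean method, then normalize to sum to 1."""
    n = len(matrix)
    # The geometric mean of each row captures that criterion's
    # relative strength across all pairwise comparisons.
    geo_means = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Hypothetical dispute criteria: cost, speed of resolution, confidentiality.
# matrix[i][j] says how much more important criterion i is than j on the
# usual 1-9 scale; the diagonal is 1 and matrix[j][i] = 1 / matrix[i][j].
comparisons = [
    [1,   3,   5],    # cost
    [1/3, 1,   2],    # speed of resolution
    [1/5, 1/2, 1],    # confidentiality
]

weights = ahp_weights(comparisons)
print([round(w, 3) for w in weights])  # largest weight goes to "cost"
```

Breaking the decision into such comparisons is what gives parties the “clear rationale” mentioned above: each weight traces back to explicit judgments rather than a gut ranking.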
Game Theory[iii] is a theoretical framework for analyzing social situations among competing players. It models strategic interactions in which each participant’s outcome depends on the actions of all involved. NLM applies game theory through research in fair division, using the Adjusted Winner procedure developed by Steven Brams and Alan Taylor in the mid-1990s. It is particularly useful for the fair division of assets or issues between two parties.
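The core logic of the Adjusted Winner procedure can be sketched as follows. This is a simplified illustration, not NLM’s implementation: the item names and point assignments are hypothetical, and it assumes both parties give every item a positive score out of a 100-point budget.

```python
def adjusted_winner(a_points, b_points):
    """Sketch of the Brams-Taylor Adjusted Winner procedure.

    Each party distributes 100 points over the same items. Returns
    {item: (fraction_to_A, fraction_to_B)}; at most one item ends up
    split between the two parties."""
    alloc = {}
    totals = [0.0, 0.0]          # running point totals for A and B
    pts = (a_points, b_points)
    # Step 1: each item initially goes to whoever values it more.
    for item in a_points:
        side = 0 if a_points[item] >= b_points[item] else 1
        alloc[item] = [0.0, 0.0]
        alloc[item][side] = 1.0
        totals[side] += pts[side][item]
    # Step 2: the party currently ahead transfers value to the other,
    # giving up items in increasing ratio of its own valuation to the
    # other side's, so it sacrifices as little perceived value as possible.
    w = 0 if totals[0] >= totals[1] else 1
    l = 1 - w
    movable = sorted((i for i in alloc if alloc[i][w] == 1.0),
                     key=lambda i: pts[w][i] / pts[l][i])
    for item in movable:
        gap = totals[w] - totals[l]
        if gap <= 0:
            break
        # A fraction of the last transferred item may be split to
        # equalize the two totals exactly.
        frac = min(1.0, gap / (pts[w][item] + pts[l][item]))
        alloc[item][w] -= frac
        alloc[item][l] += frac
        totals[w] -= frac * pts[w][item]
        totals[l] += frac * pts[l][item]
    return {i: tuple(v) for i, v in alloc.items()}

split = adjusted_winner(
    {"house": 60, "pension": 30, "car": 10},   # party A's 100 points
    {"house": 40, "pension": 50, "car": 10},   # party B's 100 points
)
print(split)  # the car moves to B, leaving both parties at 60 points
```

The procedure’s appeal in mediation is that the resulting division is envy-free and equitable in terms of each party’s own stated priorities, not an outsider’s valuation.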
Artificial Intelligence (AI)[iv] involves the development of computer systems that can perform tasks typically requiring human intelligence, such as visual perception, speech recognition, decision-making, and language translation. NLM integrates AI to augment mediators’ skills: AI helps develop questionnaires, summarizes results, and suggests negotiation ideas based on the disputants’ prioritized objectives. Importantly, AI supports the process without taking direct control, ensuring human mediators retain authority over the proceedings. This integration helps maintain ethical standards, privacy, and confidentiality.
How Did NLM Develop Its Decision Science, Game Theory and AI Tools?
RB: NLM’s team consists of decision scientists, computer scientists, and an attorney experienced in dispute resolution. The NLM software tools were developed over a seven-year project driven by real-world challenges in dispute resolution, negotiation, and legal practice. The project focused on building a distributed decision-support system integrating decision science methods, game theory, and multi-criteria decision-making.
How Does NLM Keep Documents and Analytics Secure and Confidential?
RB: NLM ensures the security and confidentiality of documents and analytics through several measures:
- Encryption: All information and data are encrypted both at rest and in transit. Data stored on Microsoft Azure servers is encrypted with AES-256, preventing unauthorized access, and data transmitted across networks is secured with HTTPS transport-level encryption (TLS).
- Role-Based Access Control (RBAC): Access to sensitive data is restricted based on the principle of least privilege. Users are granted access only to the information necessary for their role. For example, a client or attorney cannot access the information of the opposing party, while the mediator can access both sides.
- Attack Countermeasures: To prevent unauthorized access, NLM requires strong passwords, uses security access tokens with finite lifetimes, and limits login attempts to mitigate the risk of brute force attacks.
- Data Redundancy and Backup: Using Microsoft Azure cloud services, data is redundantly stored and backed up to ensure it remains available and secure even if a hardware failure occurs. This includes locally redundant storage (LRS) and zone-redundant storage (ZRS).
- Privacy- and Security-Focused Development: The software was developed to high ethical standards[v] to protect the parties’ privacy and confidentiality. The AI technology only augments mediator skills and is not directly accessible to the parties.
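The role-based access rule described above, in which opposing parties cannot see each other’s information while the mediator sees both sides, can be sketched as a least-privilege check. This is a hypothetical illustration, not NLM’s code; the role names and scope labels are invented for the example.

```python
# Each role is granted the minimum scope it needs (least privilege).
# A "scope" here labels which party's materials a document belongs to.
ROLE_SCOPES = {
    "mediator": {"party_a", "party_b"},      # the neutral sees both sides
    "party_a_counsel": {"party_a"},          # each side sees only its own
    "party_b_counsel": {"party_b"},
}

def can_read(role: str, document_scope: str) -> bool:
    """Allow access only if the document's scope is within the role's grant.
    Unknown roles get an empty scope set, so they are denied by default."""
    return document_scope in ROLE_SCOPES.get(role, set())

print(can_read("mediator", "party_b"))          # the neutral may read
print(can_read("party_a_counsel", "party_b"))   # opposing side may not
```

Denying by default for unknown roles is the key design choice: access must be granted explicitly, never inferred.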
How Does NLM Protect Confidentiality with a Large Language Model and Open AI?
RB: NLM’s relationship agreements with OpenAI and Microsoft ensure data confidentiality through a combination of industry-leading security practices and clear data usage policies.[v] Here’s how confidentiality is maintained:
- User Data Not Retained: OpenAI does not retain user-specific data unless the user explicitly enables retention. For NLM, this ensures that sensitive data shared while using the tools is not stored or used after the session.
- Data Encryption: All data transmitted between NLM and OpenAI’s servers is encrypted using HTTPS/TLS, ensuring secure data transfer and preventing interception by third parties.
- Access Control: OpenAI’s systems use strict access-control mechanisms; only authorized personnel have access to the infrastructure, preventing unauthorized access to mediation data.
- User-Specific Data Handling: OpenAI does not use personal data shared during interactions for model training unless the user explicitly permits it. For NLM, this preserves the confidentiality of negotiation details, decisions, and client interactions.
- ISO Compliance and Security Audits: OpenAI follows industry-standard security practices and is pursuing ISO 27001 certification, ensuring robust security controls are in place to protect data privacy and confidentiality.
By combining strong encryption, access control, and strict data usage policies, these agreements help ensure that sensitive data on mediation platforms like NLM remains confidential.
Takeaways
AI, game theory and decision science tools can augment mediators’ and lawyers’ skills and assist clients in assessing risk and making settlement decisions. NLM’s platform provides tools to assist in improving ADR outcomes. Lawyers and mediators, as well as their clients, can benefit from learning to use these types of tools (and others) in their practices.
[i] DISCLAIMER: This post provides information about “NextLevel Mediation” https://nextlevelmediation.com/ but it is not an endorsement of this or any other product. The authors urge readers to research any tools independently before using them in their practices. Future posts will discuss other technology tools available for ADR practitioners.
[ii] For more information about decision science, see Breakthroughs in Decision Science and Risk Analysis (Louis Cox, ed.) (Wiley 2015).
[iii] For more information about game theory, see An Introduction to Game Theory, Martin J. Osborne (Oxford Univ. Press 2004).
[iv] For more information about artificial intelligence, see Artificial Intelligence: A Modern Approach (4th ed.), Stuart Russell and Peter Norvig (Pearson 2021).
[v] OpenAI offers enterprise-grade plans where enhanced data privacy and security measures are provided. In these cases, OpenAI does not retain or use data for training its models. Organizations can sign specific data-processing agreements that meet corporate or regulatory requirements, ensuring data is handled in compliance with privacy laws.