
Navigating the Legal Landscape of AI and ChatGPT – AI Time Journal

July 19, 2023

Rimon Partner John Isaza, Esq., FAI, was interviewed by AI Time Journal to share his knowledge on AI in the legal field. The full article, written by Flor Laorga, is below.

We thank John Isaza, Esq., FAI, partner at Rimon Law, who shared his story and valuable insights on various aspects, including the evolving legal landscape, the delicate balance between privacy protection and innovation, and the distinct legal implications that arise when integrating AI tools.

John provides valuable perspectives on the challenges and considerations related to AI technologies like ChatGPT, emphasizing the significance of data governance, ethical usage, and compliance with privacy laws. Furthermore, he shares his firsthand experience in launching Virgo, a cloud-based software solution designed to address the specific needs of privacy and information governance.

Exploring the Intersection of Law, Privacy, and Innovation: Insights and Strategies

What sparked your interest in the intersection of law and privacy/information governance?

I have always been drawn to uncharted territories to keep things interesting. I got my start in IG back in 2001, when records and information management was primarily focused on paper records practices. But I had a boss who encouraged me to focus on electronic records and data, which he saw as the wave of the future. So I decided to become an expert in all things electronic. This led me to various leadership positions, including Chair of the Information Governance Professional Certification Board. This board was tasked with overseeing the records management industry's transition into the broader discipline of information governance, which includes privacy among other key areas such as data security, e-discovery, systems architecture, infrastructure, and traditional records management.

How do you stay updated with the latest developments and changes in privacy laws and regulations?

This is no small task. Trade organizations like ARMA, the ABA, and the IAPP are great resources to track the latest developments. As Chair of the ABA’s Consumer Privacy and Data Analytics Subcommittee, I also have the benefit of tapping into the talents and experience of various legal professionals who are keenly interested in the topic. We collaborate often on publications and speaking engagements that force us to stay on top of the latest developments.

How do you approach the balance between privacy protection and enabling innovation in the digital age?

This is where my experience as an entrepreneur is most helpful. We have to balance strict and sometimes draconian regulatory measures against the realities of keeping the lights on and turning a profit. As a legal counselor, my job is to point out to clients their options and the consequences associated with each option. Ultimately, for clients, the issue of privacy compliance comes down to a risk-based decision, such as whether, and how large, a target their market offering might put on their back.

What motivated you to launch your cloud-based software, Virgo, and how does it address the needs of privacy and information governance?

Virgo tracks global legal requirements covering not only records retention but also privacy regulations, format, location, disposition, and statute-of-limitations requirements. These regulations are then mapped to the organization's records in what we call "big buckets," and each bucket is assigned a specified retention period informed by the mapped regulations that apply to it, in addition to best-practices considerations.

On the whole, Virgo manages the organization's records retention schedule, which is the first line of defense not only for e-discovery but also for justifying retention in the face of privacy deletion requests or the general privacy mandate to dispose of data once it is no longer needed.
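To make the "big bucket" idea more concrete, here is a minimal sketch, in Python, of how regulations mapped to a bucket might drive its retention period. Virgo's actual data model is not described here, so the bucket names, citations, and periods below are hypothetical and purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class Regulation:
    """A single retention requirement mapped to a record bucket."""
    citation: str            # illustrative citation only
    jurisdiction: str
    min_retention_years: int

# Hypothetical "big buckets" and the regulations mapped to each of them.
BUCKETS = {
    "HR - Personnel Files": [
        Regulation("29 CFR 1627.3", "US-Federal", 1),
        Regulation("Cal. Lab. Code 1198.5", "US-CA", 3),
    ],
    "Finance - Tax Records": [
        Regulation("26 USC 6501", "US-Federal", 3),
    ],
}

# Best-practices floor: a judgment call that may exceed the legal minimum.
BEST_PRACTICE_FLOOR_YEARS = {"Finance - Tax Records": 7}

def bucket_retention(bucket: str) -> int:
    """Retention = longest mapped legal minimum, or the best-practice floor if higher."""
    legal_min = max(r.min_retention_years for r in BUCKETS[bucket])
    return max(legal_min, BEST_PRACTICE_FLOOR_YEARS.get(bucket, 0))

for name in BUCKETS:
    print(f"{name}: retain {bucket_retention(name)} years")
```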

I co-founded Virgo when it became too unwieldy to manage the hundreds of thousands of regulations in this space while trying to map them to the records of each organization. Interestingly, we managed to stay competitive against global law firms like Baker & McKenzie by leveraging translation tools that were the precursors of modern AI tools. Our research was not only better but also available at a fraction of the price that huge law firms might charge a client.

Privacy and data protection became increasingly important as bigger organizations such as Boeing, Microsoft, and NASA subscribed to our software. Each came with strict data security compliance requirements, which forced us to adopt the strictest security standards and thereby made it easier to sell the software across the board. The first few were extremely painful, but it got much easier thereafter. Once you are compliant with the high-watermark requirements, it is much easier to navigate local or regional requirements.

The legal landscape is already starting to take shape, led by the European Union with its proposed AI Act. The AI Act lays out a good starting regulatory framework that foreshadows where other countries might go next in seeking to harness and put guardrails around the usage of AI. The reality is that AI providers will need to get used to navigating possibly conflicting regulatory mandates, which will lead to a risk-based approach similar to what I just described regarding privacy compliance.

First, let's distinguish between public AI tools and private ones. The public AI tools (such as ChatGPT, Google's Bard, Microsoft's Bing, or DALL-E) pose the biggest challenges when it comes to the integrity of the data, as the sample data could be drawn from unvetted data mined over the years from the public internet. This raises concerns not only about the validity of the results but also about copyright, trademark, and other legal liability issues. Public AI tools also present serious confidentiality challenges that organizations need to nip in the bud right away via policies and training, so that employees do not enter private or confidential information into public AI tools that essentially keep any data entered for good and may expose it to anyone in the world.

The challenges for private AI tools lie primarily in using clean and accurate training data sets, so as to avoid the old "garbage in, garbage out" dilemma.

In both instances, the tools need to be tested by humans to vet for biases that could lead to defamation or discrimination by the algorithm.
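As one concrete example of that human vetting, a reviewer might run a simple disparate-impact check on an AI screening tool's decisions before relying on them. The sketch below uses made-up data and group labels and applies the "four-fifths" selection-rate comparison commonly used in U.S. employment-discrimination analysis; it is an illustration, not a compliance test.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below 80% of the best-treated group's rate."""
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical screening decisions: (demographic group, passed AI screen?)
decisions = [("A", True)] * 48 + [("A", False)] * 52 + [("B", True)] * 30 + [("B", False)] * 70
rates = selection_rates(decisions)
print(rates)                     # {'A': 0.48, 'B': 0.3}
print(four_fifths_flags(rates))  # {'A': False, 'B': True}  -> group B warrants human review
```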

At present, there are not many regulatory frameworks other than the EU AI Act, which is still going through the approval process. New York City also has a law in place, but, as you can guess, much more is yet to come at the state and even the federal level in the U.S.

For the time being, I would pay close attention to the EU AI Act, which as I mentioned earlier seems to have a good starting framework to help at least set priorities for which AI usages are considered highly sensitive and therefore subject to tighter scrutiny.

Simply by looking at the EU AI Act, one can quickly discern the usages that will get the closest scrutiny. For instance, so-called "high-risk" AI system applications include critical infrastructure that could endanger a citizen's life or health, access to education or vocational training that could determine a person's educational or career path, robot-assisted surgery, employment recruitment, credit scoring, criminal evidence evaluation, immigration, asylum or border control determinations, and the application of the law to a given set of facts.

The AI Act also enumerates "limited risk" and "minimal risk" examples, in addition to "unacceptable risk" systems that would be banned outright, such as those that manipulate human behavior or exploit people's vulnerabilities. The devil will be in the details when it comes to enforcement, but, as I have mentioned, this is the start of a framework for regulatory enforcement and therefore guidance.

In terms of data governance, what best practices do you recommend for organizations that are leveraging AI technologies to ensure compliance with data privacy laws and regulations?

Here is a checklist of what I have been recommending to organizations:

  • Track international laws aimed at putting controls around AI usage
  • Stay vigilant for errors in the data and for usage of protected IP, especially images and audiovisual content
  • Include anti-bias obligations in the language of any generative AI contract
  • Contractually obligate all vendors not to use AI without human fact-checking
  • Obtain commitments and contracts with assurances about the legitimacy of training data use
  • Check that AI output does not show biases that could trigger discrimination laws
  • Use Explainable AI (XAI) to understand the model's assumptions (see the sketch after this list)
  • Pay particular attention to AI usage for employment decisions, credit evaluation, medical resources, and incarceration
  • Monitor generative AI models at both the training stage and the output stage
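On the Explainable AI item above, one lightweight way to see which inputs a model actually relies on is permutation importance. The sketch below uses scikit-learn on synthetic data; the model and features are stand-ins for illustration, not a recommended production setup.

```python
# Minimal Explainable-AI sketch: permutation importance on a toy model (scikit-learn).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data standing in for, e.g., a hiring or credit-scoring dataset.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# How much does shuffling each feature hurt accuracy? Large drops = heavy reliance.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```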

In terms of internal usage, I also recommend:

  • Assess the current usage of AI within your organization
  • Determine the highest and best use of AI in your organization
  • Train and regularly remind staff and vendors not to use sensitive data with external/public AI tools
  • Create guardrails through policies around the use of AI, and revise existing policies that could intersect with AI
  • Review vendor agreements that could involve AI use
  • Assess changes to products or services or business models that could benefit from AI usage

The best advice I would give here is to make sure there is a human review of all input and output, especially if the output will be used for critical functions of the organization or to publish to the outside world. Be especially careful if the algorithms will be used to make hiring, promotion, pay increase, or termination decisions. Likewise, credit scoring, appraisals, or other potential usages that could impact a person financially should be vetted with extra care.

How can businesses ensure that they are ethically and legally using AI-powered tools like ChatGPT while respecting user privacy and data protection laws?

In this space, as with previous hot technologies like email, social media, instant messaging, and texting, organizations need to put guardrails in place via policies, procedures, and training on employee usage. In terms of developing AI applications, policies, procedures, and guidelines also need to be implemented to ensure data hygiene at the input level and vetting of results at the output level.

ChatGPT is an example of a public tool, which means that any data fed into it will go public. Therefore, the biggest legal or ethical concern with the usage of a public AI tool is the potential loss of confidentiality or trade secrets. If a public AI tool is incorporated into the business, be vigilant about protecting trade secrets and confidential information.
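One technical guardrail that can back up those policies is a pre-submission filter that screens prompts before they ever reach a public AI tool. The sketch below is a hypothetical, regex-based illustration; a real deployment would rely on a dedicated data-loss-prevention product and far broader pattern coverage.

```python
import re

# Hypothetical pre-submission filter: block prompts containing obvious sensitive
# content before they are sent to a public AI tool. Illustrative patterns only.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "confidential_marking": re.compile(r"\b(confidential|trade secret|attorney-client)\b", re.I),
}

def check_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (is_allowed, names of the patterns that matched)."""
    hits = [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]
    return (not hits, hits)

ok, hits = check_prompt("Summarize this CONFIDENTIAL merger memo for client jane@acme.com")
print(ok, hits)  # False ['email', 'confidential_marking'] -> block and route to human review
```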

Looking Ahead

I anticipate a flood of regulations in this space, so for the time being stay on top of every single proposed regulation or issued guideline. This will reveal the hot spots and whether your business model is likely to have a target on its back based on what is being regulated. Along those lines, pay close attention to what federal agencies like the Federal Trade Commission are saying or publishing on the topic.

John Isaza, Esq., FAI, is one of the country's foremost experts on privacy, information management, electronic discovery, and legal holds. He has developed privacy, information governance, and records retention programs for some of the most highly regulated Fortune 100 companies, including related regulatory research opinions. His clients range from the Fortune 100 to startups, the latter of which he has also served as outside General Counsel.