Digital & Data

Artificial Intelligence (AI) & Large Language Model (LLM) Governance & Security Testing

J.S. Held Examines Multifaceted, Global Business Impacts of Tariff and Trade Policies

Read More close Created with Sketch.

As artificial intelligence (AI) and large learning models (LLMs) become more prevalent in global business, we help organizations address accompanying governance and security risks.

We rank evolving risks and develop plans to address, identify emerging AI opportunities, implement sound protocols and governance programs, and provide ongoing assessments, training, and support.

In the absence of formal AI compliance regulations, companies must define and adhere to best practices specific to their respective industries. Key considerations such as data integrity, bias prevention, and cybersecurity are central to these efforts.

To address these concerns, our Artificial Intelligence and Machine Learning (AI/ML) governance experts leverage decades of experience providing industry-leading advisory in security penetration testing, data governance, and privacy. As the landscape of AI risk continues to evolve, we deliver specialized expertise related to AI frameworks, including litigation support, pressure testing existing models, and development and implementation of AI/ML policies and procedures.

Our Services
  • AI/LLM Compliance and Security Testing & Assessments
  • AI/LLM Training, Development & Optimization
  • AI/ML Policy & Procedure Development
  • Data Attribution Analysis
  • Data Security Risk Identification & Mitigation
  • Ethical Risk Identification
  • Expert Witness & Litigation Support
  • Governance & Compliance Consulting
  • Regulatory Compliance Advisory
Comprehensive Solutions to Measure & Remediate Risk

Today’s organizations typically have compliance programs in place to address risk areas such as growing data volumes, cyber security, data privacy, and regulatory requirements. However, recent technological breakthroughs in artificial intelligence and large language models present a new set of challenges in data governance. Adopting LLMs involves training on large datasets and repositories of data, among numerous other data governance and compliance considerations that are often not factored into an organization’s existing programs.

As the landscape of risk continuously evolves, our experts stay ahead of the latest emerging compliance and regulatory requirements facing organizations across industries. AI/LLMs present multiple layers of technical challenges. Specific to governance & security related services, our approach combines innovative and traditional methods to address various risk factors facing AI/LLM frameworks.

AI / ML Governance, Compliance & Expert Advisory

The increased prevalence of AI within organizations has sparked a significant rise in litigation and emerging compliance requirements across industries. Our approach combines innovative and traditional methods to address various issues related to AI and Language Model (LM) integration, training, and implementation in the market.

Our expertise includes:

Advisory Services
We provide expert consultation to help clients optimize their existing data assets, AI integrations, and LLM agents. We specialize in data attribution analysis and tracking data sources to ensure compliance with intellectual property (IP) and regulatory standards. We leverage our depth of experience to design, implement, monetize, and manage robust AI/LLM frameworks while maintaining transparency and mitigating legal or ethical risks.

Defensive Assignments
We help organizations identify and mitigate risks to data security and IP. This includes detecting unauthorized access attempts by web crawlers, malicious bots, or unauthorized internal activities and assessing the effectiveness of current data governance measures to enhance overall posture and protect sensitive data from potential risk.

Offensive Assignments
We proactively evaluate internal AI and LLM frameworks to uncover potential vulnerabilities and assess compliance with data privacy and intellectual property regulations. This includes identifying improper data ingestion, misuse of copyrighted material, and any other security or ethical risks. These assessments ensure client systems are secure and compliant with industry standards.

We leverage multidisciplinary expertise across the firm in data security and governance, litigation support, and regulatory compliance to effectively gather relevant information and provide actionable insight to clients. We utilize industry-leading tools and technology to drive efficiency as we analyze, advise, and report on findings.

Further, experts from Ocean Tomo, a part of J.S. Held, utilize decades of intellectual property experience to complement the work performed by the AI/ML Governance team by delivering guidance regarding IP valuation and damages issues.

Security Assessment Measures

For scenarios involving security assessments, development of an LLM system requires meticulous planning and execution divided into seven critical phases that address specific aspects of the system’s security posture.

From the initial enumeration of the system’s components and capabilities to the final reporting and remediation of vulnerabilities, these phases collectively form a comprehensive framework to fortify the LLM against various risk factors. We meticulously navigate these phases to ensure a robust defense mechanism that identifies and mitigates current vulnerabilities as well as adapts to emerging threats to safeguard the integrity and reliability of the LLM system:

  • Phase 1 – Enumeration
    Catalog the components and capabilities of the LLM system through the identification of interfaces, data flows, and integration.
     
  • Phase 2Identification of Vulnerability Points
    Map out potential vulnerabilities specific to LLMs (i.e., data privacy issues, model exploitation points, and system misconfigurations) by using the Open Worldwide Application Security Project (OWASP) Top 10 for LLMs as a guide for vulnerabilities.
     
  • Phase 3 – Enhanced Testing Configuration
    Set up specialized testing environments and scenarios, including configuring tools and selecting techniques for enhanced testing (i.e., adversarial simulations).
     
  • Phase 4 – Automated & Manual Testing
    Conduct a series of automated vulnerability scans tailored to AI/LLM systems, coupling this with manual tests for deeper expert analysis.
     
  • Phase 5 – J.S. Held Proprietary Testing
    Execute proprietary tests that uncover deep-seated and sophisticated vulnerabilities.
     
  • Phase 6 – Training Loop for Continuous Improvement
    Employ a continuous feedback mechanism to refine the LLM system and enhance its security and risk postures.
     
  • Phase 7 – Reporting & Remediation
    Document all findings from the various testing phases, categorize issues based on severity, provide actionable remediation strategies, and deliver expert impact analysis.
Scalable Testing Framework

Our experts utilize the Open Worldwide Application Security Project (OWASP) Top 10 Web Application Security Risks for LLMs as the basis of our testing framework, augmented based on our collective experience across the firm testing LLMs in real-world environments.

Below are the 13 testing categories (scored on a scale spanning from informational to low/medium/high to critical) from which we build our test cases during an engagement:

  • T1 – Prompt Injection Testing
  • T2 – Insecure Output Handling
  • T3 – Training Data Poisoning
  • T4 – Model Denial of Service
  • T5 – Supply Chain Vulnerabilities
  • T6 – Sensitive Information Disclosure
  • T7 – Insecure Plugin Design
  • T8 – Excessive Agency
  • T9 – Overreliance on LLMs
  • T10 – Model Theft
  • C1 – Data Access & Segregation Testing (DAST)
  • C2 – Adversarial Resilience & Recovery (ARR)
  • C3 – Dynamic Input Stress Testing (DIST)
Streamlining Complex Investigations with Generative AI

Despite the increased volume and growing complexity of investigations, legal and accounting teams are being asked to do more with less. Increasingly, parties involved in such matters are turning to Generative Artificial Intelligence (GenAI) as a solution due to its ability to analyze vast amounts of data in a more efficient manner, leading to reductions in time and money spent. 

In this article, J.S. Held investigative and forensic technology experts Natalie Lewis, CPA/CFF, CFE, and Mike Gaudet discuss how GenAI can streamline complex investigations.

Topics covered include: 

  • Practical uses of GenAI to save time and money from experience on actual investigations
  • How GenAI insights enhance interview preparation
  • Improving expert reports using GenAI
  • Integrating GenAI with other techniques as a complementary tool in fraud investigations

> To read the article, click here.

Related Insights

Our insights cover a variety of topics impacting businesses, society, the economy, and the environment. Check out our latest white papers, research reports, educational seminars, industry speaking engagements, and perspective articles.

 
INDUSTRY INSIGHTS
Keep up with the latest research and announcements from our team.
Our Experts