Autopilot on Copilot: Governance Considerations and Pitfalls When Implementing Generative Artificial Intelligence

Authors: Tori Anderson and Tracy Bordignon
Date Published: 1 November 2024
Read Time: 6 minutes

The adoption of generative artificial intelligence within enterprises has created an entirely new branch of governance and compliance risk. It has also highlighted the need to revisit and bolster traditional information governance frameworks. Many large organizations are still in the process of establishing robust information governance frameworks for their current environments. Now, they must also address questions about their readiness to manage the impact of Copilot1 and similar generative AI tools. These questions include whether they can maintain appropriate access, use, and management controls across their IT infrastructure. Organizations should also assess whether new artifacts are being created that could introduce unforeseen regulatory risk, including new forms of information that may require disclosure to regulatory authorities under existing obligations.


While asking the necessary questions may expose vulnerabilities at the outset, it is essential to question and test often. Governance-related questions must be part of the foundation of generative AI test plans, proof-of-concept evaluations, and pilot initiatives. Ultimately, the answers to these questions offer significant insights: they identify business use cases and where the technology applies, build an understanding of risk so that it can be properly mitigated and managed, and inform defensibility documentation and updates to information governance and AI governance processes. By leveraging these insights, organizations can integrate generative AI tools safely and effectively within their data governance processes.

Monitoring

The use of AI tools should be monitored and managed across an organization to catch misuse or violations of regulatory obligations. These initiatives are more successful with cooperation from IT, compliance, legal, and organizational users. Such stakeholders are tasked with ensuring compliance and preventing employees from entering inappropriate prompts or accessing sensitive or restricted company information when engaging with AI tools.

Policies and workflows monitor for inappropriate or non-compliant activity, establish permissions for who can access certain categories of information, and underpin an organization’s data retention and disposal rules. These controls are critical to effectively managing a range of legal, regulatory, and organizational risk. Unfortunately, even when they are in place, they are not always readily applied to new AI deployments, including Copilot within Microsoft 365 environments. In some cases, policies will need to be created or reconfigured to apply to generative AI interactions and activity.

Access Controls

Access control within Microsoft 365 is not a new concern, and information governance professionals have advocated for well-managed access permissions within SharePoint and other aspects of the Microsoft 365 environment for many years.2 However, it is especially pertinent within Copilot deployments and, left unchecked, can create significant risk on numerous fronts.

With Copilot, anything a user has permission to access may surface as part of a response to a query or prompt. Without Copilot, over-permissioned users with access to documents they should not have would typically only uncover those documents by actively searching for them. Therefore, excess permissions and failure to limit access to certain materials can expose information to far more employees than intended. To manage this, organizations must be diligent in defining controls and must thoroughly understand the range of materials that Copilot users can access at different permission levels.

Notably, when Copilot is turned on for a user, every application within Microsoft 365 that has a Copilot element will have AI activated. Administrators and users cannot select which applications may use Copilot and which may not; a user cannot, for example, turn off Copilot for a specific product. There are, however, options to limit certain functionality and features through administrative settings, such as a Teams administrator updating meeting transcription settings so that Copilot cannot be used during Teams meetings.

Therefore, every application in the tenant must be checked for access controls and evaluated for different types of information risk. For example, in Copilot for Microsoft 365 chat, Copilot works across applications to respond to user queries about upcoming meetings, related emails, and items it thinks may require follow-up. Users can point Copilot to Word documents or PowerPoint files to answer questions or generate content, which may prompt the system to scan accessible files in SharePoint, OneDrive, and Outlook.
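
To make that exercise concrete, the sketch below illustrates one way a governance or IT team might inventory what a single pilot user can reach. It is a minimal, illustrative example rather than an official audit tool: it assumes a delegated Microsoft Graph access token for the test user (obtained separately, e.g., via MSAL, with Sites.Read.All and Files.Read.All permissions) and uses standard Graph endpoints to list the SharePoint sites and shared files visible to that user, which serves as a rough proxy for the content Copilot could draw on when responding to that user’s prompts.

```python
"""
Illustrative sketch only: given a delegated Microsoft Graph access token for a
pilot user, list the SharePoint sites and files shared with that user -- a rough
proxy for the content Copilot could draw on for that user. Assumes the token was
obtained separately (e.g., via MSAL) with delegated Sites.Read.All and
Files.Read.All permissions.
"""
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<delegated-token-for-pilot-user>"  # placeholder; acquire via MSAL or similar
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}


def get_all(url: str) -> list[dict]:
    """Follow Graph @odata.nextLink paging and return every result object."""
    items = []
    while url:
        resp = requests.get(url, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        items.extend(payload.get("value", []))
        url = payload.get("@odata.nextLink")
    return items


# SharePoint sites visible to the signed-in (pilot) user.
sites = get_all(f"{GRAPH}/sites?search=*")
print(f"Sites reachable by this user: {len(sites)}")
for site in sites:
    print(f"  {site.get('displayName')} -> {site.get('webUrl')}")

# OneDrive/SharePoint items that others have shared with the signed-in user.
shared = get_all(f"{GRAPH}/me/drive/sharedWithMe")
print(f"Items shared with this user: {len(shared)}")
for item in shared:
    remote = item.get("remoteItem", {})
    print(f"  {item.get('name') or remote.get('name')} -> {remote.get('webUrl')}")
```

Running a query of this kind for a handful of pilot users, before and after permissions are cleaned up, gives stakeholders tangible evidence of whether access controls are as tight as assumed.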

Key Considerations

Given the constant cycle of change within the Microsoft 365 environment, frequent auditing across these applications and controls is essential to maintain adherence to governance rules over time. In addition to regular monitoring of permissions, organizations should take and document the following steps to strengthen governance when implementing new AI:

Proof-of-concept evaluation—Before going live with any generative AI tool, legal and IT teams should work closely together to conduct a contained pilot with a small group of test users. This will help reveal risk and governance gaps that may be unique to the organization before the system is rolled out en masse.

AI governance readiness assessment—This step involves reviewing existing access control management across all the systems within the environment that Copilot (or another generative AI tool) may have access to. The good news is that control testing to date suggests that Copilot remains aligned with established access controls and surfaces only documents and data that an individual has permission to view. Rigorous assessment of permissions can therefore pay off in limiting the risk of access control failings for Copilot users.

Establish an AI committee—A team of active stakeholders is essential to set the policy, move it forward in an informed way, and keep it current when features and functionality change within the Microsoft 365 environment. AI committees cannot be vanity committees. They must comprise people who understand legal, regulatory, technical, and organizational needs, and how those may be affected by AI use.

Labeling policies—Defining a labeling system for documents and information categories that must be treated with varying levels of confidentiality or protection is an effective way to support governance in an environment using Copilot. This will help ensure that sensitive materials are excluded from the AI system so they are not inadvertently shared outside the groups that have clearance to view them. (A simple sketch of such a taxonomy follows this list.)

Continuous evaluation—Cloud systems and AI technology are advancing at light speed. Functionality and controls change constantly, so organizations need an AI governance program that is built for adaptability. Part of maintaining flexibility is understanding that even after the initial assessment of strengths and weaknesses in access controls and other aspects of governance, and even after the proof of concept is completed, the program cannot go into autopilot. System owners must be vigilant and continually retest to confirm whether the controls are holding up over time and whether anything within the system creates new or unexpected risk.

Ongoing training—Developing and implementing an actively engaging training plan is the linchpin of successful deployments and governance. All employees must understand their organization’s existing information governance policies as well as its AI governance policies, practices, and procedures. Similarly, everyone must understand and acknowledge the appropriate use of any new AI tools. Training will also help convey unique organizational and departmental use cases discovered during piloting and continuous evaluation, ensuring that employees responsibly maximize Copilot’s value.
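
As referenced in the labeling policies item above, the following is a purely illustrative sketch of how a label taxonomy and its handling rules might be expressed. The label names, groups, and rules are hypothetical; in a Microsoft 365 environment this logic would typically be configured through sensitivity labels in Microsoft Purview rather than custom code, but spelling it out in this form can help stakeholders agree on which categories of information should be kept out of Copilot’s reach.

```python
"""
Purely illustrative: one way to express a label taxonomy and the handling rules
attached to each label. The label names, groups, and rules are hypothetical; in
Microsoft 365 this would normally be configured as Purview sensitivity labels
rather than custom code.
"""
from dataclasses import dataclass


@dataclass(frozen=True)
class LabelPolicy:
    name: str                        # label shown to users
    allow_ai_processing: bool        # may Copilot use content carrying this label?
    allowed_groups: tuple[str, ...]  # groups cleared to view the content


LABEL_TAXONOMY = {
    "public": LabelPolicy("Public", allow_ai_processing=True, allowed_groups=("all-staff",)),
    "internal": LabelPolicy("Internal", allow_ai_processing=True, allowed_groups=("all-staff",)),
    "confidential": LabelPolicy("Confidential", allow_ai_processing=False,
                                allowed_groups=("legal", "executive")),
    "restricted": LabelPolicy("Restricted", allow_ai_processing=False,
                              allowed_groups=("legal",)),
}


def flag_for_review(doc_label: str, user_groups: set[str]) -> list[str]:
    """Return governance flags for a given document label and user group membership."""
    policy = LABEL_TAXONOMY[doc_label]
    flags = []
    if not policy.allow_ai_processing:
        flags.append("exclude from Copilot indexing/processing")
    if not user_groups & set(policy.allowed_groups):
        flags.append("user lacks clearance for this label")
    return flags


print(flag_for_review("confidential", {"all-staff"}))
# ['exclude from Copilot indexing/processing', 'user lacks clearance for this label']
```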

Conclusion

When Copilot first became available, many organizations felt the excitement and pressure to be early adopters. There will continue to be a drive to adopt quickly as other tools and new AI features enter the market. Innovation is important, but it cannot come at the expense of effective risk management. Organizations, especially those in highly regulated industries, must take time to test their use cases and allow IT to align with other stakeholders. This will only intensify as a compliance concern as regulatory authorities scrutinize how organizations are using AI, and as AI use proliferates across enterprise systems that house sensitive and confidential information. Organizations should aim for a middle-ground approach wherein they embrace AI while also establishing controls and verifying that the tools are working as they should.

Endnotes

1 Microsoft Copilot
2 One Identity, “Identities and Security in 2022”

Tori Anderson

Is a director within FTI Technology, with nearly 10 years of experience working in ediscovery, information management, and governance. Anderson holds a law degree from the University of Miami (Florida, USA) and is licensed to practice law in Florida and Washington, DC, USA.

Tracy Bordignon

Is a senior director within FTI Technology, with more than a decade of experience in information governance and privacy and helping organizations manage legal risk. Bordignon holds a law degree from Southwestern Law School (California, USA) and is licensed to practice law in Florida, USA.
