
Questionnaire Branch Map

Dev-only view showing all 68 questions across 10 sections. 32 conditional questions are highlighted with their trigger conditions.

Legend: Conditional (answer-based) questions are highlighted: blue = industry filter, amber = answer dependency. Unmarked questions are always shown.
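Each trigger condition in this map uses a small grammar: `field != A | B` (shown unless the answer matches a listed value), `field = value`, and `field includes option-id` for checkbox answers. A minimal sketch of how such strings could be evaluated; the function name and semantics are assumptions for illustration, not the app's actual implementation:

```python
def is_visible(condition: str, answers: dict) -> bool:
    """Evaluate a trigger string like 'firmSize != Solo Law Firm | Small Law Firm'
    against collected answers. Unconditional questions pass an empty condition."""
    if not condition:
        return True
    for op in (" != ", " = ", " includes "):
        if op not in condition:
            continue
        field, _, rhs = condition.partition(op)
        values = [v.strip() for v in rhs.split("|")]
        answer = answers.get(field.strip())
        if op == " != ":          # shown unless the answer matches a listed value
            return answer not in values
        if op == " = ":           # shown when the answer matches a listed value
            return answer in values
        # "includes": checkbox answers are lists of selected option ids
        return any(v in (answer or []) for v in values)
    raise ValueError(f"unrecognized condition: {condition!r}")
```

Under this reading, `firmSize != Solo Law Firm | Small Law Firm | Other Law Firm` shows a question only to non-law-firm respondents, and `policy-objectives includes obj-other` reveals the follow-up only when "Other" was checked.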
Define the firm's main objectives for acceptable generative AI use.
Conditional: firmSize != Solo Law Firm | Small Law Firm | Other Law Firm
1. has-legal-department (yes-no-na)

Does your organization have a legal department or retain outside counsel?

This helps us tailor legal-specific questions in later sections to your organization.

2. policy-objectives (checkbox-list, required)

What are the main objectives of the firm in defining acceptable generative AI use? For example (not intended to be an exhaustive list):

obj-research: Conducting accurate research?
obj-timesaving: Finding time-saving workflows?
obj-hallucination: Avoiding hallucination risks?
obj-privacy: Protecting data and privacy?
obj-ip: Protecting intellectual property?
obj-other: Other (please specify)
Conditional: policy-objectives includes obj-other
3. policy-objectives-other (text-area)

Please specify the other objective(s):

4. policy-approach (single-select, required)

To achieve these objectives, does the firm intend to approach generative AI with a general ban or to define allowable generative AI use for work purposes?

Examples of allowable use: First-draft report or brief generation, Boilerplate generation (e.g., trusts, contracts), Email drafting, Image generation (e.g., stock photos for presentations), Audio transcription from online meetings, Legal research on case law

approach-ban: General ban on generative AI use for work purposes (prohibit all generative AI tools except where explicitly permitted)
approach-define: Define allowable generative AI use for work purposes (identify and approve specific use cases and tools)
not-sure-approach-define: Not sure; defaulting to "Define allowable use" for now (you can revisit this later; defaulting to the more inclusive option so related questions are not skipped)
5. policy-target-date (date-picker, required)

What is the target creation or effective date for this policy?

Note: This date will appear on the questionnaire results that will help you draft your Policy. You can update it later.

Every policy needs an owner. This section identifies who maintains and updates this AI governance policy, how often it's reviewed, and how exceptions are handled.
Conditional: firmSize != Solo Law Firm | Small Business (Owner Only) | Small Business
1. policy-owner (text-input, required)

What individual or team is responsible for maintaining this policy?

2. policy-review-frequency (single-select, required)

How often will this policy be reviewed and updated?

Note: This policy was created on {{policy-target-date}} and will be updated at least {{self-label}}. We recommend beginning the review and refresh process well in advance of the target review date.

review-quarterly: Quarterly
review-biannually: Twice per year
review-annually: Annually
review-as-needed: As needed based on technology changes
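Placeholders like {{policy-target-date}} and {{self-label}} in the note above are presumably filled in from answers when results are rendered ({{self-label}} likely being the selected option's own label). A minimal, hypothetical interpolation sketch; the app's real template engine is unknown:

```python
import re

def render(template: str, values: dict) -> str:
    # Replace each {{name}} with its value; unknown names are left intact
    # so missing answers remain visible in drafts.
    return re.sub(
        r"\{\{\s*([\w-]+)\s*\}\}",
        lambda m: str(values.get(m.group(1), m.group(0))),
        template,
    )
```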
3. policy-exception-process (text-area, required)

What will the process be for reviewing and approving exceptions to the policy (for example, to review potential new vendors or to temporarily authorize specific employees to use generative AI tools for a specific task)?

Before diving into specifics, establish your organization's overall posture on AI: will you embrace it with guardrails, use it cautiously, or restrict it? This foundational decision shapes every section that follows.
1. ai-philosophy (single-select, required)

Does this firm want employees to emphasize using generative AI workflows as often as possible? Or does this firm follow the principle that tasks should only be performed with AI if there is already a net time savings compared to non-AI processes after both performing the task and reviewing the output?

philosophy-native: AI-Native: Emphasize employees learning to use generative AI for workflows as often as possible, even if it is slower in the short term, to become more familiar with AI tools. Some employees may share "vibecoded" solutions with others.
philosophy-focused: AI-Focused: Have certain employees experiment with generative AI beyond regular work duties and share findings and best practices with the broader team
philosophy-assisted: AI-Assisted: Use AI only when there is already a net time savings after task completion and review
philosophy-cautious: AI-Cautious: Use AI for some limited administrative or repetitive tasks
philosophy-against: Firmly Against: Minimize or prohibit AI use except where absolutely necessary
2. third-party-transcription-consent (single-select, required)

Would this firm consent to or deny consent to third-party use of GenAI transcription tools, e.g., on a Zoom call?

consent-allow: Consent to third-party GenAI transcription
consent-deny: Deny consent to third-party GenAI transcription
consent-case-by-case: Case-by-case basis depending on meeting content
Conditional: is-legal-context = true
3. other-jurisdictions (yes-no-na)

Are any attorneys at this firm admitted to practice law in jurisdictions other than your primary state (e.g., member of the Bar, pro hac vice)?

Different jurisdictions may have their own rules, standing orders, or judicial policies regarding the use of AI in legal practice. Identifying all applicable jurisdictions helps ensure compliance.

Conditional: other-jurisdictions = yes
4. jurisdiction-local-rules-checked (yes-no-na)

For each jurisdiction where attorneys are admitted, have you checked for local rules, standing orders, or judicial policies regarding the use of AI in legal filings or practice?

Note: Several jurisdictions have issued standing orders or local rules requiring disclosure of AI use in legal filings. These requirements vary by jurisdiction and may change frequently. We recommend reviewing this with the consultant for each jurisdiction.

Conditional: is-legal-context = true
5. illinois-ai-policy-acknowledged (yes-no-na)

Illinois-based law firms: have you discussed the Illinois Supreme Court Policy on Artificial Intelligence (January 2025) with the consultant yet?

Note: The Illinois Supreme Court issued a policy on AI use in January 2025 that may affect your governance requirements. We recommend discussing this during your consultation.

Conditional: is-legal-context = true
6. csr-vs-ai-deposition (single-select)

Would this firm, as a matter of policy, object to another firm's use of AI transcription rather than certified shorthand reporters (CSRs) to produce deposition transcripts?

Note: Some AI deposition tools are marketed to attorneys as enabling real-time cross-examination of witnesses against evidence and other testimony while the deposition transcript is being generated. Compare this with CSR professional ethics on neutrality.

csr-object: Yes, object to AI transcription for depositions
csr-no-object: No objection to AI transcription for depositions
csr-case-by-case: Case-by-case basis
Conditional: is-legal-context = true
7. ai-disclosure-to-clients (single-select, required)

Will this firm disclose its use of AI to clients as a general matter?

Note: Some jurisdictions and bar associations are moving toward requiring disclosure of AI use in legal work. Proactive disclosure can build client trust and reduce risk.

disclose-yes: Yes, disclose AI use to clients by default
disclose-case-by-case: Case-by-case based on engagement type
disclose-no: No, do not disclose unless required
Conditional: is-legal-context = true
8. client-chatbot-advice-prospective (text-area)

How will the firm advise prospective clients about the use of AI chatbots in preparation for conversations with their attorney or other legal matters?

Note: Clients increasingly use ChatGPT and similar tools to research legal questions before consulting an attorney. Their AI-generated understanding may be inaccurate and could affect the attorney-client relationship.

Conditional: is-legal-context = true
9. client-prior-chatbot-use (text-area)

How will the firm advise prospective clients about prior use of AI chatbots related to the legal matter before engaging your firm?

Note: Clients may have shared privileged or sensitive information with AI tools before retaining counsel. This could affect privilege, create discoverable records, or introduce inaccurate assumptions about their legal position.

10. tools-presumption (single-select, required)

Is the use of generative AI tools or features presumptively allowed for work purposes, or only when explicitly allowed?

For example: your firm uses a calendar program that later adds an AI "secretary agent" that can take meeting minutes and schedule follow-up calls; may employees use these features without explicit permission?

presumption-allowed: Presumptively allowed unless specifically prohibited
presumption-prohibited: Only allowed when explicitly approved
Conditional: tools-presumption = presumption-prohibited
11. dedicated-tools-prohibited (single-select, required)

Are all dedicated generative AI tools (such as ChatGPT) presumptively prohibited if they are not specifically approved?

dedicated-prohibited: Yes, all dedicated AI tools are presumptively prohibited
dedicated-case-by-case: Evaluated on a case-by-case basis
dedicated-allowed: No, dedicated AI tools are generally permitted
Conditional: tools-presumption = presumption-prohibited
12. websites-blocked (single-select, required)

Will the websites of unapproved AI tools be blocked (e.g., "chatgpt[.]com")?

blocked-yes: Yes, block unapproved AI tool websites
blocked-partial: Block some high-risk sites only
blocked-no: No, rely on policy compliance
not-sure-blocked-yes: Not sure; defaulting to "Yes, block" for now (you can revisit this later)
Conditional: websites-blocked = blocked-yes | blocked-partial | not-sure-blocked-yes
13. websites-blocked-responsible (text-area)

Who will be responsible for implementing and maintaining website blocks for unapproved AI tools (e.g., DNS filtering, firewall rules, browser policies)?
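One concrete route for the browser-policy mechanism this question mentions is Chrome's managed URLBlocklist policy. A minimal sketch; the domain list is illustrative only, and other browsers and DNS filters offer equivalent controls:

```json
{
  "URLBlocklist": [
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
    "perplexity.ai"
  ]
}
```

On Linux, a JSON file like this under /etc/opt/chrome/policies/managed/ applies to managed Chrome installs; Windows and macOS use Group Policy templates and configuration profiles, respectively.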

Your employees and contractors are the front line of AI policy compliance. This section covers training requirements, what to do when unapproved tools are discovered, and consequences for policy violations.
Conditional: policy-approach = approach-define | not-sure-approach-define
1. training-level-allowed (single-select)

(If allowed) What level of training should employees receive before being authorized to use generative AI for work purposes?

This firm allows trained employee use of generative artificial intelligence (GenAI) tools for work purposes on work devices. Approved GenAI tools and GenAI features should be used on work devices. No work-related activity should be conducted on personal or other non-work devices using unapproved GenAI tools.

training-basic: Basic awareness training (1-2 hours)
training-intermediate: Intermediate training with hands-on exercises (half day)
training-comprehensive: Comprehensive certification program (full day or more)
training-role-specific: Role-specific training tailored to job function
Conditional: policy-approach = approach-ban
2. training-level-banned (single-select)

(If banned) What level of training should employees receive to avoid generative AI-related risks if generative AI tools are generally not permitted for work purposes?

This firm does not allow trained employee use of generative artificial intelligence (GenAI) tools for work purposes on work devices. GenAI tools and GenAI features should not be used on work devices. No work-related activity should be conducted on personal or other non-work devices using unapproved GenAI tools.

training-awareness: Basic awareness of why AI is prohibited
training-risks: Training on AI risks and how to identify AI-generated content
training-comprehensive-ban: Comprehensive training on risks, identification, and reporting
3. prohibited-tool-action (text-area, required)

When an employee becomes aware of the availability of a prohibited or unapproved GenAI tool on a work device, what actions must the employee take?

4. new-feature-action (text-area, required)

When an employee becomes aware of the availability of a prohibited or unapproved new GenAI feature in an existing software tool, what actions must the employee take?

5. consequences-work-device (text-area, required)

What are the consequences for an employee using unapproved generative AI tools on a work device for work purposes?

6. consequences-personal-device (text-area, required)

What are the consequences for an employee using unapproved generative AI tools on a personal device for work purposes?

Conditional: industry includes Technology | Professional Services | Federal Contractor
7. contractor-programmers-policy (text-area)

To what extent will the firm apply these policies to its contractors using large language models for computer programming tasks? Will the firm expect the same policies, or focus on top risk categories, such as the use of "agents" in high-risk "dangerous" or "YOLO" modes?

Conditional: is-legal-context = true
8. csr-hiring-policy (single-select)

Court Reporters—Internal Use. Does this firm hire only certified shorthand reporters (CSRs) to produce transcripts for legal proceedings?

csr-only: Yes, CSRs only
csr-preferred: CSRs preferred but not required
csr-ai-allowed: AI transcription services are permitted
not-sure-csr-preferred: Not sure; defaulting to "CSRs preferred" for now (you can revisit this later)
Conditional: csr-hiring-policy = csr-only | csr-preferred | not-sure-csr-preferred
9. csr-due-diligence (yes-no-na)

Does this firm's due diligence when selecting CSRs include demonstrated knowledge of the risks of generative AI misuse?

Conditional: is-legal-context = true
10. csr-ai-use-policy (single-select)

To what extent will the firm apply these policies to its CSR contractors using generative AI models to aid in transcription?

Will the firm expect the CSR to refrain from all generative AI use (e.g., an AI transcript inserting an entire sentence without CSR review based on analysis of the raw audio file) or would low-risk uses be permissible, such as accepting a suggested spelling of medical terminology?

csr-no-ai: Refrain from all generative AI use
csr-low-risk: Low-risk uses permissible (e.g., spelling suggestions)
csr-case-by-case: Case-by-case approval required
Audit your organization's current and potential AI tool usage. This inventory helps identify both approved tools and shadow AI that may need to be addressed.
1. undisableable-features (single-select, required)

If unapproved GenAI features in existing software cannot be removed or disabled, must employees refrain from using these features?

e.g., "employees shall not rewrite text in MacOS Pages using the Apple Intelligence feature"

refrain-yes: Yes, employees must refrain from using these features
refrain-case-by-case: Case-by-case evaluation needed
refrain-no: No, incidental use is acceptable
2. tools-inventory-intro (info-display)

The following inventory will help identify which AI tools and features your employees may be using. This includes both dedicated AI tools and AI features embedded in common software.

For each tool, indicate: (1) if employees currently use it for work, (2) if it is available on work devices, and (3) if it is available on personal devices used for work.

Note: This inventory is an iterative process and does not need to be completed entirely in one session. Your progress is saved automatically, and you can return to update it at any time during the consultation.

Identify external sources of AI-generated content that may affect your organization.
1. external-transcription-awareness (checkbox-list, required)

Which external sources of LLM transcription might your organization encounter?

Note: See Recommendation 2 regarding ground truth documents.

ext-bodycam: Law enforcement body camera-to-police report tools (e.g., Axon Draft One)
ext-deposition: AI deposition tools
ext-meeting: Meeting transcription tools from external parties
2. external-content-awareness (checkbox-list, required)

Which external sources of AI-generated content should employees be trained to be familiar with?

Note: Identification of AI-generated content is not always possible, so familiarity with where it may appear is the goal.

ext-ai-overview: Screenshots of Google AI Overview and other AI-generated summaries without attribution
ext-ai-images: AI-generated images, e.g., an AI-modified image of a suspect in a law enforcement announcement
ext-fake-papers: Fake academic papers and preprints
ext-fake-images: Fake academic stock images (charts, maps, diagrams), including in paid image databases
ext-voice-deepfakes: Voice deepfakes (AI-cloned audio impersonating known individuals)
ext-video-deepfakes: Video deepfakes (AI-generated or AI-altered video of real people)
ext-sextortion-deepfakes: Sextortion / nudification deepfakes (AI-generated explicit imagery used for extortion)
ext-blurry-image-enhance: GenAI enhancement of blurry or low-resolution real images (hallucinated details presented as real)
ext-property-deepfakes: GenAI modification of property images (homes, vehicles, etc.) used in listings or insurance claims
ext-refund-fraud: Refund fraud through deepfakes (AI-fabricated purchase evidence or product return imagery)
ext-work-invoice-fraud: Work and invoicing fraud through deepfakes (AI-fabricated proof of completed work or services)
ext-legal-slop: Client-prepared "legal slop": AI chatbot-drafted documents, letters, or legal arguments submitted to your organization
Conditional: is-legal-context = true
3. expert-witness-ai-use (single-select)

Has this firm considered the risk that an expert witness may have used generative AI in preparing their report, analysis, or testimony?

Expert witnesses may use AI tools to draft reports, analyze data, or generate visualizations without disclosure. AI-generated content in expert testimony could introduce hallucinated facts, unverifiable analysis, or biased conclusions that may not withstand cross-examination.

Note: Consider requiring expert witnesses to disclose any use of generative AI tools in their engagement agreements.

expert-policy-exists: Yes, we have a policy addressing expert witness AI use
expert-policy-planned: Not yet, but we plan to address this
expert-policy-none: No, we have not considered this
4. third-party-mitigation (text-area)

What processes will your organization implement to verify the authenticity of externally-sourced content?

Large language models (like ChatGPT, Claude, and Gemini) are the most visible form of generative AI. This section addresses which LLM tools are approved, how they should be used, and specific high-risk use cases like hiring.
1. llm-latest-enforcement (single-select)

Does this firm enforce the use of the latest LLM within a particular tool?

Note: Choice of LLM model may be determined by your AI software provider rather than being a user-configurable option.

latest-enforced: Yes, always use the latest model
latest-recommended: Recommended but not enforced
latest-flexible: Flexible based on use case
Conditional: llm-latest-enforcement != latest-enforced
2. llm-older-model-reasons (checkbox-list)

Are there reasons why an older model would be used?

Examples: GPT-5, GPT-4o, GPT-4b micro (medical research), Claude Opus 4.1, Claude Sonnet 4.5, Meta Llama 4, Meta Code Llama, Meta Llama Guard

Note: Choice of LLM model may be determined by your AI software provider rather than being a user-configurable option.

older-cost: Cost considerations
older-compatibility: Compatibility with a tested workflow
older-stability: Stability/reliability concerns with newer models
older-compliance: Compliance or regulatory requirements
3. chatbot-permitted (single-select, required)

Does this firm permit employee use of any LLM-enabled chat interfaces for work purposes on work devices?

chatbot-yes: Yes, specific approved chatbots only
chatbot-no: No, chatbots are not permitted
not-sure-chatbot-yes: Not sure; defaulting to "Yes" for now (defaulting to the more inclusive option so safeguard questions are not skipped)
Conditional: chatbot-permitted = chatbot-yes | not-sure-chatbot-yes
4. chatbot-approved-list (text-area)

Which chatbots are approved?

Reminder: Approved chatbot(s) should be used only by employees who have received training on GenAI risks such as hallucinations, data privacy, bias, prompt injection, and sycophancy.

Conditional: chatbot-permitted = chatbot-yes | not-sure-chatbot-yes
5. chatbot-reminders-acknowledged (checkbox-list)

Acknowledge the following best practices for chatbot use:

reminder-sensitive: Sensitive data should only be entered if the approved chatbot offers additional information security features (e.g., specialty legal tool, HIPAA compliance, on-premises "on-prem" inference, local LLMs, or dedicated servers)
reminder-masked: Masked data should be used when using general-purpose LLMs
reminder-memory: ChatGPT "Memory" features and similar should be disabled
reminder-training: Opt out of any permissions allowing chats to be used for AI model training purposes
reminder-links: Ensure that when you share links for citations, they are to external webpages rather than to the chat itself, which may compromise confidentiality
6. llm-writing-permitted (single-select)

Does this firm permit trained employee use of approved LLM-enabled writing aids?

Reminder: Employees must review all AI-generated text and are ultimately responsible for its accuracy.

writing-yes: Yes, with required human review
writing-limited: Limited use cases only
writing-no: No, AI writing aids are not permitted
Conditional: is-legal-context = true
7. ediscovery-permitted (single-select)

Does this firm permit employees to use general purpose or specialty LLM-enabled tools for the purposes of analyzing and classifying eDiscovery documents as relevant or irrelevant?

ediscovery-yes: Yes, with appropriate safeguards
ediscovery-no: No, eDiscovery classification must be human-only
not-sure-ediscovery-yes: Not sure; defaulting to "Yes" for now (defaulting to the more inclusive option so safeguard questions are not skipped)
Conditional: ediscovery-permitted = ediscovery-yes | not-sure-ediscovery-yes
8. ediscovery-quality-tests (text-area)

What tests do you have in place to score the quality of the output?

Conditional: ediscovery-permitted = ediscovery-yes | not-sure-ediscovery-yes
9. ediscovery-batching (text-area)

What batching methods would be used to mitigate context rot (degraded model accuracy as the context window fills)?

Conditional: ediscovery-permitted = ediscovery-yes | not-sure-ediscovery-yes
10. ediscovery-security (text-area)

What security measures are in place to identify potential indirect prompt injection influencing the classification?

11. hiring-llm-prohibited (single-select, required)

Does this firm/company prohibit employee use of all general purpose or specialty LLM-enabled hiring tools for the purposes of analyzing resumes or screening and selecting from a pool of job applicants for interviews and hiring decisions?

Note (Recommendation 4): LLM hiring tools are not recommended due to unexplainable biases and prompt injection risks.

hiring-prohibited: Yes, LLM hiring tools are prohibited (Recommended)
hiring-allowed: No, LLM hiring tools are permitted with safeguards
AI features are increasingly embedded in websites, search engines, email clients, and coding tools. This section addresses which of these embedded AI capabilities your organization will permit.
1. website-ai-design (single-select, required)

May an approved generative AI web design tool or coding platform be used to create and modify the company website?

Examples: Wix AI, Squarespace AI, WordPress AI blocks, Framer AI

Reminder: Avoid sharing sensitive data.

website-ai-yes: Yes, approved AI tools may be used
website-ai-no: No, manual design only
2. website-ai-search (single-select, required)

Will this firm (or its contractors) be permitted to add generative AI search features to the firm's website?

Examples: Algolia AI, custom ChatGPT widget, Intercom AI

website-search-yes: Yes
website-search-no: No
website-search-evaluate: To be evaluated
3. website-scraping (single-select, required)

Will this firm allow for generative AI tools to scrape the firm's website?

Note: Add instructions to robots.txt prohibiting AI bots from scraping and training. This may limit visibility in generative AI search. For Cloudflare users, this option may be toggled on/off.

scraping-allow: Yes, allow AI scraping
scraping-deny: No, block AI scraping via robots.txt
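If scraping is to be blocked, the robots.txt note above can be sketched as follows. The user-agent tokens are ones these vendors currently document; the list changes over time, and robots.txt is advisory rather than an enforcement mechanism:

```text
# Disallow documented AI training/scraping crawlers site-wide
User-agent: GPTBot
User-agent: ClaudeBot
User-agent: Google-Extended
User-agent: CCBot
Disallow: /
```

Grouping several User-agent lines over one Disallow record is valid under the robots exclusion standard (RFC 9309); listing each crawler in its own record works equally well.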
4. search-engine-ai (single-select, required)

Does this firm permit trained employees to use LLM-enabled search engine features for work purposes on work devices, noting that summarization may not be faithful to the underlying citations and requires review of sources?

Examples: Google AI Overviews, Bing Chat/Copilot, Perplexity

search-ai-yes: Yes, with required source verification
search-ai-no: No, use traditional search only
5. website-embedded-search (single-select, required)

Does this firm/company permit trained employee use of LLM-enabled search tools embedded within particular websites, noting that summarization may not be faithful to the underlying citations and requires review of sources?

Examples: AI search bars on vendor sites, knowledge bases with AI Q&A

embedded-yes: Yes, with required source verification
embedded-no: No
6. specialty-search (single-select, required)

Does this firm/company permit trained employee use of LLM-enabled search and summarization functions embedded within specialized research tools (e.g., Google Scholar or LexisNexis)?

Examples: Google Scholar AI summaries, Westlaw Edge AI, LexisNexis+ AI, Casetext CoCounsel

Note: Summarization may not be faithful to the underlying citations and requires review of sources.

specialty-yes: Yes, with required source verification
specialty-no: No
7. coding-environments (single-select, required)

Does this firm/company prohibit all LLM-enabled coding environments, except for specifically approved tools?

This includes, but is not limited to: GitHub Copilot, Claude Code, ChatGPT Codex, Google Antigravity, Microsoft Copilot, Meta Code Llama.

coding-prohibited: Yes, all AI coding tools are prohibited except approved ones
coding-allowed: No, AI coding tools are generally permitted
coding-na: N/A (no programming activities)
8. local-llms (single-select, required)

Is downloading and running approved "local" LLMs on work devices permitted?

Examples: Ollama, LM Studio, GPT4All, llama.cpp

Note: Employees using local LLMs for work purposes should acknowledge that these smaller LLMs involve a tradeoff between privacy and performance; local LLMs may run on a laptop but will not have the same accuracy as a frontier LLM.

local-yes: Yes, approved local LLMs are permitted
local-no: No, local LLMs are not permitted
9. email-assistants-work (single-select, required)

Email Assistants on Work Devices: Does this firm permit LLM-enabled email summarization and personal assistant "agents" for work purposes or on work devices?

Note: This use case is considered high-risk due to information security research indicating the potential for data exfiltration. It is recommended that this firm prohibit all employee use.

email-work-prohibited: Prohibited (Recommended)
email-work-allowed: Allowed with restrictions
10. email-assistants-personal (single-select, required)

Email Assistants on Personal Devices with Work Email: Does this firm permit employees to access work email from personal devices?

Note: If so, it may be difficult to ensure that employees do not use LLM-enabled email summarization and personal assistant "agents." Due to the risk of data exfiltration, it is recommended that employees not access work email from any personal device that has LLM features enabled.

email-personal-prohibited: Personal devices with LLM features should not access work email
email-personal-allowed: Allowed with training on risks
email-personal-no-access: No personal device access to work email permitted
11. scheduling-assistants (single-select, required)

Scheduling Assistants: Does this firm permit AI scheduling assistant "agents" that can create calendar events without a human-in-the-loop?

Examples: x.ai, Reclaim.ai, Clockwise AI, Motion

Note: AI scheduling assistant "agents" that can create calendar events without a human-in-the-loop, especially if documents can be attached to those events, carry similar risks to email agents noted above.

scheduling-prohibited: Prohibited (Recommended)
scheduling-allowed: Allowed with restrictions
AI meeting transcription is one of the most common — and most contentious — AI use cases. This section covers your policy for internal and external meetings, consent requirements, and approved tools.
1. internal-transcription-permitted (single-select, required)

Does this firm permit LLM-enabled meeting transcription tools internally?

internal-yes: Yes, with review process
internal-no: No, human transcription only
not-sure-internal-yes: Not sure; defaulting to "Yes" for now (defaulting to the more inclusive option so review process questions are not skipped)
Conditional: internal-transcription-permitted = internal-yes | not-sure-internal-yes
2. internal-review-process (text-area)

If so, AI transcripts may only be used as a first draft: what is the review process to finalize minutes?

Note: For meetings requiring official minutes, it is recommended that a designated individual take minutes. AI transcripts may be used to aid in drafting the minutes when details need to be clarified, but contemporaneous notes should be taken and used as the basis of official minutes. AI transcripts may hallucinate details or mistake which individual was the speaker.

Conditional: internal-transcription-permitted = internal-yes | not-sure-internal-yes
3. internal-transcription-tools (text-area)

What tools are permitted for internal meeting transcription?

4. external-transcription-policy (single-select)

When attending meetings with external parties, shall employees request that official meeting minutes be taken by an attendee?

external-request-human: Yes, request human note-taking
external-flexible: Flexible based on meeting sensitivity
external-no-request: No specific request required
Conditional: third-party-transcription-consent != consent-deny
5. external-consent-conditions (text-area)

If consent for AI transcription is granted, what are the conditions of proper use of AI transcription and in what situations would AI transcription not be permissible due to concerns about sensitive data (e.g., risks to IP)?

AI-enabled wearables (such as Meta Ray-Ban smart glasses and Humane AI Pin) and always-on listening devices can passively capture audio and video in your facilities. This section addresses your policy for these devices to protect legal confidentiality, intellectual property, and trade secrets.
1. facility-recording-policy (single-select, required)

Does this organization prohibit AI wearables (e.g., Meta Ray-Ban smart glasses, Humane AI Pin) and always-on listening/recording devices in its facilities?

Note: AI wearables can passively record conversations, capture images of documents, and transmit data to cloud services — often without visible indicators. In environments where attorney-client privilege, trade secrets, HIPAA-protected information, or other confidential material is discussed, these devices pose significant risk.

recording-prohibited: Prohibited in all facility areas
recording-common-areas-only: Allowed in common areas only (lobbies, break rooms)
recording-allowed-with-restrictions: Allowed with restrictions
no-policy-yet: No policy yet; need to establish one
2. visitor-recording-notification (single-select, required)

Are visitors, clients, and third parties notified of the organization's recording device policy before entering facilities?

notify-checkin: Yes, at check-in
notify-signage: Yes, via signage
notify-agreements: Yes, in engagement or visitor agreements
no-notification: No notification currently
Conditional: facility-recording-policy != no-policy-yet
3. recording-enforcement (checkbox-list)

How is the recording device policy communicated and enforced?

enforce-signage: Physical signage at entry points
enforce-handbook: Employee handbook or policy manual
enforce-visitor-checkin: Visitor check-in procedure
enforce-client-agreements: Client engagement agreements
enforce-contractor-agreements: Contractor and vendor agreements
enforce-none: No formal enforcement yet
Conditional: facility-recording-policy != no-policy-yet
4. recording-exceptions (text-area)

Is there a process for granting exceptions to the recording device policy (e.g., accessibility accommodations, approved research use)? If so, describe it.