AI Policy
Introduction
Generative AI is a transformational technology that is, and will be, an increasingly useful and valuable tool for most professionals in the years to come. The responsible use of AI in our work will enhance our ability to serve our members and advance the public interest.
However, the use of AI must always be subject to human judgment and oversight to avoid bias, misuse, and inadvertent harm. This technology is rapidly evolving, and many agencies, professionals, and organizations are exploring generative AI's potential, its current issues, and its longer-term implications.
The Alliance for Orthopedic Executives (AAOE) is committed to harnessing the power of Artificial Intelligence (AI) to advance the AAOE mission while upholding ethical standards and prioritizing responsibility. This AI policy outlines guidelines for the use of AI to perform work and emphasizes the importance of individual author responsibility in all AI-generated outputs.
Ethical AI Framework
AAOE will adhere to a well-defined ethical framework for AI development, deployment, and usage. This framework includes the following principles:
Transparency
AAOE will strive to be transparent about the use of AI in our work. When AI is used to generate most textual output (images are exempt), AAOE will disclose it by appending the statement: “Note: Generative AI was used in part to generate this content.” at the end of the content. Do not include this statement on AI-generated emails or marketing copy.
Tracking
Because the AI landscape is constantly changing, particularly with regard to copyright and legal liability, we will track the use of AI to generate output. Please log your use in this spreadsheet.
Fairness and Equity
Authors are responsible for reviewing AI-generated outputs for bias, hallucinations (instances where artificial intelligence fabricates “facts”), and discrimination, and for actively monitoring and rectifying any unintended biases in those outputs.
- Do not use generative AI to create or spread deepfakes, misinformation, or disinformation.
- Beware of biases incorporated in AI-generated output, both in writing and in developing imagery for a campaign. Do not use generative AI as a replacement for diverse experiences, insight, or engagement. Do not use generative AI tools to create imagery, likenesses, or avatars that create the appearance of diversity instead of working with diverse talent.
Privacy and Data Security
AAOE will prioritize data privacy and security, ensuring that AI models are trained on anonymized data obtained with consent, and that access to sensitive information is strictly controlled.
Where possible, the security of data used in a generative AI tool will be evaluated by the Chief Marketing & Membership Officer, and a recommended tool list will be compiled (see the Approved Tools section below) to minimize security and privacy risk.
The following types of information should never be uploaded to an AI tool:
- Sensitive demographic information such as age, race, and gender
- Personally identifiable information (PII)
- Budgets
- Financials
- Board/executive committee minutes or notes
- Contracts/legal documents
- Personnel-related documents (including resumes)
- Generally, any information that is protected by law/regulation or where loss of confidentiality could have a significant adverse impact on our mission, safety, finances, or reputation.
Many paid generative AI tools do not use the data you input to train their large language models, so when in doubt, use a paid tool, anonymize your association's name, or don't use AI. Be cautious about entering the following into a generative AI tool:
- Focus group notes
- Member/vendor interview notes
- Membership engagement data
- Competitive advantages of your association
Ask for Permission
We will also protect the integrity of our members' and vendors' information. When in doubt, ask your supervisor whether you have permission to use member- or vendor-provided content in a generative or other AI tool; permission may be granted through our regular speaker agreement. Copyright and other issues may preclude us from using AI tools. This applies to transforming conference or webinar sessions into blog posts, creating white papers from subject matter experts' material, and similar content. After permission is obtained and a generative AI tool is used to create new content from an education session or other member-provided content, a member of the AAOE professional team should reach out to that member to review the final output. The Accountability and Author Responsibility sections below explain the reasons for this step.
Copyright
If a work product, such as a logo, needs to be copyrighted, do not use an AI tool to generate it, whether text or image. At this time, AI-generated content is not protectable by copyright.
Accountability
We will hold individuals accountable for decisions made based on AI-generated outputs and ensure that the responsibility for any consequences remains with the author of the work.
Author Responsibility
AAOE recognizes that while AI can significantly enhance productivity and decision-making, it is ultimately a tool that requires human oversight and responsibility. Therefore, the following guidelines for author responsibility will be strictly adhered to:
- Human-in-the-Loop Approach: AI-generated outputs will always require human review and approval before implementation, especially in high-stakes situations or tasks with potential societal impact.
- Fact-check AI output: AI tools can generate falsehoods (also called hallucinations) fairly easily, so check the output. If the content fed into the AI tool comes from a member, vendor, or other subject matter expert, that person should review the output to check specifically for hallucinations.
- Proofread: AI tools will not follow the AAOE writing style guide, so be sure that our naming conventions and other writing guidelines are followed.
- Decisive Authority: Authors retain final authority over AI-generated outputs and will not absolve themselves of accountability by blaming AI algorithms for unintended outcomes. For example, if AI generates a conclusion, it is the author's responsibility to verify that the conclusion is true.
- Training and Awareness: Authors and employees using AI in their work will receive training on AI ethics, biases, and limitations, to make informed and responsible decisions.
- Regular Audits: AAOE may conduct audits and evaluations of AI systems to assess their performance, fairness, and overall impact, and ensure compliance with our ethical framework.
Translation
Do not rely on generative AI tools to translate or transcreate (adapt a message from one language to another) documents into other languages. The translation or transcreation may not be accurate.
Data Usage and Ownership
AAOE will strictly adhere to data usage and ownership guidelines:
- Data Responsibility: Data used for training AI models will be obtained legally and responsibly, with proper consent and protection of user privacy.
- Open Source and Collaboration: Whenever possible, we will contribute AI models and tools to the open-source community to encourage transparency, collaboration, and accountability, as our resources allow.
Collaboration and External Partnerships
AAOE recognizes the significance of collaboration with external partners in the AI space. When collaborating with external organizations:
- Shared Values: We will partner with entities that share our commitment to ethical AI usage and responsible decision-making.
- Clear Agreements: All partnerships will have explicit agreements that outline data usage, responsibilities, intended use, and shared ethical principles.
Continuous Improvement
Our nonprofit is committed to continuous improvement in our AI policies and practices:
- Feedback Mechanism: We will establish a feedback mechanism to receive input from stakeholders on AI-related practices and concerns.
- Adaptive Governance: The AI policy will be periodically reviewed and updated to align with the latest advancements, regulations, and ethical standards.
Approved Tools
- Otter.ai for taking notes during non-legally protected meetings (i.e., not for board, executive committee, or Council meetings).
- Zoom’s AI meeting companion for taking notes during non-legally protected meetings (i.e., not for board, executive committee, or Council meetings).
- Fathom Notetaker
- Jasper.ai and the paid (licensed) version of ChatGPT for writing general marketing content. This includes blog posts, social posts, emails, etc.
- Scribe for creating standard operating procedures.
- Predis.ai for creating and posting social media posts.
- Loom for creating videos.
- Fliki for creating videos from images.
If you’d like to try other tools, great! Let’s chat about the tool and make sure it’s safe, licensed, transparent, and will be effective in meeting our needs. Contact your manager to discuss.
Conclusion
By adopting this AI policy, our nonprofit aims to leverage AI's potential while ensuring that the ultimate responsibility lies with the human authors. We commit to conducting our AI-based work ethically, transparently, and responsibly, fostering a positive impact on society.
If you have questions or concerns, please reach out to your manager or director for guidance.