AI Security

We believe that companies integrating AI into products should implement adequate security controls and inform their users about how they ensure safe use of AI. This is why we detail our AI security programme below, answering the excellent questions suggested by SafeBase.io to facilitate any security review. This programme is meant to evolve rapidly in 2024.

Scope #

This AI security section covers the ISMS Policy Generator exclusively. If you’re here for the ISO 27001 Copilot (our chatbot assistant), you will find the ISO 27001 Copilot Trust Center here, presenting all the AI risk management and security measures implemented specifically for that product.

We believe trust and safety documentation should be specific to the systems it covers. We hope you appreciate this approach.

Short Version – AI Security Measures at ISMS Policy Generator #

ISMS Policy Generator is committed to the following principles when using AI in its services. The scope of this policy is mainly the generators that deliver your policies. It does not cover, for example, the fact that our live chat can use AI on our support documentation to provide you with an answer.

  • Transparency on how AI is integrated into our services: Our service relies on AI for generating your policies, utilizing OpenAI’s API and MistralAI. These technologies are fundamental to our service, enabling the creation of customized information security policies. Currently, by default, our generators call the OpenAI API (an illustrative sketch of this call pattern follows this list).
  • Inherent AI Usage: The nature of our service integrates AI as an essential feature, and customers engage with it as a key part of the policy generation process. Opting out of AI features is not feasible due to their integral role (there wouldn’t be policies otherwise). We hope you appreciate our transparency on this aspect.
  • Data Handling with AI: Customer data is used strictly for generating policies, without contributing to the training of AI models (yes, our AI providers state that they do not train their models on inputs and outputs submitted through their APIs). Both the OpenAI and MistralAI APIs process data securely, ensuring it is used only for the intended documentation.
  • AI Output Transparency: Our generators are instructed to turn your inputs into well-structured policy paragraphs, aligned with ISO 27001. That’s the task they’ve been assigned, and it is the only thing they will do.
  • Robust Data Security: Strict privacy rules ensure user data confidentiality, supported by AES-256 encryption of your data at rest and stringent data security controls throughout the entire chain (which is what matters). We treat your information as confidential by default.
  • Privacy: Our data protection controls, along with minimization of personal data collection, align with GDPR and other data protection standards, safeguarding user information throughout the process. For more information, check our Privacy Policy.
  • Trusted AI Providers: By using reputable AI sources like OpenAI and MistralAI, we ensure the reliability and accuracy of the AI-generated outputs. Our commitment extends to regular updates and monitoring of these AI integrations.
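
To make this concrete, below is a minimal, illustrative sketch of how a generator can call the OpenAI API. The model name, prompt wording, and function name are hypothetical stand-ins, not our production code:

```python
# Illustrative sketch of a policy-generation call through the OpenAI API.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # key kept out of source code

def generate_policy_section(user_inputs: str) -> str:
    """Turn questionnaire answers into an ISO 27001-aligned policy section.

    Per OpenAI's API terms, data sent this way is processed for the
    request only and is not used to train their models.
    """
    response = client.chat.completions.create(
        model="gpt-4",  # hypothetical model choice
        messages=[
            {
                "role": "system",
                "content": "Turn the user's inputs into well-structured, "
                           "ISO 27001-aligned information security policy paragraphs.",
            },
            {"role": "user", "content": user_inputs},
        ],
    )
    return response.choices[0].message.content
```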

Incorporating these AI technologies, ISMS Policy Generator upholds a high standard of data security and privacy, while effectively utilizing AI to deliver quality, customized information security policies.


Do the organization’s personnel and partners receive AI risk management training? #

Yes, Better ISMS, the parent company of ISMS Policy Generator, ensures all personnel and partners are well-trained in AI risk management. As leaders in information security and AI, we provide regular, updated training on AI security awareness. Our courses, including upcoming ones on secure AI deployment in startups, align with our comprehensive internal policies.

Our internal AI security documentation includes a specific ChatGPT security policy and detailed procedures for integrating AI APIs, ensuring our team is adept in managing AI risks and adhering to relevant standards and agreements.


Will customer data be used to train artificial intelligence, machine learning, automation, or deep learning? #

No, customer data collected by ISMS Policy Generator is not used to train artificial intelligence, machine learning, automation, or deep learning models.

Our usage of the OpenAI and MistralAI APIs strictly involves processing customer inputs to generate policies, without contributing to the training of these AI models.

We are committed to the privacy and integrity of customer data, ensuring it is solely used for the intended purpose of providing our service.

Does the organization have an AI Development and Management Policy? #

Currently, our organization does not have a separate AI Development and Management Policy.

Instead, the use of AI, primarily through API calls to OpenAI’s models, is integrated within our standard operational procedures for application development.

These procedures include robust controls to ensure AI integration is secure and efficient:

  • Authentication Requirements: All API calls require secure authentication to prevent unauthorized access (see the illustrative sketch after this list).
  • Data Leakage Prevention: We take meticulous measures to ensure no sensitive information is leaked through API call headers or other means.
  • Secure Data Transmission: The transmission of data to and from the AI models is encrypted and closely monitored.
  • Regular Security Audits: Our operational procedures include regular security audits to identify and mitigate potential vulnerabilities.
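
As a hedged illustration of how the first three controls combine in practice, the sketch below shows an outbound AI API call with authentication from the environment, no customer data in headers, and TLS-encrypted transmission. The function and endpoint shown are examples, not our production code:

```python
# Illustrative outbound AI API call applying the controls listed above.
import os
import logging
import requests

API_URL = "https://api.openai.com/v1/chat/completions"  # HTTPS only: TLS in transit

def call_ai_api(payload: dict) -> dict:
    # Authentication: the key is read from the environment at call time,
    # never hard-coded or committed to source control.
    headers = {
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "Content-Type": "application/json",
    }
    # Data leakage prevention: headers carry only the credential and the
    # content type; customer data travels solely in the encrypted request
    # body, and logs record metadata, never payloads or keys.
    response = requests.post(API_URL, json=payload, headers=headers, timeout=60)
    logging.info("AI API call completed with status %s", response.status_code)
    response.raise_for_status()
    return response.json()
```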

We align our practices with the highest standards in AI security and are committed to adhering to international standards like ISO 27001. Additionally, we are actively monitoring ISO/IEC 42001:2023, the new AI management system standard, to integrate any relevant guidelines into our processes.

Does the organization have policies and procedures in place to define and differentiate roles and responsibilities for human-AI configurations and oversight of AI systems? #

Yes, our organization has established policies and procedures for managing human-AI configurations and the oversight of our AI systems, primarily used for generating information security policies from user inputs. The key aspects of our approach are:

  • Oversight: The founder directly oversees the integration of AI in our generators, ensuring a high level of scrutiny and control.
  • Testing and Quality Assurance: We conduct intensive testing before and after releasing or updating our generators. This includes rigorous checks when changes are made to API calls, to ensure consistent and reliable output (an illustrative test sketch follows this list).
  • Procedure Consistency: Our procedures are designed to ensure that instructions for API calls are consistent across different generators. This helps maintain uniformity in our service delivery.
  • Regular Procedure Review: Procedures related to AI integration and management are reviewed and updated quarterly. This allows us to adapt to new developments and maintain alignment with best practices in AI and information security.
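
For illustration, a regression check of the kind described above might look like the sketch below; the generate_policy_section helper (from the earlier sketch) and the expected structure are hypothetical assumptions, not our actual test suite:

```python
# Illustrative regression check run whenever an API call changes: it
# verifies that a generator still returns a well-formed policy section.
def test_generator_output_structure():
    output = generate_policy_section("Sample questionnaire answers")  # hypothetical helper
    assert output.strip(), "generator returned an empty policy section"
    # A generated section should keep the structure the model is
    # instructed to produce, e.g. more than one paragraph.
    assert len(output.split("\n\n")) >= 2, "expected multiple paragraphs"
```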

This structured approach ensures that we effectively manage our AI integrations, with clear responsibilities and robust oversight mechanisms in place.


Who is the third-party AI technology behind your product/service? #

Our product leverages advanced AI technology from two key third-party providers:

  1. OpenAI API: A major component of our service, OpenAI’s API, offers cutting-edge AI capabilities. Known for its sophisticated language models, it plays a crucial role in generating accurate and comprehensive information security policies based on user inputs.
  2. MistralAI API: To complement our service and accommodate regional preferences, we have also started integrating MistralAI, an EU-based AI provider. This ensures we have a diverse range of AI capabilities and align with specific regional requirements and standards.

Utilizing these two distinct yet complementary AI technologies allows us to offer a robust and versatile service, catering to a wide range of needs and regulatory environments.

Has the third-party AI processor been appropriately vetted for risk? If so, what certifications have they obtained? #

Regarding OpenAI, our current main AI provider, their API complies with the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). Additionally, the OpenAI API has been evaluated by a third-party security auditor and is SOC 2 Type 2 compliant. This compliance demonstrates OpenAI’s commitment to maintaining high standards of data security and privacy. The OpenAI API undergoes annual third-party penetration testing to identify and address security weaknesses, ensuring a robust security posture against potential threats.

We have also vetted MistralAI through a thorough review of their privacy policy and data processing agreement; they are likewise strongly committed to data protection and, in particular, GDPR compliance.

Does the organization implement post-deployment AI system monitoring, including mechanisms for capturing and evaluating user input, appeal and override, decommissioning, incident response, recovery, and change management? #

Our organization has established robust procedures for post-deployment AI system monitoring:

  1. User Feedback and Appeals: We actively engage with user feedback through a contact form in our emails, live chat, and direct email support. This allows us to address any issues or inputs related to AI-generated content effectively.
  2. Incident Response and Recovery: We have alert systems for issues in the policy sending process (a minimal sketch of this alerting pattern follows this list). While we are continually working to enhance our monitoring, especially for API call issues, current incidents are promptly addressed through detailed troubleshooting and testing.
  3. Change Management Process: Changes that impact users are communicated via email. This ensures users are kept informed about updates and modifications in our AI systems.
  4. Decommissioning of AI Components: Our decommissioning process involves a careful phase-out of older AI components. We introduce updates in a controlled test environment, followed by a gradual replacement of older models, ensuring seamless transition and service continuity.
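
As a hedged illustration of point 2, the sketch below wraps the policy-sending step so that failures are logged and escalated to an alert channel; send_policy_email and notify_on_call are hypothetical helper names, not our actual implementation:

```python
# Illustrative alerting wrapper around the policy-sending step.
import logging

def send_policy_email(recipient: str, policy_document: str) -> None:
    """Placeholder for the actual delivery step (e.g. an email provider call)."""
    raise NotImplementedError

def notify_on_call(message: str) -> None:
    """Placeholder for the alert channel (e.g. a webhook to an ops tool)."""
    print(f"ALERT: {message}")

def send_policy_with_alerting(recipient: str, policy_document: str) -> None:
    try:
        send_policy_email(recipient, policy_document)
    except Exception as exc:
        # Failures are logged with context and escalated, then re-raised
        # so the issue is never silently swallowed.
        logging.exception("Policy delivery to %s failed", recipient)
        notify_on_call(f"Policy sending failure for {recipient}: {exc}")
        raise
```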

Our approach ensures that our AI systems are not only effective but also adaptable and responsive to user needs and technological advancements.

Does the organization communicate incidents and errors to relevant AI actors and affected communities and follow documented processes for tracking, responding to, and recovering from incidents and errors? #

Our organization has a proactive approach to managing incidents and errors related to AI systems:

  1. Community Engagement: We are active in the OpenAI community forum, which serves as one of the platforms for discussing, reporting, and staying informed about AI-related incidents and developments.
  2. Documented Incident Response Policy: We have a general incident response policy in place, which is regularly updated to address potential AI-specific incidents. This policy guides us in tracking, responding to, and recovering from incidents effectively.
  3. Communication with Stakeholders: In the event of incidents affecting users or requiring engagement with our AI providers, we have protocols to communicate and address these issues promptly. While such instances have not occurred, we are prepared to take necessary actions to ensure transparency and resolve any problems efficiently.

Our commitment to actively managing AI-related incidents is part of our broader dedication to maintaining a secure and reliable service for our users and partners.

Does your company engage with generative AI/AGI tools internally or throughout your company’s product line? #

Yes, our company actively engages with generative AI tools, both internally and within our product line. A prime example is our interaction with OpenAI’s ChatGPT for various tasks, including drafting responses like this one. When utilizing these AI tools, we adhere to strict guidelines to ensure data privacy and security:

  • No Personal or Sensitive Information: We never input personal or sensitive information into these AI systems. Our usage is confined to general queries and internal process assistance, ensuring that user data and sensitive information remain confidential.
  • Internal Process Enhancement: The AI tools are primarily used to enhance our internal processes, helping us streamline operations and improve efficiency.
  • Product Development: In our product line, AI tools contribute to innovation and development, but always under the framework that prioritizes user privacy and data security.

This approach allows us to leverage the benefits of advanced AI technologies while upholding our commitment to data protection and privacy standards.


If generative AI is incorporated into the product, please describe any governance policies or procedures. #

Our ISMS documentation generator, which utilizes OpenAI’s API, indeed handles information that can be sensitive or confidential. Recognizing this, our governance policies and procedures are rigorously designed to ensure the utmost security:

  1. Robust Security of OpenAI API: We rely on the inherent security measures of the OpenAI API, which include advanced encryption and strict data handling protocols. This ensures that any sensitive information processed through the API is safeguarded against unauthorized access and breaches.
  2. Stringent Internal Procedures: Our internal procedures are tailored to handle sensitive information security management data and include robust security controls. Your account is protected by MFA. No one else can access your data or generated policies. Our database is encrypted at rest on AWS RDS (AES-256; an illustrative configuration sketch follows this list). Our app is constantly monitored by a security provider. We implemented key security controls from the design of the tool through to its deployment.
  3. Data Handling Compliance: We adhere to the highest standards of data privacy and security, in line with GDPR and other relevant regulations. This compliance is reflected in how we handle, process, and store any data passing through the AI systems. Further details in our privacy policy.
  4. Continuous Monitoring: Our systems are under continuous monitoring to ensure our security measures are effective and up-to-date, especially when dealing with sensitive information.
  5. Employee Training and Awareness: Our team is regularly trained on the latest data security practices and the responsible use of AI technologies, emphasizing the importance of handling sensitive information with utmost care.
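
As a hedged illustration of the encryption-at-rest control in point 2, the snippet below shows how an AWS RDS instance can be provisioned with storage encryption enabled via boto3; all identifiers are hypothetical and this is a sketch, not our deployment code:

```python
# Illustrative boto3 call provisioning an RDS instance with AES-256
# encryption at rest enabled (key management handled by AWS KMS).
import os
import boto3

rds = boto3.client("rds")
rds.create_db_instance(
    DBInstanceIdentifier="policy-generator-db",  # hypothetical identifier
    Engine="postgres",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,
    MasterUsername="app_admin",
    MasterUserPassword=os.environ["DB_MASTER_PASSWORD"],  # never hard-coded
    StorageEncrypted=True,  # AES-256 encryption at rest
)
```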

These governance policies ensure that while we leverage the capabilities of generative AI, we maintain a steadfast commitment to the security and confidentiality of the information processed through our systems.

Controls to Ensure Secure Data Transmission and Segmentation #

Our platform, developed using advanced web application technology, upholds rigorous standards to safeguard data transmission and maintain distinct data segmentation:

  1. Data Encryption: We employ AES-256 encryption for data in transit, ensuring secure communication and protection against unauthorized access.
  2. User-Specific Access in Dashboard: In our user dashboard, workflows are designed to verify that the current user is the owner of a specific policy before allowing access to policy generation forms (see the illustrative sketch after this list). This check ensures that users only access their own data and inputs, maintaining strict data segmentation.
  3. Privacy Rules for Data Visibility: We have established strong privacy rules within our application. These rules are tailored to prevent any cross-user data visibility, ensuring that each user’s data is private and accessible solely to them.
  4. API Security: Secure authentication protocols are in place for all API interactions, particularly with critical services. This maintains the integrity and confidentiality of data as it is processed.
  5. Robust Access Control: Access to our platform is fortified with multi-factor authentication (MFA), adding an additional layer of security. This measure significantly reduces the risk of unauthorized access.
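
To illustrate the ownership check in point 2, here is a minimal sketch; the Policy record and lookup are hypothetical stand-ins for our application’s data layer, not production code:

```python
# Illustrative ownership check: a policy generation form is served only
# if the requesting user owns that policy.
from dataclasses import dataclass

@dataclass
class Policy:
    policy_id: str
    owner_id: str
    content: str

POLICIES: dict[str, Policy] = {}  # stand-in for the encrypted database

def open_policy_form(current_user_id: str, policy_id: str) -> Policy:
    policy = POLICIES.get(policy_id)
    if policy is None or policy.owner_id != current_user_id:
        # Identical response whether the policy is missing or belongs to
        # another user, so requests cannot probe for other users' policies.
        raise PermissionError("Policy not found or not accessible")
    return policy
```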

Through these comprehensive security measures, we ensure the safe transmission of data and maintain a high level of data segmentation, effectively safeguarding each customer’s information within our platform.

Updated on 25 May 2024