VISO TRUST
Trustworthy AI
Launched in 2020, VISO TRUST, powered by Artifact Intelligence, transforms third-party cyber risk management through AI. In just a 5-minute web session, our AI engages third parties and extracts relevant control information from their security artifacts. The platform automates due diligence, delivering near-100% vendor adoption, a 90% reduction in assessment time, and continuous monitoring for compliance. With expanded features like generative AI, lifecycle automation, risk metrics dashboards, and rapid risk analysis, VISO TRUST sets a new standard, reducing third-party security risk by over 95% and enabling organizations to make informed decisions swiftly and responsibly. Our groundbreaking technology and platform expansions were announced on October 25, 2023, ushering in a new era of responsible and trustworthy AI use in security and risk management.
At VISO, we are incredibly excited about the value and efficiency that AI brings to our customers, while recognizing the associated risks and complexities. We believe that the benefits of AI should be accessible to everyone. However, merely delivering the technological capabilities of AI is not sufficient; we also bear a responsibility to ensure that AI is trustworthy for all. We take this responsibility seriously and are committed to security and resiliency, transparency, responsibility, and accountability in our use of AI across our products and features.
Our AI program is primarily aligned with the NIST AI Risk Management Framework, OWASP guidance (the Top 10 for Large Language Model Applications and the AI Security and Privacy Guide), and other relevant frameworks. The program is built upon and integrated with our SOC 2-attested information security and availability risk management programs and processes. As part of our AI program, we commit to the following principles, which guide us in responsibly scaling our automation ambitions with generative AI. We also commit to adapting our AI efforts responsibly as best practices and regulations continue to evolve.
Security and Resiliency of AI
At VISO TRUST, we recognize the need to go beyond traditional security measures and develop technology and processes to secure AI applications and services, ensuring their secure use. We have established and implemented cyber risk policies, protocols, and controls throughout our AI and ML model lifecycle to guide model development, implementation, monitoring, and validation.
We have defined clear roles and responsibilities for AI development and security and implemented a governance process that aligns with our overall risk management framework.
Our AI system developers are responsible for creating secure-by-design AI systems: employing secure coding practices, subjecting systems to vulnerability scanning and penetration testing, training AI models on clean data, and implementing security controls to protect against potential attacks.
In addition, VISO encrypts data both in transit and at rest utilizing industry standard encryption algorithms. We provide product security features for secure platform usage, including single sign-on, multi-factor authentication, and role-based access management.
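To make the encryption commitment concrete, here is a minimal sketch using the open-source `cryptography` package’s Fernet recipe (AES-128-CBC with HMAC-SHA256) as a stand-in for an industry-standard authenticated cipher. It is illustrative only, not a description of VISO TRUST’s actual stack.

```python
# Illustrative only: encrypting data at rest with an authenticated cipher.
# Fernet (from the `cryptography` package) combines AES-128-CBC with
# HMAC-SHA256; it stands in here for any industry-standard algorithm.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in production, keys would live in a KMS/HSM
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"vendor security artifact contents")
assert cipher.decrypt(ciphertext) == b"vendor security artifact contents"
```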
We have implemented strong access controls to restrict access to AI systems and sensitive data to authorized personnel only. Our data classification, handling, and disposal standard governs the protection, appropriate storage, and retention periods of our AI data.
VISO TRUST utilizes AWS’s Bedrock API service as the exclusive subservicer for all of our generative AI tooling. We have documented a blanket AWS AI Data Use Opt-Out policy, which prohibits AWS from using customer data passed through the VISO Platform for model training.
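As a hedged illustration of what a Bedrock-backed integration can look like, the sketch below invokes a model through the Amazon Bedrock runtime API via boto3. The region, model ID, and prompt are assumptions for the example, not details of VISO TRUST’s implementation.

```python
# Illustrative sketch of calling a model through the Amazon Bedrock
# runtime API with boto3; the region, model ID, and prompt are assumptions.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",  # example model available on Bedrock
    body=json.dumps({
        "prompt": "\n\nHuman: Summarize the encryption controls described "
                  "in this security artifact.\n\nAssistant:",
        "max_tokens_to_sample": 512,
    }),
)
result = json.loads(response["body"].read())
```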
VISO TRUST carefully evaluates the cybersecurity and privacy posture of every third-party provider, including all AI-related products and services, to ensure they meet our privacy and security standards, and requires specific contractual commitments from them.
We educate our employees about AI security risks, best practices, and incident reporting procedures, and we ensure that employees are well-versed in identifying and reporting potential security concerns.
We have expanded our security incident management capabilities (including logging and monitoring) to establish strong security foundations for our AI ecosystems and to extend detection, handling, and response so that AI is part of our threat model. We continuously monitor emerging AI security threats, vulnerabilities, and attack vectors, keep our AI security team up to date on the latest threats, and adapt our security measures accordingly.
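One way to bring AI activity into detection and monitoring is structured logging around each model invocation, so that anomalies such as error spikes can feed existing alerting pipelines. The sketch below is hypothetical; the function and field names are assumptions, not VISO TRUST’s implementation.

```python
# Hypothetical sketch: structured logging around each model invocation so
# AI activity can feed existing detection and monitoring pipelines.
import json
import logging
import time

log = logging.getLogger("ai.audit")

def monitored_invoke(invoke, model_id: str, payload: dict):
    start = time.monotonic()
    try:
        result = invoke(model_id, payload)
        status = "ok"
        return result
    except Exception:
        status = "error"  # an error spike here is a detection signal
        raise
    finally:
        log.info(json.dumps({
            "event": "model_invocation",
            "model_id": model_id,
            "status": status,
            "latency_ms": round((time.monotonic() - start) * 1000),
        }))
```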
Beyond these security protocols implemented to prevent, protect against, respond to, and recover from attacks, our AI operations are designed with resilience in mind. In the event of an unexpected adverse occurrence, or when the AI service is offline for any reason, our AI operations can seamlessly return to normal functionality by relying on our audit team for manual reviews and assessments.
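The fallback just described might look like the following sketch, with entirely hypothetical names: route assessments through the AI pipeline when it is available, and queue them for human auditors when it is not.

```python
# Hypothetical sketch of the fallback pattern described above: automated
# assessment with a manual-review path when the AI service is unavailable.

class AIServiceUnavailable(Exception):
    """Raised when the underlying AI service cannot be reached."""

def ai_assess(document: bytes) -> str:
    raise AIServiceUnavailable  # stand-in for a call to the AI pipeline

def queue_for_manual_review(document: bytes) -> str:
    return "queued for audit team"  # stand-in for the manual workflow

def assess(document: bytes) -> str:
    # Prefer the automated path; fall back to human auditors on failure.
    try:
        return ai_assess(document)
    except AIServiceUnavailable:
        return queue_for_manual_review(document)
```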
Accuracy and Reliability of AI
VISO TRUST employs an expert-in-the-loop approach to its AI, ensuring that every third-party cyber risk assessment undergoes scrutiny from a qualified third-party auditor. This practice enables us to establish guardrails that prevent inaccurate or unreliable outputs, and to ensure the overall correctness, relevance, and meaningful interpretation of our AI systems’ operations under the expected conditions of use and over a given period. The reviewed results are used as training data to further enhance the accuracy of our AI and ML models. Our experiments and training procedures are implemented and documented in source code stored in our GitHub repository, which encompasses capturing and reviewing test results and training data before software versions are released for customer use. Access to this data is restricted to authorized individuals on our Machine Learning/Data Team through access management mechanisms.
This augmentation of AI with human review not only validates results but also enhances reliability and data quality.
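As a hedged illustration of such an expert-in-the-loop gate (all names are hypothetical, not VISO TRUST’s code): an AI-proposed finding is released only after auditor approval, and approved findings are retained as future training examples.

```python
# Hypothetical sketch of an expert-in-the-loop gate: AI findings are only
# released after auditor approval, and approved findings are retained as
# training examples for future model improvement.
from dataclasses import dataclass

@dataclass
class Finding:
    control: str
    ai_assessment: str

training_examples: list[Finding] = []

def publish(finding: Finding, auditor_approved: bool) -> Finding | None:
    if not auditor_approved:
        return None                    # rejected output never reaches customers
    training_examples.append(finding)  # reviewed results feed model training
    return finding
```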
Transparency, Explainability, Interpretability of AI
VISO TRUST is committed to providing transparency regarding our use of AI and empowering our customers with control over their data. Upon request, VISO TRUST can furnish information about the mechanisms underlying our AI systems’ operation and provide links to publicly accessible resources that detail their development and the combination of associated datasets.
Our systems and support teams aim to keep our Clients informed about the status of AI processing and to notify them of the results generated. Our product presents information in a manner that facilitates a clear understanding of assessment outcomes for our Clients, thereby ensuring explainability and interpretability in our use of AI. If Clients have questions about understanding or interpreting the results, we offer easy ways for them to connect with us, get answers, or discuss proposed feature changes. Clients are also proactively notified of any significant upcoming changes as part of product releases, through customer support meetings and release notes.
Please note that VISO TRUST does not collect personal information of customer data subjects on behalf of its Clients. We do not use customer data to train or fine-tune models for purposes that are not directly related to the core features of our product, nor do we use customer data to train or fine-tune models for marketing or profiling purposes.