The AI Integrity and Safe Use Foundation (AISUF) is a non-profit organization dedicated to developing an open AI transparency framework, offered free of charge, for industry as well as federal, state, and municipal governments. Our mission is to empower organizations to create and acquire secure, resilient, and ethically sound AI systems that meet regulatory requirements. By providing comprehensive standards tailored to both general and critical infrastructure applications, AISUF establishes a trusted foundation for the safe integration of AI technologies across all sectors, fostering innovation while protecting intellectual property.
The leadership team of the AI Integrity and Safe Use Foundation (AISUF) consists of experts at the forefront of AI security, ethics, and technology innovation. With diverse backgrounds spanning cybersecurity, AI development, critical infrastructure, and policy-making, the team is committed to driving the creation of robust frameworks that ensure the safe, secure, and responsible use of AI systems. Their collective expertise enables AISUF to set industry-leading standards that foster trust, protect intellectual property, and promote the ethical deployment of AI across all sectors.
Dmitry, a Canadian-Israeli entrepreneur and cybersecurity expert, brings over two decades of experience across application security, cloud architecture, DevOps, and DevSecOps, with a focus on automating cyber-defense mechanisms. As a founding partner of the AI Integrity and Safe Use Foundation (AISUF), Dmitry plays a pivotal role in advancing AI security frameworks to support resilient, transparent, and secure AI systems for enterprises.
In 2016, Dmitry co-founded Cybeats, where he now leads innovation, technology, and product development as CTO, focusing on cutting-edge solutions to cybersecurity challenges. Recognized for his contributions to the SBOM (Software Bill of Materials) standard, Dmitry joined the NTIA working group in 2018 to help shape the SBOM framework and, in 2020, developed SBOM Studio to empower organizations to efficiently manage and secure their software supply chains. His work extends to pioneering efforts in the emerging fields of AIBOM (AI Bill of Materials), CryptoBOM, VEX, and CSAF, where he actively participates in industry working groups to define future standards.
Committed to community advancement, Dmitry co-founded the Security Architecture Podcast in 2020 during the COVID-19 pandemic, offering expert insights and fostering knowledge sharing in cybersecurity. As a thought leader, Dmitry’s work in secure architecture and automation continues to influence best practices and shape the future of software and AI security on a global scale.
Helen is a leading voice in the industry for AI security and transparency, with specialized expertise in cybersecurity, software supply chain security, and application security. As a senior architect in application and cloud security, Helen has a deep background in designing and implementing secure-by-design principles and secure operational practices for AI-driven systems in large enterprise environments. Her work ensures that AI systems are both transparent and resilient, setting a new standard for secure AI integration within complex, global infrastructures. As a founding partner of the AI Integrity and Safe Use Foundation (AISUF), Helen leads efforts to develop standards, frameworks, and best practices.
Helen is also a co-leader of the AIBOM (AI Bill of Materials) Tiger Team under CISA’s SBOM working groups, where she drives industry efforts in AI software transparency and secure AI software operations. With a commitment to advancing AI security, she has been instrumental in developing frameworks that enable organizations to build, manage, and operate AI applications with robust security and ethical integrity at every stage of the lifecycle.
Named one of the Top 20 Canadian Women in Cybersecurity, Helen co-founded Leading Cyber Ladies, a global network dedicated to empowering women in cybersecurity. She advises emerging startups and is a respected speaker at major industry events, sharing her insights on AI integrity, secure software supply chain practices, and the future of secure AI applications.
Copyright © 2024 AI Integrity & Safe Use Foundation (AISUF) - All Rights Reserved.