AI and Cybersecurity Standards for Trust and Risk Reduction
Summary
IEEE Standards News reports on the evolving landscape of AI certification and cybersecurity requirements, highlighting a global shift towards mandatory compliance and the adaptation of security standards to address AI-related risks. The article discusses emerging trends in AI compliance, cybersecurity standards incorporating AI risks, and the development of quantum-safe and privacy-focused AI certifications.
What changed
This IEEE Standards News blog post discusses the increasing importance of AI certification and cybersecurity requirements as foundational expectations across industries, education, and regulation. It highlights that while formal certifications are not yet mandatory, legal and regulatory obligations around AI use are rapidly becoming so. The post notes that cybersecurity standards are adapting to include AI-related risks, with organizations like IEEE SA developing projects and standards to address these emerging threats. It also touches upon the development of quantum-safe and zero-trust certifications, as well as privacy and ethical AI certifications.
For regulated entities, this indicates a growing need to proactively manage AI risks and ensure compliance with evolving legal and technical standards. Companies should monitor developments in AI regulation and cybersecurity, particularly those related to AI compliance, quantum computing threats, and data privacy. Engaging with standards bodies like IEEE SA can provide frameworks for demonstrating trustworthiness and mitigating AI-related risks. The trend suggests that demonstrating compliance through certifications and adherence to evolving standards will become crucial for market access and consumer trust.
What to do next
- Monitor evolving AI compliance requirements and cybersecurity standards.
- Review existing cybersecurity frameworks for AI-related risks.
- Investigate emerging certifications for quantum-safe and ethical AI.
Source document (simplified)
Artificial Intelligence (AI) and Cybersecurity: Emerging Risks, Big Opportunities and the Path to Trust
- Srikanth Chandrasekaran
- 26 March 2026
- 5 minute read
- Artificial Intelligence (AI) certification and cybersecurity requirements are rapidly evolving from emerging best practices into foundational expectations across industries, education and regulation. As AI systems become more prevalent and complex, institutions and governments across the globe are responding with formal training mandates and expanded government frameworks aimed at managing risk, accountability and security.
These developments reflect a global shift toward certifications and requirements, which consumers and businesses increasingly rely on to demonstrate trustworthiness and guide technology use.
Trends in AI Certification and Cybersecurity
As AI certification and cybersecurity requirements evolve, several trends are shaping the global landscape in 2026:
- AI Compliance Requirements Are Becoming Mandatory: While no certification is required yet, legal and regulatory obligations around AI use are rapidly becoming mandatory. Organizations are increasingly expected to demonstrate compliance with emerging laws, governance frameworks, and risk management standards as AI becomes more widely regulated.
- Cybersecurity Standards Adapt to AI: Security certifications now include checks for AI-related risks. IEEE Standards Association (IEEE SA) provides projects and standards that serve as a framework to help prove that an organization can handle new AI-powered threats and reduce risk.
- Quantum Safe & Zero Trust Certifications: Certifications now cover encryption designed to withstand future quantum computing threats, as well as advanced identity and access models such as AI-powered Zero Trust. These certifications give organizations and users confidence that systems are secure today, while remaining prepared for emerging risks. The IEEE P1943 Standard for Post-Quantum Network Security outlines how existing network protocols can be adapted to remain secure against future quantum threats using hybrid cryptography and quantum-safe protections.
- Privacy & Ethical AI Certifications: As AI adoption accelerates, new certifications are emerging to verify that AI systems protect user privacy, mitigate bias, and operate transparently. These certifications focus on the ethical impact of AI, not just technical performance, and are increasingly important as organizations work to build trust and meet evolving expectations around responsible AI use.
- Workforce & Organizational Readiness: AI and cybersecurity certifications are expanding beyond individual skills to address organizational readiness. New onboarding and training programs help employees and teams responsibly manage, secure, and deploy AI across an organization, ensuring consistent governance and reducing risk as AI adoption scales.
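The hybrid-cryptography idea mentioned in the quantum-safe trend above can be illustrated at a very high level. The sketch below is an illustrative Python toy, not taken from IEEE P1943 or any specific protocol: it derives a single session key from both a classical shared secret (e.g. from ECDH) and a post-quantum shared secret (e.g. from an ML-KEM exchange), so the derived key remains secure as long as either of the two underlying exchanges is unbroken. The function names and labels are hypothetical.

```python
import hashlib
import hmac
import os

def hkdf_sha256(secret: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """Single-block HKDF (RFC 5869) with SHA-256; sufficient for a 32-byte key."""
    prk = hmac.new(salt, secret, hashlib.sha256).digest()                    # extract
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]   # expand

def hybrid_session_key(classical_secret: bytes, pq_secret: bytes) -> bytes:
    """Combine both shared secrets into one session key.

    An attacker must break BOTH the classical and the post-quantum
    exchange to recover the key, which is the point of hybrid schemes.
    """
    combined = classical_secret + pq_secret
    return hkdf_sha256(combined, salt=b"\x00" * 32, info=b"hybrid-demo-v1")

# Stand-ins for real key-exchange outputs; a real protocol would obtain
# these from an ECDH handshake and a post-quantum KEM decapsulation.
ecdh_secret = os.urandom(32)
kem_secret = os.urandom(32)
session_key = hybrid_session_key(ecdh_secret, kem_secret)
print(len(session_key))  # 32
```

In real deployments the combination step is specified precisely by the protocol (for example, which secret comes first and what context info is bound in), since both sides must derive the identical key.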
Use Cases for Good
AI certification and cybersecurity extend beyond theory, offering practical frameworks that help organizations manage risk, protect data and deploy AI responsibly. These real-world examples show how responsible AI can drive meaningful impact across critical sectors.
- Healthcare: AI is transforming healthcare systems by enhancing diagnostic accuracy, streamlining administrative tasks and enabling predictive analytics that support better patient outcomes. Advanced security tools can also detect unusual network activity that resembles ransomware and intervene immediately, helping safeguard sensitive patient data before damage occurs.
- Banks: As scammers leverage AI, financial institutions are responding with equally advanced AI-driven defense systems. These tools can significantly reduce account takeover attempts and identify fraudulent behavior in real time, offering customers stronger protection from financial threats.
- Email Platforms: Email providers and search engines rely on machine learning to identify harmful content, block phishing attempts and filter out deceptive or malicious messages. These security layers work quietly in the background, protecting billions of users every day.
How IEEE SA is Supporting AI Certification & Cybersecurity Advancements
AI is rapidly transforming both technology and everyday life. Its ability to generate insights at unprecedented speed makes it a powerful tool, but when misused or poorly implemented, it can also spread inaccurate information and create new vulnerabilities. That’s why trustworthy, secure and ethical AI systems are more important than ever.
IEEE SA is helping shape this future by developing standards that strengthen trust, interoperability and security. Our work includes performance metrics, testing guidelines, and frameworks that organizations can adopt to build responsible and resilient systems.
Key initiatives include:
- IEEE Ethics for AI System Design Training, which helps professionals integrate ethical principles alongside functional values when designing and developing AI systems.
- IEEE CertifAIEd Product Certification, offering small and mid-sized organizations an accessible pathway to evaluate ethical AI implementation without requiring a PhD in machine learning or an expensive budget.
We also support organizations through cybersecurity frameworks such as:
- IoT Sensor Devices Cybersecurity Framework, which guides the development of IoT sensor devices and endpoint security.
- IEEE/UL Standard for Clinical Internet of Things (IoT) Data and Device Interoperability with TIPPSS (Trust, Identity, Privacy, Protection, Safety and Security), which provides a comprehensive model for safeguarding clinical IoT systems.
In addition to standards and certification efforts, IEEE SA supports hands-on learning and community engagement through initiatives such as the upcoming IEEE SA Cybersecurity Hackathon 2026: “TIPPSS & Tricks: Hack the Threat.” The hackathon brings together innovators, professionals, students and cybersecurity enthusiasts to explore emerging threats, the protection of AI systems and the advancement of global standards. Registration for the hackathon starts on April 14. If interested, visit the event website to learn more and sign up.
IEEE SA remains committed to supporting organizations as they navigate AI integration and strengthen cybersecurity practices. To learn more about AI certification, cybersecurity initiatives, or opportunities to participate in standards development, visit the IEEE SA website and the IEEE Standards & Projects for Cybersecurity page.
Tags:
- AI
- Artificial Intelligence
- cybersecurity
- Ethics in Technology
- Ethics of Autonomous and Intelligent Systems
- technology
Srikanth Chandrasekaran
Sr. Director; Foundational Technologies Practice Lead, IEEE Standards Association (IEEE SA) - Sri is the Practice Lead and Senior Director for the IEEE SA Foundational Technologies Practice. In this role, Sri is focused on developing key programs that address core issues of security, identity, trust and building end-to-end trustworthy devices and systems across emerging areas such as IoT, Smart Cities, Sensors and Blockchain. Sri also heads the standardization activities for IEEE SA for the Asia Pacific region. Sri leads the IEEE Blended Learning Program effort, driving the development of an eLearning platform, focused on bridging skills for students in current and emerging technologies as well as lateral skilling of industry professionals.