LG AI Research Ethical Priorities and IEEE SA Partnership
Summary
LG AI Research has published its 2025 Ethical Priorities, detailing efforts to identify and mitigate AI risks, including a partnership with IEEE SA for AI system certification. The report highlights the identification of 219 potential AI risks and the expansion of their AI risk taxonomy.
What changed
LG AI Research has released its 2025 Accountability Report on AI Ethics, outlining advancements in operationalizing ethical AI principles. Key initiatives include scaling AI Ethical Impact Assessments, which reviewed approximately 60 projects and identified 219 potential risks, with 82% already addressed. The organization also expanded its AI risk taxonomy to 226 categories and enhanced model safety verification through red-teaming and benchmarks. A significant development is the formal collaboration with IEEE Standards Association (IEEE SA), where LG became an Authorized Assessor for the IEEE CertifAIEd™ program, enabling official assessments of AI systems against pillars like Accountability, Privacy, and Transparency.
This notice serves as an informational update on LG AI Research's proactive approach to AI ethics and risk management. The partnership with IEEE SA signifies a commitment to global AI standardization and verification. While this document does not impose new direct obligations on external entities, it highlights industry trends in AI governance and ethical AI development, particularly concerning risk identification, mitigation, and third-party certification. Companies involved in AI development or deployment may find the methodologies and partnership models discussed relevant for their own compliance and risk management strategies.
Source document (simplified)
LG AI Research’s 2025 Ethical Priorities: Building AI That Earns Trust
- IEEE Standards Association (IEEE SA)
- 19 March 2026
- 4 minute read
- In its 2025 Accountability Report on AI Ethics, LG AI Research highlighted its focus on translating ethical principles into operational practice. The organization strengthened its internal governance, expanded tools that help identify risk, and emphasized processes that make AI systems more reliable and fair.
Key initiatives included:
- Scaled AI Ethical Impact Assessments: Roughly 60 projects underwent structured review, identifying 219 potential risks, about 82% of which have already been closed. The remainder are tied to future projects so that mitigation continues over time. This process helps ensure that ethical considerations shape projects early, rather than after deployment.
- Enhanced AI risk taxonomy: LG expanded its K-AUT framework to 226 detailed risk categories, covering areas such as privacy, social safety, and emerging risks in advanced AI systems. This taxonomy guides consistent evaluation across teams.
- Model safety verification: The organization employed internal and external red-teaming and used its KGC-SAFETY benchmark to test models across multilingual and adversarial scenarios, improving resilience and reducing unsafe outputs.
- Data provenance and compliance: Using EXAONE Nexus, LG introduced automated data tracing that achieved 81% accuracy and operated 45× faster than human review, helping identify copyright risks in large training datasets.

Together, these efforts illustrate LG AI Research’s commitment to building systems that are transparent in their development, thoughtful in their use of data, and proactive about the real-world risks AI can introduce.
The IEEE SA Partnership: Strengthening Verification and Raising the Bar
A major milestone highlighted in the report is LG AI Research’s formal collaboration with the IEEE Standards Association (IEEE SA). In 2024, LG became the first organization in Korea qualified as an Authorized Assessor for the IEEE CertifAIEd™ program, a global initiative that evaluates AI systems across the pillars of Accountability, Privacy, Transparency, and Algorithmic Bias. Through this partnership:
- LG AI Research began conducting official IEEE CertifAIEd assessments, applying the program’s structured verification process to real AI products.
- The collaboration supported the certification of LG Electronics’ ThinQ ON, which became the first AI product globally to receive IEEE CertifAIEd™—a result verified through IEEE SA’s independent multi-stage review process. The report details how this certification process works, from determining assessment scope to documentation review and IEEE SA’s independent validation. CertifAIEd™ provides a repeatable way for companies to demonstrate that their AI meets recognized ethical benchmarks before entering the market.
LG’s participation in this program is significant not just for the company, but for the broader AI ecosystem. By applying standards-based evaluation internally and sharing insights externally, LG is helping advance the practical adoption of AI ethics frameworks beyond regulatory compliance.
What’s Next: How Other Organizations Can Pursue Responsible AI Certification
As global expectations for safe and trustworthy AI continue to grow, more organizations are exploring structured ways to demonstrate responsible development. IEEE SA’s CertifAIEd™ program provides one such pathway, offering:
- Assessment of AI systems against established ethical criteria
- Professional training for teams involved in AI design, risk, and compliance
- Curriculum options for organizations and academic institutions that want to integrate responsible AI concepts more formally

These options allow companies to start where it makes sense for them, whether by validating a product, training internal experts, or laying a foundation of knowledge across their workforce.
The example set by LG AI Research in 2025 shows the value of combining internal governance with external verification: organizations gain clarity, customers gain confidence, and the industry gains a more consistent standard for responsible AI.
A Path Forward
LG AI Research’s progress reflects a larger shift underway: moving from high-level ethical goals to concrete, testable practices. Its collaboration with IEEE SA demonstrates how independent assessment can complement internal governance, offering transparency and reinforcing accountability at scale.
Organizations seeking to strengthen their own responsible AI programs can look to this model, pairing in-house controls with recognized external standards, to build systems that earn trust and meet global expectations.
Learn more about IEEE CertifAIEd and how it can help strengthen your AI solution or become an IEEE authorized collaboration partner.
Tags:
- AI
- Artificial Intelligence
- CertifAIEd
- Ethics in Technology
- Ethics of Autonomous and Intelligent Systems
- technology
IEEE Standards Association (IEEE SA)
The IEEE Standards Association (IEEE SA) is a collaborative organization where innovators raise the world’s standards for technology. IEEE SA provides a neutral and open environment that empowers innovators, across borders and disciplines, to shape and improve technology.
We enable the collaborative exploration of emerging technologies, the identification of challenges and opportunities to address, and the development of recommendations, solutions, and technology standards that solve market-relevant problems.
Together, we are raising the standards that benefit industry and humanity, making technology better, safer, and more sustainable for the future.