NEA explores use of artificial intelligence in nuclear regulation

The NEA Working Group on New Technologies (WGNT) convened a workshop on March 25–26, focusing on how artificial intelligence can be applied to regulatory oversight and internal operations within nuclear authorities.
Summary

  • NEA workshop explored real-world AI applications in nuclear regulation, with case studies from 15 member countries highlighting current tools and use cases
  • Regulators stressed the need for structured AI frameworks, clear success metrics, and human oversight in decision-making
  • On-premise AI models emerged as a key option to address cybersecurity, data sovereignty, and data protection concerns

The discussions centred on practical deployment rather than theory, with participants examining how existing tools can fit into regulatory workflows.

The event brought together nuclear regulators and AI specialists from 15 NEA member countries, alongside representatives from international organisations. Attendees shared case studies showcasing AI systems already in use or under development across regulatory bodies.

Examples presented during the sessions included generating summaries and presentations using AI, improving simulation capabilities, and extracting relevant information from large volumes of regulatory documents.

These demonstrations led to detailed exchanges on implementation challenges, lessons learned, and ways to identify high-value applications.

Key takeaways on AI deployment in nuclear regulation

Participants identified a clear need to establish structured AI frameworks within regulatory bodies, supported by defined procedures and guidance.

Well-scoped projects were found to perform more effectively, and clear success criteria for AI tools and initiatives were considered essential.

On-premise models were identified as a possible way to address concerns related to cybersecurity, data sovereignty, and data protection. At the same time, human expertise remains central to decision-making and to interpreting AI-generated outputs.

The workshop encouraged open comparison of national approaches, with regulators sharing implementation experiences and identifying common concerns. The exchanges also pointed to areas where closer international cooperation could help address shared challenges.

Global collaboration and next steps for regulators

Mr. Eetu Ahonen, Vice-Chair of the WGNT, led the discussions and emphasised the value of collaboration across jurisdictions.

“This workshop demonstrated the value in international collaboration. Every regulator is exploring AI from a different angle, but the experiences we have with implementation of AI tools, data security challenges, and ensuring human oversight are remarkably similar. By sharing openly and learning from each other, we are strengthening our ability to use AI responsibly and efficiently to improve nuclear safety.”

The WGNT, which organised the event, serves as a platform for regulators and technical support organisations to exchange insights on overseeing emerging technologies throughout their lifecycle. Its work supports the development of shared understanding and helps identify pathways toward aligned regulatory positions.

The NEA plans to publish a dedicated brochure summarising the workshop’s findings, including key challenges, lessons learned, and recommended practices for integrating AI into regulatory processes.
