What Does the National AI Strategy Mean for the Nuclear Industry?
Since the UK Government set out its AI strategy last year, there has been time to take stock and reflect on what it means for industry, particularly Nuclear, where the uptake of novel and innovative solutions can be more challenging. Key ambitions set out in the strategy are to “make Britain a global AI superpower” and to “build the most pro-innovation regulatory environment in the world”. A vital element of these ambitions is around ‘Governing AI Effectively’, with specific actions listed over the course of 2022 to publish papers on AI assurance and develop AI standards.
The Nuclear industry has historically gone for the more ‘tried and tested’ end of the scale when it comes to technology. However, with reams of data collected through its assets’ operational histories, and high-value use cases to support the development of best practice in AI assurance, it is ideally placed to capitalise on the objectives and ambitions presented in the Government’s strategy. As well as being in a position to become a leader in AI assurance, the industry also stands to unlock substantial benefits in terms of improved safety and reduced costs.
Making Britain a Global AI Superpower
If the Nuclear industry is to achieve these ambitious goals, there are challenges to overcome: implementing AI in a highly regulated, high-consequence environment; the implications this has for assurance; and how comfortable we feel placing reliance on decisions made by a ‘machine’, especially in the context of nuclear safety.
In a safety case capacity, it is recognised that whilst all computer-based systems are strictly deterministic, AI and ML systems operate over a large and complex input space. Consequently, it is usually more helpful to consider how the system reacts to groups of inputs, in which case its behaviour appears probabilistic. As such, assurance is needed to evaluate products against bounding scenarios and to challenge the system with a sufficiently representative set of scenarios to build confidence in its effectiveness. This type of testing will need further exploration and will form a key component of future assurance. Another critical component in providing assurance from a safety perspective will be demonstrating production excellence: understanding the production process, in particular the quality controls and testing completed as part of development. As a minimum, this could involve answering a series of set questions to give confidence that the software has been developed in an appropriately controlled and quality-rich manner.
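To make the idea of scenario-based challenge testing concrete, the sketch below evaluates a trained model against groups of representative inputs and checks each group against a minimum acceptance threshold, rather than averaging over one pooled test set. This is a minimal sketch under stated assumptions: the scenario names, the 0.99 threshold, and the scikit-learn-style predict() interface are illustrative, not a prescribed assurance method.

```python
# A minimal sketch of scenario-based challenge testing.
# Assumptions: a trained classifier exposing a scikit-learn-style
# predict() interface, and labelled test data pre-grouped into
# representative operating scenarios. All names and the 0.99
# acceptance threshold are illustrative, not prescriptive.

from dataclasses import dataclass
import numpy as np

@dataclass
class Scenario:
    name: str           # e.g. "steady-state operation", "sensor drift"
    inputs: np.ndarray  # inputs representative of this scenario
    labels: np.ndarray  # expected outputs for those inputs

def challenge_test(model, scenarios, min_accuracy=0.99):
    """Evaluate the model per scenario rather than on one pooled set,
    so weaknesses in rare-but-critical regions are not averaged away."""
    results = {}
    for s in scenarios:
        predictions = model.predict(s.inputs)
        accuracy = float(np.mean(predictions == s.labels))
        results[s.name] = {
            "accuracy": accuracy,
            "passed": accuracy >= min_accuracy,
        }
    return results
```

Reporting results per scenario makes it visible when a system performs well on common conditions but poorly on a rare, safety-significant one.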
Even with both these components considered, testing activities will need to be very clear on the limitations and constraints of the testing approach and on which potential software failure modes they can and cannot detect. There will always be ‘cannots’; accepting this, and clearly demonstrating that their consequences are understood and can be effectively mitigated, will be essential.
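One way to make the ‘cannots’ explicit, sketched below with purely illustrative entries, is a simple failure-mode register that records whether each mode is detectable by the testing approach and, where it is not, what mitigation is claimed for it. The entries and field names are assumptions for illustration, not a real nuclear failure-mode list.

```python
# A minimal sketch of a failure-mode register recording which
# failure modes the testing approach can and cannot detect, and
# the mitigation claimed for the 'cannots'. All entries are
# illustrative assumptions, not a real nuclear failure-mode list.

from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    detectable_by_testing: bool
    mitigation: str  # how the consequence is managed if undetectable

REGISTER = [
    FailureMode("misclassification within the training distribution",
                True, "scenario-based challenge testing"),
    FailureMode("degraded performance on out-of-distribution inputs",
                False, "input-domain monitoring plus operator review"),
]

# Every undetectable mode must carry an explicit mitigation.
assert all(fm.mitigation for fm in REGISTER if not fm.detectable_by_testing)
```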
Building Trust Through Human-in-the-Loop AI
Another consideration for early applications is maintaining a ‘human-in-the-loop’ to help build trust in the system whilst AI and ML move along the new technology curve from innovative to tried and tested, at which point trust has been built with the operator. However, simply having a human check outputs is not a guarantee of success. Consideration will need to be given to human factors, e.g. avoiding human actions that must be performed under time pressure or that are too repetitive.
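One common human-in-the-loop pattern, sketched below under assumed names and thresholds, is to act on a model’s output automatically only when its confidence is high, and otherwise route the case to an operator for review. The 0.95 threshold, the scikit-learn-style predict_proba() interface, and the review queue are assumptions for illustration, not the only way to structure human oversight.

```python
# A minimal human-in-the-loop gating sketch. The classifier is
# assumed to expose a scikit-learn-style predict_proba() interface;
# the 0.95 confidence threshold and the review queue are illustrative.

import numpy as np

CONFIDENCE_THRESHOLD = 0.95

def triage(model, sample, review_queue):
    """Return the model's decision only when confidence is high;
    otherwise defer to a human operator via the review queue."""
    probabilities = model.predict_proba(sample.reshape(1, -1))[0]
    decision = int(np.argmax(probabilities))
    confidence = float(probabilities[decision])
    if confidence >= CONFIDENCE_THRESHOLD:
        return decision, "automated"
    review_queue.append((sample, decision, confidence))  # human reviews later
    return decision, "flagged-for-review"
```

A design point worth noting: routing only low-confidence cases to the operator keeps the review workload manageable and avoids exactly the repetitive, time-pressured checking that human factors guidance warns against.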
It must also be recognised that AI assurance is at the forefront of development both in academia and in wider industries where AI adoption is more mature. Links should be built across these communities to maximise the effectiveness of AI deployment in Nuclear.
Looking to the Future
The National AI Strategy underpins the Government’s long-term commitment to enhancing the UK’s digital ecosystem. While there are challenges for the Nuclear sector to overcome in building its AI capabilities, the strategy presents an important innovation opportunity to develop transformative technologies that will have a profound impact on the sector. Ada Mode has recently been supporting Sellafield Ltd in developing their AI strategy, setting out a roadmap to maximise the benefit of AI across the site. This has included steps and advice around governance arrangements and regulator engagement, tailored to the wider site needs and the challenges associated with decommissioning a large nuclear asset base. We see great benefit in developing AI validation strategies for other nuclear assets, and indeed within wider highly regulated sectors, especially now, so that the right frameworks and building blocks can be put in place pre-emptively rather than retrospectively.