IEC, ISO and ITU have formally endorsed a socio-technical approach to standardization with the aim of advancing responsible and inclusive artificial intelligence (AI). Leaders of the three standards development organizations made the commitment at the recent International AI Standards Summit.
A joint statement issued at the event in Seoul reinforced their shared commitment to advancing safe, inclusive and effective international standards for AI, as well as to bridging the AI divide. According to a joint media release, “The statement sets out a joint vision and commitments from the three organizations for how international standards will support the development and deployment of trustworthy AI systems that benefit society, drive innovation and uphold fundamental rights.”
In Seoul, the three standards bodies were at pains to stress that formally adopting a socio-technical approach does not diminish their current contribution, but rather paves the way for a deeper, more systematic integration of human factors. While current standards contribute to high-level objectives, such as the UN Sustainable Development Goals, this has often been an implicit outcome of technical work.
After all, standards designed to ensure that technology is safe and trustworthy already go some way towards addressing societal concerns. These topics were also the focus of an IEC Academy webinar last month with the participation of distinguished experts from ISO, IEC and the Global Network Initiative, a non-governmental organization dedicated to combatting censorship and protecting the rights of individuals online.
Cindy Parokkil, the AI policy lead for ISO, defined the core problem. “People often tend to focus on the technical issues: algorithms, data and model,” she said. “But in reality, we know that AI systems are socio-technical in nature.”
She illustrated her point with a real-life story about a university hospital that used AI to predict sepsis in patients. Although technically sound, the tool’s effectiveness relied entirely on whether nurses and doctors trusted its alerts and adapted their workflows accordingly. “Technical innovation only delivered value when the human system around it adapted and co-evolved with it,” Parokkil said.
The conversation then turned to the difficult question of trust. Ian Oppermann, an industry professor and vice president of the IEC, highlighted a fundamental mismatch in our expectations.
“We accept faults and fallibilities [in humans],” Oppermann said. “The difference between our expectations of performance of an algorithm versus expectations of performance of a human being in a similar circumstance is something that we haven't quite come to grips with.”
He warned that without a fair framework for this comparison, we risk deploying systems that are technically superior but socially dysfunctional.
The panellists also challenged the business community to rethink its approach to calculating the value of AI. Oppermann argued that focusing on simple metrics like process efficiency is a “false dilemma” that underestimates the transformative potential of AI. He recalled early business cases for Wi-Fi, which he said were based on saving the cost of a metre of cable rather than imagining a wireless world.
Jason Pielemeier, the executive director of the Global Network Initiative, urged greater vigilance and more inclusivity in the standardization process to address the inherent risks. He emphasized that the speed of standardization is less critical than incorporating flexibility and iterative review.
“What we need to have is agility,” Pielemeier said. “We need to have these feedback loops that allow us to continuously improve and adjust standards so that they can better reflect and address the social realities.”
He made a direct appeal for diverse voices in the standards-making process, telling a global audience that included participants from Burkina Faso to Papua New Guinea, “You are all socio-technical experts.”
The webinar underlined that AI systems are never purely technical. The panellists argued that the impact of new technologies depends on the social, organizational and cultural contexts in which they are implemented. Standards shape how AI is built, used and governed worldwide, so they must reflect not only technical excellence but also real-world conditions. Like the Seoul statement, their message was that the future of AI hinges on standards that prioritize the needs of people.