Artificial Intelligence (AI) regulation has gone mainstream.
Previously the concern of specialist think-tanks, policy influencers and science fiction enthusiasts, AI regulation is now, as a flood of recent interventions shows, a core focus for some of the world's most powerful organizations and governments.
To kick off 2020, the White House set out its guidance for the regulation of artificial intelligence, built around ten key principles, from building public trust to promoting safety and security.
But rather than an attempt to control AI development, the announcement reflects a U.S. stance intended to counterbalance the more stringent regulatory positions being considered and implemented by other international bodies, such as the EU and G7.
To regulate or not to regulate
Beyond the White House, AI took centre stage in technology debates emerging from the World Economic Forum in Davos.
Google’s CEO, Sundar Pichai, argued that AI is unlike anything we have seen before and has greater potential than fire or electricity in improving our lives.
While recognizing the singular power of AI to positively change the world in areas from healthcare to fighting climate change, Pichai also suggested government regulation must play a role in preventing AI from being used for mass surveillance or to negatively impact human rights.
Microsoft’s President, Brad Smith, called for similar regulation, although with a lighter touch.
He compared current EU proposals to ban facial recognition in public spaces to a ‘meat cleaver’ where a ‘scalpel’ would be more appropriate. Elsewhere, human rights groups are pushing back against recent announcements in the UK that facial recognition would be rolled out by police.
Despite the noise, these debates boil down to managing the age-old challenge that comes with new technology—the balance between supporting innovation and progress, and reducing the risk of abuse.
Benefits of AI in the workplace and why it’s here to stay
Much of the debate around AI has centred on how it could be used in the future by public bodies and governments. These are undoubtedly important discussions, but what's often overlooked are the uses of AI already available to organizations today, and how these should be governed both internally and externally.
With estimates suggesting AI could add $15 trillion to the global economy by 2030, it won't only be the Silicon Valley giants or politicians taking note of AI's potential.
One of the most transformative roles AI is playing in enterprises today is unlocking human intelligence. Just 20% of knowledge is currently recorded in businesses—the rest is undocumented, meaning it resides only in an employee’s mind.
AI can use all types of data sources to identify expertise across an organization. It can map skills across a company and connect employees with the information they need.
It also means that teams can share their skills and learn new ones from one another with ease—something that’s increasingly important to the growing millennial and Gen Z workforce.
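The expertise-mapping idea above can be sketched in a few lines. This is a hypothetical illustration only (the employee names, the toy corpus and the bag-of-words matching are all assumptions for the sake of the example, not Starmind's actual approach; production systems draw on far richer data and learned models):

```python
# Hypothetical sketch: route a question to the colleague whose past
# writing looks most relevant, using bag-of-words cosine similarity.
import math
from collections import Counter

def vectorize(text):
    """Turn text into a lowercase word-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Illustrative corpus: what each (fictional) employee has written about.
docs = {
    "alice": "kubernetes cluster deployment container orchestration scaling",
    "bob": "payroll tax compliance invoicing quarterly reporting",
    "carol": "customer onboarding training support knowledge base articles",
}

def find_expert(question, docs):
    """Return the employee whose documents best match the question."""
    q = vectorize(question)
    return max(docs, key=lambda name: cosine(q, vectorize(docs[name])))

print(find_expert("who can help with scaling our kubernetes deployment", docs))
# → alice
```

In practice the "documents" would span many data sources (tickets, wikis, internal Q&A), which is what lets such a system map skills across a company and connect employees with the information they need.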
But for large businesses and enterprises, these benefits are even more acute. In an organization with tens or hundreds of thousands of employees, inefficiencies can quickly snowball.
An attrition rate of 20%, which is not uncommon, can mean thousands of employees changing every single year. That adds up to tens of thousands of workdays lost annually to onboarding and training new joiners, each of whom can cost a company up to 200% of their annual salary just to get up to speed and ready to work (not to mention a burnt-out HR department).
AI can also help new joiners rapidly share their skills and proactively find the knowledge they need, while better retaining the knowledge of those who leave the business so it can be passed on to future joiners or used to upskill current employees.
Identifying risks and working with AI effectively
AI used by knowledge workers in the workplace to share skills will not be the same as the AI used, for example, by the military or the police—and it shouldn’t be treated as such.
That doesn’t mean its roll-out doesn’t require a considered approach.
One of the best questions businesses can, and should, ask is how they will use AI. Will it enhance your workforce and boost people's ability to share skills, gain knowledge, problem-solve and work together? If the answer is yes, then the technology will help people, not hinder them.
As policymakers and influencers comment and decide on how AI will function on a macro level, organizations have a responsibility to use the AI available to them ethically.
Electricity improved and transformed people's lives in ways that could never have been imagined. AI has the same opportunity, but it comes with great responsibility. As with any transformative technology, standards and regulations will be implemented to scale the benefits of AI.
It’s now down to businesses to take on the challenge and ensure their focus is on the benefits AI can offer to humans.
By Marc Vontobel, Founder and CTO at Starmind