Want to build responsible AI? Build the right, job-ready skills first

It was Mark Zuckerberg who first told technology teams to “move fast and break things” in order to innovate. But like many practices born in the noughties, that motto is no longer fit for purpose in the AI age. As we race towards ever more powerful and insightful AI, we cannot afford to break anything, because there are real-world issues that AI can propagate at scale if we do not develop it responsibly right from the start.

Not everyone agrees, though, with a Microsoft executive recently quoted in an internal email (about generative AI) as saying that it would be an “absolutely fatal error in this moment to worry about things that can be fixed later.”

Problems can rapidly spread

Because AI is increasingly prevalent in our society and workplaces, any problems with its workings will rapidly scale and impact many different aspects of our lives. Problematic AI could amplify harmful biases and stereotypes, spread misinformation, deepen inequity, and infringe individual rights to privacy. In the AI race, it’s vital that we pay down any ‘ethical debts’ as and when they arise instead of putting them off to deal with later. And a large part of that effort will center on having the right skills, at the right level, across your workforce.

Skills are foundational to responsible AI

Skills that enable greater trust in AI, mitigate the risks of using it, and ensure data is protected and used ethically – known as AI TRiSM (AI trust, risk and security management) skills – will be increasingly sought after by organizations. Indeed, a recent survey of IT professionals carried out by Skillable found that over half of IT leaders (51.4%) see AI TRiSM skills as essential to their immediate future success.

AI TRiSM is a framework that ensures organizations use and develop AI in a reliable, fair, and ethical way that respects privacy and has clear governance over its use. The areas it covers include reducing bias, explaining how an AI model arrives at its insights, and protecting data. These are all table stakes for the long-term adoption of AI. Without them, key stakeholders (including the public) won’t trust AI and won’t hand over the data or consent needed to make it work.

Developing AI skill masters

Such table stakes require the best skills. Those aren’t built through simply reading or hearing about a topic like AI security or governance. Although learning resources like blogs, books, podcasts, videos, and graphics play a role in building some understanding of AI TRiSM, they don’t go far or deep enough to ensure true skill mastery. That’s what organizations innovating with AI really need – skill masters who deeply understand and can implement clear governance and security around the use of AI, who champion AI TRiSM in every aspect of their role, and who share that knowledge with others.

Completion metrics and learning hours don’t tell us whether a person has actually mastered a skill. True validation only comes from demonstrating and applying skills correctly. Otherwise, you’re left with people who completed the learning and felt ready, but couldn’t apply the skill in the moment of need.

Increasing the pace of learning 

Organizations need AI skill masters who can learn quickly, because the field is constantly changing. AI patents alone have increased 16-fold in a decade, from 1,974 patents awarded in 2010 to over 31,600 in 2020. And that doesn’t account for the exponential rise of generative AI, nor for advances on the horizon in improved chips, 5G/6G, and quantum computing.

That speed of learning comes through practice and application; humans evolved to learn by applying their skills. So if you want your workforce to upskill and reskill in AI quickly, you need to give them opportunities to apply their theoretical knowledge on the job. Indeed, that’s exactly what two-thirds (67%) of IT professionals say they want: more hands-on application that stretches and builds their skills.

Showcasing skills

Moreover, such hands-on learning opportunities give employees a chance to showcase and validate their skills – to prove to their employer that they can perform a skill in the workplace. Given that 40% of survey respondents said current learning technology doesn’t allow them to demonstrate their true skill proficiency, this is a much-overlooked area that can really benefit employees and their organizations.

For instance, an employee might want to show their employer that they can build a natural language processing (NLP) solution in Python. They could take on a stretch assignment that requires them to complete this task to a specific level (validated by manager and peer feedback), they could learn the skill in their spare time and build an NLP solution as a side project outside of work, or they could complete a skills challenge that scores their work as they build the solution within a simulated environment.
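To make the example concrete, here is a minimal sketch of the kind of small NLP task such an exercise might involve – a hypothetical illustration, not an actual challenge; the word lists and scoring rule are invented for demonstration:

```python
import re

# Hypothetical keyword lists for a toy bag-of-words sentiment scorer.
POSITIVE = {"good", "great", "excellent", "reliable", "fair"}
NEGATIVE = {"bad", "poor", "biased", "unfair", "broken"}

def tokenize(text: str) -> list[str]:
    """Lowercase the text and split it into alphabetic tokens."""
    return re.findall(r"[a-z]+", text.lower())

def sentiment(text: str) -> str:
    """Label text positive/negative/neutral by counting keyword hits."""
    tokens = tokenize(text)
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The model gave fair and reliable answers"))  # positive
print(sentiment("The output was biased and unfair"))          # negative
```

A real skills challenge would go well beyond a toy like this, but even at this scale a scoring harness can check the work automatically – which is exactly the kind of objective, bias-free validation described below.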

Each hands-on learning opportunity offers a chance to apply the skill of building an NLP model, but the last one, the skills challenge, is what truly validates someone’s learning. It is scored against clearly defined parameters, with no potential for manager or peer bias. It can be added to someone’s learning or skills profile as a credential showing they’ve met a certain skill level. Plus, it’s easily scalable across a workforce, regardless of location or outside-work commitments, because the challenge is virtual and can be accessed at a time and place of the employee’s choosing.

Everyone needs baseline AI skills

That scalability is vital, as we cannot prepare future workforces for AI by limiting hands-on opportunities to just a select few. Indeed, a third of employees lack even foundational digital skills, which will significantly undermine any effort to implement and use AI responsibly. If a major part of your workforce doesn’t understand how AI works, how can they effectively oversee and govern it?

Ready for the AI era

Hands-on skill challenges ensure that everyone is job-ready, because they help people apply and demonstrate their AI skills in a safe environment that’s as close to a real-world project as possible. Set scenarios are created, such as a simulated data breach, and people are guided through them to understand how to perform the skill on the job. This shows that an individual can apply their new skills and work under pressure, and it gives them the confidence that they are ready for a new task or role.
 

As AI transforms life as we know it, it’s vital that the people developing and working alongside it are equipped with the right skills to ensure it benefits society and helps us all become better. That only comes with effective upskilling and reskilling that doesn’t just tell someone how to perform a skill but shows them, through practice and application.

The post Want to build responsible AI? Build the right, job-ready skills first appeared first on Datafloq.
