
Tackling AI risks: Your reputation is at stake


Risk is all about context

Risk is all about context. In fact, one of the biggest risks is failing to recognize or understand your own context: That's why you need to begin there when evaluating risk.

This is particularly important in terms of reputation. Think, for instance, about your customers and their expectations. How might they feel about interacting with an AI chatbot? How damaging might it be to provide them with false or misleading information? Maybe minor customer inconvenience is something you can deal with, but what if it has a significant health or financial impact?

Even if implementing AI seems to make sense, there are clearly some downstream reputation risks that need to be considered. We've spent years talking about the importance of user experience and being customer-focused: While AI might help us here, it could also undermine those things as well.

There's a similar question to be asked about your teams. AI may have the capacity to drive efficiency and make people's work easier, but used in the wrong way it could seriously disrupt existing ways of working. The industry has been talking a lot about developer experience recently (it's something I've written about for this publication), and the decisions organizations make about AI need to improve the experiences of teams, not undermine them.

In the latest edition of the Thoughtworks Technology Radar, a biannual snapshot of the software industry based on our experiences working with clients around the world, we talk about precisely this point. We call out AI team assistants as one of the most exciting emerging areas in software engineering, but we also note that the focus has to be on enabling teams, not individuals. "You should be looking for ways to create AI team assistants to help create the '10x team,' as opposed to a bunch of siloed AI-assisted 10x engineers," we say in the latest report.

Failing to heed the working context of your teams could cause significant reputational damage. Some bullish organizations might see this as part and parcel of innovation. It's not: It signals to potential employees, particularly highly technical ones, that you don't really understand or care about the work they do.

Tackling risk through smarter technology implementation

There are plenty of tools that can be used to help manage risk. Thoughtworks helped put together the Responsible Technology Playbook, a collection of tools and techniques that organizations can use to make more responsible decisions about technology (not just AI).

However, it's important to note that managing risks, particularly those around reputation, requires real attention to the specifics of how a technology is implemented. This was especially clear in work we did with an assortment of Indian civil society organizations, developing a social welfare chatbot that citizens can interact with in their local languages. The risks here were not unlike those discussed earlier: The context in which the chatbot was being used (as support for accessing vital services) meant that incorrect or "hallucinated" information could stop people from getting the resources they depend on.
