Efforts to control artificial intelligence are falling behind its rapid development
In spite of the uncertain regulatory climate, businesses are moving forward with AI application development and deployment.
European lawmakers, unlike their counterparts in the United States at both the federal and state level, have just passed the most extensive AI laws to date.
Regulators are looking into several areas, including possible risks to consumer privacy, the security of customer data, and the accuracy and impartiality of AI algorithms.
Many questions remain unresolved, according to Ed McLaughlin, President and Chief Technology Officer of Mastercard. As a result, companies will need to build systems ahead of requirements whose exact shape they cannot yet know.
Businesses are moving forward without waiting for government agencies to decide when and how to regulate the technology. While keeping an open line of communication with legislators, chief information officers say they are assembling regulation-friendly AI applications by combining best practices around consumer data with some educated guesswork.
Companies like Goldman Sachs and Nationwide Mutual Insurance have taken steps to prepare for any future laws by creating internal frameworks and standards for data and AI usage.
Jim Fowler, chief technology officer of Nationwide, predicted that any future state-level legislation governing artificial intelligence will require specific disclosures about how customer data is used to power AI decisions.
Nationwide instituted a “red team, blue team” approach, with the former group investigating potential new AI applications and the latter deciding where to pull back over concerns about bias, cybersecurity, and regulatory compliance.
The red team’s output was a set of AI principles that, according to Fowler, will guide the development of future solutions that drive business value while accounting for the company’s expectations about future state-level regulatory risk.
AI regulations may differ from one country, or even one state, to another, adding a further layer of complexity to how businesses deploy AI tools. According to Fowler, however, this is territory where the company is already at ease, since insurance underwriting regulations also vary from state to state.
What benefits customers might look different from one state to another, Fowler said, adding that this mirrors how the company has handled products, with different versions available in different states.
Goldman Sachs CIO Marco Argenti said the firm formed a committee to investigate the risks of using AI. He keeps regulators in the loop, he said, and makes a point of addressing their concerns, particularly those related to data protection, in all internal AI use cases.
Even so, putting up internal guardrails won’t guarantee compliance with future rules. “Politicians might also propose more regulations,” he said, so everyone must remain vigilant.
Earlier this month, legislators in Europe passed the AI Act, the first comprehensive set of AI laws expected to directly shape how the technology is used. Its new transparency requirements and prohibitions on specific AI applications will be phased in over several years. The European Union has determined that the most advanced AI models pose a “systemic risk” and has mandated that their creators conduct safety reviews and report any major incidents involving their models to the appropriate authorities.
Although the law applies only within the EU, it is expected to have worldwide influence, because major AI companies are unlikely to forgo access to the bloc.
More than 500 bills pertaining to artificial intelligence have been introduced in the United States alone, according to Eric Loeb, executive vice president of global government affairs at Salesforce. Even within the U.S., he said, there is a tremendous amount of activity to track and sort through.
Loeb said his team, whose work includes providing safety and compliance recommendations for Salesforce products, is in constant communication with lawmakers to anticipate changes. But he cautioned that “none of this is static,” and that no one should assume they have everything figured out.
Other businesses say they are proceeding cautiously for now. KeyBank Chief Information Officer Amy Brady said her primary focus is on applications that stay safely within regulatory requirements, such as a conversational AI tool that she said is helping reduce call volumes at the bank’s customer support centers.
She also made clear that she does not permit the use of ChatGPT on the job. The goal, she explained, is to prevent “unintended consequences” by closely monitoring how data is used and identifying where it comes from.
“We are all learning together as these tools are deployed,” she said, while emphasizing that every organization and institution should use them according to its own principles.