A new report finds that big tech companies are falling short when it comes to keeping their promises around developing artificial intelligence (AI) responsibly. Researchers at Stanford University looked into how companies that have published ethics rules and hired experts are putting those principles into practice.
What they found is concerning. Even with all the talk of "AI ethics," many companies still prioritize performance and profits over safety and responsibility when building new AI products.
The Stanford Team Interviewed 25 People Working Inside Today's Top AI Companies...
These employees said they lack support and are isolated from other teams. Product managers often see them as hurting productivity or slowing down product releases. One person said "being very loud about putting more brakes on [AI development] was a risky thing to do."
Governments, academics and the public have raised worries about issues like privacy, bias, and AI's impacts on jobs and society. Tools like chatbots are advancing very quickly, with new releases from companies like Google and OpenAI.
Promises to Develop AI Responsibly Seem to Have Been Empty Words, Meant to Calm Public Concern...
Employees within the AI companies say ethical considerations are an afterthought, happening "too late, if at all" - instead, they're told to focus on the numbers, such as user engagement and AI performance. These are the metrics that dominate decision-making, rather than equally important measures around fairness or social good.
In short, despite public commitments to ethics, tech companies are deprioritizing real accountability as they race to build the latest, most advanced artificial intelligence.
Companies Focus on Winning the Race to Release the 'Most Powerful AI' of the Moment, Then Learn What It Is Capable Of...
Instead, AI development should be guided by a clear understanding of what the AI they're building can and should do, rather than focusing solely on maximizing profits or building the most powerful version.
There's no downplaying the massive challenge for-profit AI companies face in balancing innovation, profitability, and ethics - falling short in any one of these areas greatly increases the odds that a company won't survive.
It is vital that the AI industry understands it must function differently than any other segment of the tech industry, with investor satisfaction no longer the top priority. This may actually be a fairly simple change to implement: companies just need to educate their investors. Informed investors will demand that public safety come first, as all of them would regret funding a company that, for example, triggered a forced global internet shutdown because that was the only way to stop its creation from self-replicating and spreading, or worse.
-----------
Author: Trevor Kingsley
Tech News CITY // New York Newsroom