The Most Recent Articles

Showing posts with label artificial intelligence.

United Nations Warns of the Potential for Inequity if AI Advances Without an All-Encompassing Global Strategy...

UN on Artificial Intelligence

In little more than a decade, AI has emerged as one of the most disruptive technologies of our time: machines can now learn from big data, enabling computers to perform tasks that were previously the exclusive domain of humans. From furthering scientific research to solving sustainability issues worldwide, the possibilities seem endless. However, the UN is sounding the alarm over the risks this technology poses, warning that, left unmanaged, it could open a wide disparity gap between different parts of the world.

A Double-Edged Sword

The UN speaks to the tremendous promise of AI in its recent report, Governing AI for Humanity. It points out that AI is enabling progress in scientific discovery, medicine, and sustainability faster than ever imagined: AI-driven systems already help analyze climate data, improve resource management, and advance the UN Sustainable Development Goals.

In this way, AI is positioned as a driving force in solving some of the world's most pressing problems, from eradicating poverty to mitigating climate change.

The UN warns, however, that without a global strategy to oversee and regulate AI, the technology could exacerbate inequalities. Wealthy nations and corporations with the resources to develop and deploy AI at scale would monopolize its benefits, while underdeveloped regions are left behind, unable to access or employ the AI tools that could improve their economies, health systems, and education sectors.

Disinformation, Automated Weapons, and Climate Risks

The UN's concerns are not limited to economic imbalance. The report also warns about the darker side of AI's capabilities: its potential to spread disinformation, fuel conflict, and exacerbate climate change.

Unfortunately, AI can be a game-changing tool for generating and amplifying disinformation on social media, directly swaying public opinion and undermining societies. Given the already alarming prevalence of fake news and misinformation online, this prospect is distressing. In the wrong hands, it could be used to disrupt democratic processes and erode trust within a society, leading to further fragmentation.

Another alarming aspect of AI development is the automation of weapons, which has emerged as a global security concern. Autonomous drones and other AI-powered weapons systems not only raise ethical questions about warfare but also carry the possibility of unintended consequences in conflict zones.

Moreover, large-scale AI systems demand enormous amounts of energy to operate, posing an environmental threat of their own. Training them requires huge computational resources, which often translates into increased carbon emissions. If not managed properly, a tool expected to help fight climate change could end up accelerating it: a Catch-22 the global community urgently needs to resolve.

The Imperative for a Global AI Strategy

Fully aware that AI is capable of both solving and creating problems, the UN is calling for international cooperation in governing the development and deployment of AI technologies. What it demands is a holistic global approach that ensures the benefits of AI are shared equitably and its risks are mitigated. This ranges from setting ethical standards and developing transparent regulatory frameworks to fostering cooperation among nations to bridge the digital divide. There is little question that AI holds great promise, but its perils are real. In the UN's assessment, only through a joined-up global approach can we make certain that AI becomes a tool of progress for all, rather than a driver of inequality.


-----------
Author: Trevor Kingsley
Tech News CITY /New York Newsroom

Video: As Robotics and AI Merge, a Huge Competition for Progress, and Funding, is Developing...


In This Video: Major technology and robotics companies are advancing their efforts to develop and train artificial intelligence models. These AI systems are being designed to perform a wide range of roles and functions, preparing for a future where AI is integral to various industries and aspects of everyday life.

Video Courtesy of NBC News

AI Companies Are Breaking Their Promises - Public Safety an "Afterthought" in Race To Build More Powerful AI Models...

A new report finds that big tech companies are falling short when it comes to keeping their promises around developing artificial intelligence (AI) responsibly. Researchers at Stanford University looked into how companies that have published ethics rules and hired experts are putting those principles into practice.

What they found is concerning. Even with all the talk of "AI ethics," many companies still prioritize performance and profits over safety and responsibility when building new AI products.

The Stanford Team Interviewed 25 People Working Inside the Current Top AI Companies...

These employees said they lack support and are isolated from other teams. Product managers often see them as hurting productivity or slowing down product releases. One person said "being very loud about putting more brakes on [AI development] was a risky thing to do."

Governments, academics and the public have raised worries about issues like privacy, bias, and AI's impacts on jobs and society. Tools like chatbots are advancing very quickly, with new releases from companies like Google and OpenAI.

Promises to Develop AI Responsibly Seem to Have Been Empty Words, Meant To Calm Public Concern...

Employees within the AI companies say ethical considerations are an afterthought, happening "too late, if at all" - instead, they're told to focus on the numbers, such as user engagement and AI performance. These are the metrics that dominate decision-making, rather than equally important measures around fairness or social good.
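To make that contrast concrete, here is a minimal sketch of the two kinds of numbers side by side: an engagement rate of the sort that reportedly dominates decision-making, and one simple fairness measure (demographic parity difference) of the sort employees say gets sidelined. All figures are made up for illustration and do not reflect any company's actual metrics or tooling.

```python
# Toy contrast between an engagement metric and a fairness metric
# (demographic parity difference). All numbers are invented for
# illustration; this is not any company's internal dashboard.
import numpy as np

# Hypothetical model decisions (1 = approved/shown), protected-group
# labels, and click outcomes for a small batch of users.
decisions = np.array([1, 1, 0, 1, 1, 0, 1, 0, 0, 0])
group     = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
clicks    = np.array([1, 0, 0, 1, 1, 0, 1, 0, 0, 0])

# The metric that typically dominates: engagement rate.
engagement_rate = clicks.mean()

# A fairness measure: gap in positive-decision rates between groups.
rate_a = decisions[group == "A"].mean()
rate_b = decisions[group == "B"].mean()
demographic_parity_diff = abs(rate_a - rate_b)

print(f"engagement rate:         {engagement_rate:.2f}")
print(f"demographic parity diff: {demographic_parity_diff:.2f}")
```

A dashboard built around the first number alone can look healthy while the second quietly grows, which is exactly the kind of imbalance the Stanford interviewees describe.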

In short, despite public commitments to ethics, tech companies are deprioritizing real accountability as they race to build the latest, most advanced artificial intelligence.

Companies Focus on Winning the Race to Release the 'Most Powerful AI' of the Moment, Then Learn What It Is Capable Of...

Instead, AI development should be guided by a clear understanding of what the AI they're building can and should be able to do, rather than by a race to maximize profits or ship the most powerful version.

There's no downplaying the massive challenge for-profit AI companies face: they must balance innovation, profitability, and ethics, and falling short in any one of these areas greatly increases the odds that the company will not survive.

It is vital that the AI industry understands it must operate differently than any other segment of the tech industry, with investor satisfaction no longer the top priority. This may actually be a fairly simple change to implement: companies just need to educate their investors. Informed investors will demand that public safety come first, since none of them would want to have funded the company that, for example, triggered a forced global internet shutdown because that was the only way to stop its creation from self-replicating and spreading, or worse.

-----------
Author: Trevor Kingsley
Tech News CITY /New York Newsroom


AI Continues To Advance At Rapid Pace - The Top Stories from the World of AI...

AI news

The world of Artificial Intelligence (AI) is ever-evolving, and this week has been particularly groundbreaking, especially in the realm of AI art. From new features in image generation platforms to legal battles over AI-generated art, there's a lot to unpack. Here's a comprehensive update on what you need to know.

Midjourney's In-Painting Feature

Midjourney, a prominent player in the AI art space, recently rolled out its in-painting feature, which allows users to selectively modify specific regions of an image. For instance, you can change a character's hairstyle or clothing by simply selecting the area and entering a prompt. The feature has been praised for producing higher-quality, more detailed images, even when the entire image is selected and regenerated with the same prompt.
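Midjourney's pipeline is proprietary, but the underlying idea, regenerating only a masked region of an image under a new prompt, is the same as open-source diffusion in-painting. Here is a minimal sketch using Hugging Face's diffusers library; the checkpoint name, file names, and prompt are illustrative assumptions, not anything Midjourney has published.

```python
# Minimal sketch of diffusion-based in-painting with the open-source
# diffusers library. Midjourney's own system is proprietary; the model
# checkpoint, prompt, and file names below are illustrative assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Load a publicly available in-painting checkpoint (assumed here).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("portrait.png").convert("RGB")   # original image
mask_image = Image.open("hair_mask.png").convert("RGB")  # white = region to repaint

# Only the masked region (e.g. the character's hair) is regenerated
# under the new prompt; the rest of the image is preserved.
result = pipe(
    prompt="short curly red hair",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("portrait_inpainted.png")
```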

What's Coming Next? According to David Holz, the founder of Midjourney, the company is focusing on enhancing the in-painting feature while also prioritizing development of version 6, which aims to offer more control, better text understanding, and improved resolution. There is no estimated release date yet.

Ideogram AI: Text to Image Revolution

Ideogram AI, developed by a team from Google Brain, UC Berkeley, CMU, and the University of Toronto, has introduced a standout feature: adding text to AI-generated images. The platform allows users to generate images based on text prompts, offering a level of quality and detail that surpasses other platforms.

Actually, we used it for this article's header image!

Leonardo AI's Anime Pastel Dream

Leonardo AI has added a new model called Anime Pastel Dream, which allows users to generate anime-style images. The model is accessible through the Leonardo app and has been praised for the quality of images it produces.

Legal Challenges in AI Art

A U.S. federal judge recently ruled that AI-generated art cannot be copyrighted if it is produced without human intervention. This decision has sparked debates and discussions about the nuances of copyright laws concerning AI-generated art.

AI in Marketing: A Partnership with HubSpot

In collaboration with HubSpot, we're offering a free report on how AI is revolutionizing marketing. The report, "AI Trends for Marketers in 2023," provides insights into how AI tools are being used to create content faster, analyze data instantly, and increase ROI.

YouTube and AI in Music

YouTube has announced a partnership with Universal Music Group to explore the ethical and responsible use of AI in the music industry. They aim to ensure fair compensation for artists and record labels.

YouTube is also testing a new feature that allows users to hum a song to search for it. Built on a machine-learning model, this feature can identify a song based on its "fingerprint" or signature melody.
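Google has not published the details of its model, but the general idea of a melody "fingerprint" can be sketched: reduce the hummed audio to a key- and tempo-invariant pitch contour and compare it against the contours of known songs. Below is a toy illustration using the librosa library, assuming hypothetical file names and a hand-built song catalogue; the production system is far more sophisticated.

```python
# Toy illustration of melody "fingerprinting": reduce a recording to a
# normalized pitch contour and compare contours by correlation. Google's
# actual model is not public; file names and the catalogue are placeholders.
import numpy as np
import librosa

def melody_fingerprint(path, sr=16000):
    y, _ = librosa.load(path, sr=sr)
    # Estimate the fundamental frequency (the hummed or sung melody line).
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C6"), sr=sr)
    f0 = f0[voiced & ~np.isnan(f0)]
    # Key-invariant contour: log-pitch with the mean removed.
    contour = np.log2(f0) - np.mean(np.log2(f0))
    # Tempo-invariant length: resample to a fixed number of frames.
    return np.interp(np.linspace(0, 1, 128),
                     np.linspace(0, 1, len(contour)), contour)

def match(hum_path, catalogue):
    hum = melody_fingerprint(hum_path)
    scores = {name: np.corrcoef(hum, melody_fingerprint(path))[0, 1]
              for name, path in catalogue.items()}
    return max(scores, key=scores.get), scores

# Example usage (paths are hypothetical):
# best, scores = match("my_hum.wav", {"Song A": "song_a.wav", "Song B": "song_b.wav"})
```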

Advances in Healthcare

Microsoft and Epic are collaborating to use generative AI in healthcare. They aim to improve clinician productivity, reduce manual labor-intensive processes, and advance medicine for better patient outcomes. AI is also helping paralyzed individuals communicate through brain implants, marking a significant advancement in healthcare technology.

Conclusion

AI is not just a technological marvel; it's a tool that's shaping various industries, from art and marketing to healthcare. Despite some legal and ethical challenges, the future of AI looks promising. Companies are investing heavily in AI, and it's clear that we're just scratching the surface of its potential.


-----------
Author: Trevor Kingsley
Tech News CITY /New York Newsroom

'Godfather of AI' on AI's Potential Risk To Society...

Geoffrey Hinton is one of the leading voices in the field of AI; he quit his job at Google over concerns about where the technology could eventually lead if left unchecked.

Video courtesy of PBS Newshour

19 of The World's Largest Tech Companies ORDERED to REVEAL Algorithms Behind Their Latest AI Developments...

ai in europe

The European Commission is making 19 tech giants, including Amazon, Google, TikTok, and YouTube, reveal their AI algorithms under the Digital Services Act. This is a significant step towards making AI more transparent and accountable, and ultimately, improving our lives.

As we know, AI is expected to impact every aspect of our lives, from healthcare to education to even how well we write. However, it also generates fear: concerns about machines becoming smarter than us or causing harm inadvertently. To avoid these risks, transparency and accountability will be crucial if AI is to benefit us.

The EU Artificial Intelligence Act aims to achieve this goal. By sharing commercial information with regulators before using AI for sensitive practices such as hiring, companies can be held accountable for the outcomes of their algorithms. EU rules could quickly become the global standard, making this a significant development in AI regulation.

However, there's always a balance to strike when it comes to regulation. The major tech companies view AI as the next big thing, and innovation in this field is now a geopolitical race. Too much regulation could stifle progress, but at the same time, we need to make sure that companies are accountable for their algorithms' outcomes.

Companies will also need to answer any questions the commission members have about their AI projects.

This is a significant development for AI regulation that will benefit everyone. By making AI more transparent and accountable, we can ensure that it improves our lives and avoids the potential risks.

Will They Even Have The Answers?

Interestingly, AI researchers are increasingly devoting time to understanding what AI is doing. Sometimes they can dig into the data and identify particular parameters on which the AI relies heavily. However, explaining why AI did or said something can be like explaining a magic trick without knowing the secret. 
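As one illustration of what that "digging in" can look like (not a description of any particular lab's internal process), a common starting point is permutation importance: shuffle each input feature and measure how much the model's accuracy drops. The dataset and model below are stand-ins, and even this only reveals which inputs the model relies on, not why it uses them the way it does.

```python
# Permutation importance: a simple interpretability tool that ranks input
# features by how much accuracy drops when each one is shuffled. The
# dataset and model are stand-ins chosen for a self-contained example.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print the five features the model relies on most heavily.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:30s} {result.importances_mean[idx]:.3f}")
```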

This may be the most alarming revelation from these hearings – the creators don't always understand their creations.

-----------
Author: Trevor Kingsley
Tech News CITY /New York Newsroom

Popular Misconceptions about ChatGPT, and the Truth About Them...

ChatGPT

As ChatGPT gains popularity, misconceptions about this AI-powered conversational agent abound. As a reporter, it's important to set the record straight on some of the most common misunderstandings surrounding this tool. Let's take a closer look at three of them.

-  Firstly, many people assume that ChatGPT is a human-like chatbot. While it's true that ChatGPT simulates human-like conversation, it's important to clarify that this is an artificial intelligence system, not a real person. Its responses are based on patterns learned from vast amounts of text data, and it doesn't possess emotions or consciousness like a human would.

-  Secondly, there's a belief that ChatGPT always provides accurate and reliable responses. However, this couldn't be further from the truth. Although ChatGPT is impressive at generating responses, it's a machine learning model, meaning its responses can be influenced by the quality and bias of the data it was trained on. Additionally, there may be cases where it generates inappropriate or offensive responses.

-  Lastly, some people think that ChatGPT is a problem-solving superhero, capable of solving any issue thrown its way. While ChatGPT is an incredibly useful tool, it's not a replacement for human expertise, creativity, and intuition. There are certain types of problems that require human intelligence to solve, and ChatGPT may not be able to provide meaningful solutions in these cases.

In conclusion, ChatGPT is a powerful tool for generating responses and providing insights. However, it's essential to understand its limitations. By having a realistic understanding of ChatGPT's capabilities, users can take full advantage of its strengths while avoiding potential pitfalls. As a reporter, it's crucial to separate fact from fiction and ensure that readers have the most accurate information available.

-----------
Author: Trevor Kingsley
Tech News CITY /New York Newsroom