The Most Recent Articles

Showing posts with label tech news. Show all posts

Why is Mark Zuckerberg's Hawaii Home a Fully Functional DOOMSDAY BUNKER?!

 

Zuckerberg's Hawaii home

Mark Zuckerberg is constructing a $397 million compound in Hawaii that appears to be more than a family home. With blast-proof doors, an enormous underground bunker, and fortress-like security, the extravagant compound seems to be ready for a lot more than vacations.

There’s nothing subtle about Silicon Valley billionaire Zuckerberg’s efforts to transform over 500 hectares of pristine Kauai coastline into a secluded doomsday retreat for his family.

Critics contend Zuckerberg has overridden ancient indigenous property rights and indulged an extreme passion for privacy that seems hypocritical coming from someone who earned his fortune monetizing people’s personal information.

The sprawling compound will include two large mansions, over a dozen guesthouses, and more than 30 bedrooms and bathrooms.

The two main mansions have a combined floor area approaching the size of an American football field.

Other highlights are a full gym, tennis court, pools, and an elaborate underground bunker with living space, a mechanical room, and an emergency escape hatch. The bunker door is built to military-grade anti-blast specifications. 

Security is unparalleled, with a vast camera network and keypad-operated interior doors. Some doors are even disguised as walls. The compound will be entirely self-sufficient with its own water storage, solar power, and agricultural production.

But Zuckerberg’s intense secrecy around the project has angered locals. Workers must sign strict non-disclosure agreements, a long rock wall blocks coastal views, and guards monitor the beach below. Some feel the once-pristine shore now resembles a “prison camp.”

While other celebrities own discreet Kauai homes, Zuckerberg’s compound is viewed as an invasive colonization of sacred land where private property rights didn't exist until the mid-1800s. 

After buying the initial 283-hectare estate, Zuckerberg sued hundreds of native Hawaiians who potentially had ancestral claims on sections of the land. He insisted this was to pay them fairly, but locals saw it as highly confrontational.

Though Zuckerberg later dropped the lawsuits, he's still accused of improperly pressuring land sales. His donations to local causes are seen as buying goodwill after Facebook’s Cambridge Analytica privacy scandal.

For tech billionaires like Elon Musk and Larry Ellison, the isolation of Hawaii makes it the ultimate apocalypse hideaway. That’s likely why Zuckerberg wants to “plant roots” there. But his extreme compound has ruined the island paradise that initially drew him in.

-----------
Author: Alex Benningram
Tech News CITY /New York Newsroom

A Hands-On Look at the Apple Watch Series 10!


Hard to believe, but the Apple Watch has now gone through 10 versions since it was created - so what did Apple change and improve for the big milestone? A few things, some good, some probably unnecessary...

Video Courtesy of MacRumors

THE EDGE: Stem Cells from the Foreskins of Circumcised Babies Used to Grow 'Mini-Brains' Able to Process Data and Run AI Faster - While Consuming Almost NO ENERGY...

The Edge By Ross Davis

Welcome to THE EDGE, a series that focuses on the edge of technology. This series will not be about obscure, small advances in technology—our goal is to be where you will first hear about something that everyone will be talking about in the next few years. This is where you'll discover tech with the ability to transform and shape our future, and follow its development from its earliest phases.

Personally, as someone who's been interested in the evolution of technology for as long as I can remember, I feel the next wave of technology comes with a risk factor we haven't seen before.

AI brings with it the disruption of human-to-human interaction, whether it be professionally, as it replaces co-workers or entire departments in the workforce, or in the still little-explored but very real issue of AI replacing intimate personal relationships. Virtual reality opens the doors to everything from entertainment reaching new levels of immersion to face-to-face interactions, meetings, and even family gatherings, regardless of the miles separating people. But it also comes with the very real risk of people choosing virtual worlds over real life, replacing themselves with an avatar based on their idealized version of themselves. It can feel real enough that it's easy to ignore the fact that the only interactions in this virtual world are between façades.

The source of the most dangerous technology has remained the same for most of human history, probably because it has always had the largest budget—and that is military technology. Even here, advances introduce entirely new levels of danger, where the atom bomb has become just one of several weapons that could end mankind.

But I chose the topic for this first issue for several reasons: the ability of this technology to have major implications on all the other technologies I've mentioned, the pace at which it is progressing, and the very real concerns that come with it. However, my primary reason for choosing this topic is that while people are familiar with the general concept, outside of the labs where its development is taking place, people seem completely unaware of just how deep scientists are into this uncharted territory.

I’m talking about where life meets machine—the point where the lines between the biological and the artificial are not just blurred, they’re rapidly becoming entirely erased.

Human Stem Cells Used to Create Lab-Grown Brain Tissue, Now Being Used for Organoid Intelligence (OI)...

The human brain can rapidly process massive amounts of data, and brain cells require far less energy than silicon to do that work. It turns out that when these cells are given data, they return data as well. This has scientists working toward a new kind of computer processor, one powered by neurons like those in our brains instead of silicon.

Lab-Grown Brains:
The stem cells can come from a number of sources; one popular source for researchers is the previously discarded foreskins of circumcised infants. These cells can essentially be reprogrammed into stem cells. A stem cell can be thought of as a human cell that hasn't yet been told what it should become - it can turn into tissue, skin, organ, or bone. OI researchers guide these cells into becoming neurons by placing them in a culture dish with already-formed neurons, which secrete molecules that (for lack of a better word) trick the stem cells into thinking they are supposed to help grow a human brain. Done repeatedly, this eventually yields a lab-grown brain.

This image shows a lab-grown mini brain created by Cortical Labs, which learned to play the video game 'Pong'.

Organoid Intelligence represents a new frontier, utilizing lab-grown brain organoids to accomplish computational tasks. While artificial intelligence today depends on silicon chips and machine-learning algorithms, OI would exploit the inherent information-processing and storage capabilities of biological neurons. The idea is that self-organizing, adaptive brain-like structures might be far more powerful and flexible than today's computer systems, even opening new routes for information processing and cognitive studies.

Controversy
The largest concern with Organoid Intelligence is the possibility of such brain organoids becoming conscious or sentient. Scientists currently agree that the organoids used in research are too simple to become conscious.

However, as these systems become more complex and lab-grown 'mini-brains' are no longer 'mini', these artificially constructed brains could develop the capacity to feel or, worse, think.

This Tech is Classified as 'Wetware'...

Wetware is our human 'hardware' - the parts of human biology able to process data, such as the brain and central nervous system. In more advanced contexts, wetware also embraces engineered or synthetic biological constructs that merge the natural capabilities of the brain with advanced technology.

Controversy
Wetware falls into that gray area between biology and technology, raising questions of autonomy, human identity, and the ethics involved in manipulating living systems for technological ends. Some critics see it as the commodification of life, reducing human beings to mere components of larger machine-driven systems. The most heated debates will continue to revolve around privacy and control - are these systems vulnerable to being hacked or manipulated? What might happen when biological and digital components have become so intertwined as to be inseparable?

What's Next?

Wetware systems are here, and just weeks ago they became accessible to people outside of research labs, when a Swiss company announced the world's first 'neuroplatform', where a 4-organoid biocomputer can be accessed for a $500/month fee.

Now that it's here, the future of wetware is about increasing its capabilities by growing larger networks of lab-grown neurons. The next stage of development in brain-computer interfaces will be advanced neuroprosthetics: neural interfaces that would enable real-time interaction between a brain's activity and machine learning algorithms. Among other things, this could have serious implications on both the medical and enhancement fronts, pushing us toward a world where thoughts can directly control computers - and conceivably vice versa.

Neural Dust: Tiny Sensors Inside Your Brain

Neural Dust is formed from micro-sized, wirelessly powered sensors that can be implanted in the human body and, more particularly, inside the brain, for the perception and manipulation of neural activity. These speck-of-dust-sized particles operate on the power provided by external ultrasound to send information about real-time brain activity to the outside world. Some of the other potential uses of Neural Dust include the treatment of various neurological disorders, such as epilepsy, and deeper brain-computer interfacing that could enable people to control machines with their minds.

Controversy
The very notion of dust-sized sensors nestled inside our bodies or brains conjures up certain immediate implications of privacy and self-governance. That such sensors may, in fact, be capable of monitoring brain activity and maybe even altering it opens up a Pandora's box of ethical considerations: who does the information belong to, and what are the safeguards against misuse or surveillance? In theory, Neural Dust might allow for highly invasive monitoring; it could be a tool for government overreach or corporate exploitation. There are also medical risks since the long-term effects of having foreign objects implanted in the body—let alone the brain—are not yet fully understood.

What's Next?
But despite these cautions, researchers forge ahead. The next generation of Neural Dust sensors will be even smaller and could, in theory, be used to monitor individual neurons and respond directly with artificial intelligence. Applications could range from advanced medical treatments to human augmentation and beyond: the direct integration of the brain with the machinery around it. At every step, there will be a need for exhaustive discussion of privacy, consent, safety, and security.

In Closing...

Organoid Intelligence, Wetware, and Neural Dust are only the very cutting edges of technologies where biology and computing meet. While such technologies have the power to disrupt industries from medicine to artificial intelligence by providing never-before-imagined capabilities, they also confront us with a completely new world of ethical dilemmas and raise questions about our relationship to technology, biology, and our perceived sense of self.

The crazy thing is, this still barely scratches the surface. But the goal isn't to cover everything there is to know - it's to cover what you need to know to stay informed, aware, and ready for how emerging technologies can shape our world, and our lives.

Stay tuned for more as we continue to explore the edge of innovation.

------------------------
Author: Ross Davis
Silicon Valley Newsroom

United Nations Warns AI Could Deepen Global Inequity Without an All-Encompassing Global Strategy...

UN on Artificial Intelligence

In little more than a decade, AI has emerged as one of the most disruptive technologies of our time: machines can now learn from big data, enabling computers to perform tasks that were hitherto the exclusive domain of humans. From furthering scientific research to solving sustainability issues worldwide, the possibilities with AI seem endless. However, the UN is sounding the alarm over the risks this technology poses, especially its potential to create a wide disparity gap between different parts of the world if left unmanaged.

A Double-Edged Sword

The UN speaks to the tremendous promise of AI in its recent report, entitled Governing AI for Humanity. It points out that AI is enabling progress in scientific discovery, medicine, and sustainability much faster than ever imagined. AI-driven systems already help analyze climate data, enhance resource management, and advance the UN Sustainable Development Goals.

In this way, AI is positioned as a driving component in solving some of the current world problems, from eradicating poverty to mitigating climate change.

The UN warns, however, that without a global strategy to oversee and regulate AI, the technology could exacerbate inequalities. Wealthy nations and corporations with the resources to develop and deploy AI at scale would monopolize its benefits, while underdeveloped regions are left behind, unable to access or employ AI tools that could improve their economies, health systems, and education.

Disinformation, Automated Weapons, and Climate Risks

The UN's concern is not only that AI will cause an economic imbalance. The report also warns of the dark side of the technology's capabilities: its potential to spread disinformation, fuel conflict, and exacerbate climate change.

Unfortunately, AI can be a game-changing tool for generating and amplifying disinformation on social media, directly swaying public opinion and undermining societies. Considering the already alarming prevalence of fake news and misinformation online, this prospect is distressing. In the wrong hands, it could be used to disrupt democratic processes and erode trust within a society, leading to further fragmentation.

Another alarming aspect of AI development is weapons automation, which has emerged as a global security concern. Autonomous flying drones and other AI-powered weapons systems raise not only ethical concerns about warfare, but also the possibility of unintended consequences in conflict zones.

Moreover, large-scale AI systems demand enormous amounts of energy to operate, and in turn pose an environmental threat of their own. Training them requires huge computational resources, which often leads to increased carbon emissions. If not managed properly, the very tool expected to help fight climate change could end up accelerating it - a Catch-22 the global community urgently needs to solve.

The Imperative for a Global AI Strategy

Fully realizing that AI is capable of both solving and creating problems, the UN is calling for international cooperation in governing the development and deployment of AI technologies. What it demands is a holistic global approach to ensure that the benefits of AI are shared equitably and its risks mitigated - from setting ethical standards and developing transparent regulatory frameworks to fostering cooperation among nations to bridge the digital divide. There is little question that AI holds much promise, but its perils are real. In the UN's assessment, only through a joint global approach can humanity make certain that AI proves to be a tool of progress for all, rather than a driver of inequality.


-----------
Author: Trevor Kingsley
Tech News CITY /New York Newsroom

Google Docs Faces TWO New Challengers, as both Proton and Zoom Launch Competing, Free-To-Use Alternatives...

Google Docs rivals

Google Docs' long-standing dominance in the collaborative document editing space is facing new challenges. Over the past two months, two major players have entered the arena, potentially reshaping the landscape of online document creation and editing.

Last month, Proton, best known for its privacy-focused email and VPN services, unveiled its 'Docs' feature.

This new offering places a strong emphasis on user privacy, boasting end-to-end encryption for all aspects of document editing. According to Proton, "Docs in Proton Drive are built on the same privacy and security principles as all our services... Best of all, it's all private — even keystrokes and cursor movements are encrypted." This level of security could be a game-changer for privacy-conscious users and organizations.

Following closely on Proton's heels, Zoom has now launched its own document editing solution.

This move appears to be part of Zoom's strategy to expand beyond video conferencing and create a more comprehensive workspace platform. The company is leveraging its existing user base, arguing that keeping all work-related activities within a single ecosystem can boost productivity. Zoom claims that users can save up to two hours per week by "limiting workflow distractions," presumably by reducing the need to switch between different applications.

Zoom's offering comes with a tiered pricing model. Free account holders can access basic features but are limited to sharing up to 10 documents simultaneously. Paid plans, starting at $14.99 per month, remove this limitation and include access to an AI writing assistant, adding an extra layer of functionality that could appeal to power users.

The key question now is whether these new entrants can make a significant dent in Google Docs' user base. While there hasn't been any major controversy surrounding Google Docs to drive users away, there is a growing trend of individuals and businesses seeking alternatives to Google's ecosystem. This sentiment extends beyond just document editing, with Google facing increased competition in its core search business as well.

However, it remains to be seen whether the market segment looking for Google alternatives is substantial enough to propel these new competitors to success. The coming months will be crucial in determining whether Proton and Zoom can carve out significant market share in this space.

As these new platforms evolve and user adoption patterns emerge, we'll be keeping a close eye on how this competition unfolds. It's an exciting time in the world of collaborative document editing, and the implications could extend far beyond just how we create and edit documents online.

____
Author: Stephen Hannan
New York Newsroom

iPhone 16 Details Leak: Are MAJOR Upgrades Coming to the New Model?


Some details on the iPhone 16 have leaked, and they have many calling this the biggest upgrade Apple has made to the iPhone in years - so, what's new?

OpenAI Finds itself in Multiple Controversies... Again. Plus Other Big AI News This Week....

 

AI News This Week

The world of artificial intelligence continues to evolve rapidly, with new developments emerging across various domains. Here's a roundup of the latest AI news:

AI Image Generation Reaches New Heights

Recent advancements in AI-generated images have been remarkable, with models like Flux producing incredibly realistic human portraits. While some minor imperfections remain, such as gibberish text on lanyards or slightly off microphone renderings, the overall quality is becoming increasingly difficult to distinguish from real photographs.

OpenAI Continues to Find itself in Controversies, and other OpenAI News...

OpenAI, one of the leading AI research companies, has been at the center of several developments and controversies:

Cryptic "Strawberry" References: CEO Sam Altman posted an image of strawberries, fueling speculation about a rumored advanced AI model codenamed "Strawberry."

Leadership Changes: Co-founder John Schulman left to join Anthropic, while Greg Brockman announced an extended sabbatical.

New Board Member: Zico Kolter, an AI safety and alignment expert, joined OpenAI's board.

GPT-4o System Card: OpenAI released a detailed report on safety measures for GPT-4o.

Emotional Attachment Warning: The company cautioned about potential user emotional reliance on AI voice modes.

Structured Outputs API: A new feature for developers was introduced to improve data handling.

AI Text Detection Tool: OpenAI developed but chose not to release a tool for identifying AI-generated text.

Legal Challenges: Elon Musk filed a new lawsuit against OpenAI, while a YouTuber initiated a class action suit over alleged copyright infringement.

Anthropic's Bug Bounty Program

Anthropic launched a bug bounty program offering up to $15,000 for discovering novel jailbreak attacks on its AI models.

AI in Job Searches

HubSpot released a toolkit designed to help job seekers leverage AI in their search for employment opportunities.

Character AI Partners with Google

Character AI's co-founders are joining Google, with the company's CEO returning to his former employer to work on AI models at DeepMind.

Advancements in AI for Math and Video

Qwen-2-Math: A new large language model fine-tuned for mathematical tasks outperforms existing models in benchmark testing.

ByteDance's Jimeng AI: TikTok's parent company debuted a new AI video generation model, though how its capabilities compare to OpenAI's Sora remains unclear.

Runway ML Updates: Runway introduced a new feature allowing users to specify ending frames in AI-generated videos.

Opus Clip Enhancements: The AI-powered video clipping tool added new capabilities for identifying specific content within videos.

Other AI Integrations and Developments

WordPress AI Writing Tool: Automattic launched an AI tool to improve blog readability.

Amazon Music and Audible AI Features: Both platforms are testing AI-powered content discovery features.

Reddit AI Search: The platform is testing AI-generated summaries for search results.

Google's AI-Powered TV Streamer: A new device leveraging Gemini AI for content curation is in development.

AI in Drive-Throughs: Wendy's is testing AI-powered ordering systems with improved accuracy.

Robotics Advancements: Google DeepMind showcased a table tennis-playing robot, while Nvidia demonstrated AR-controlled robotics using Apple Vision Pro.

New Humanoid Robot: Figure Robotics unveiled the Figure 02, a new humanoid robot being tested on BMW production lines.

As AI technology continues to advance, we can expect to see more innovations and integrations across various industries in the coming months - and you'll hear about them here as soon as they happen!

-----------
Author: Trevor Kingsley
Tech News CITY /New York Newsroom

Video: As Robotics and AI Merge, a Huge Competition for Progress, and Funding, is Developing...


In This Video: Major technology and robotics companies are advancing their efforts to develop and train artificial intelligence models. These AI systems are being designed to perform a wide range of roles and functions, preparing for a future where AI is integral to various industries and aspects of everyday life.

Video Courtesy of NBC News

Major Windows Security Hole Went Unpatched by Microsoft for Over a YEAR...

 

Windows Exploit

Hackers have been targeting Windows 10 and 11 users with malware for over a year, but a fix has finally arrived in the latest Windows update released on July 9th.

This vulnerability, exploited by malicious code since at least January 2023, was reported to Microsoft by researchers. It was fixed on Tuesday as part of Microsoft’s monthly patch release, tracked as CVE-2024-38112. The flaw, residing in the MSHTML engine of Windows, had a severity rating of 7.0 out of 10.

Security firm Check Point discovered the attack code, which used “novel tricks” to lure Windows users into executing remote code. One method involved a file named Books_A0UJKO.pdf.url, which appeared as a PDF in Windows but was actually a .url file designed to open an application via a link.

Internet Explorer Continues to Haunt Windows...

When viewed in Windows, these files looked like PDFs, but they opened a link that called msedge.exe (Edge browser). This link included attributes like mhtml: and !x-usc:, a trick long used by threat actors to open applications such as MS Word. Instead of opening in Edge, the link would open in Internet Explorer (IE), which is less secure and outdated.

Internet Explorer, Microsoft's infamously insecure browser, has been discontinued for years, yet previously unknown vulnerabilities are still occasionally discovered. The point being: once a hacker has Internet Explorer open, along with the ability to tell it to open a URL, they can choose from a wide variety of methods to install software, execute code, or destroy data.

IE would prompt the user with a dialog box to open the file, and if the user clicked “open,” a second dialog box appeared, vaguely warning about opening content on the Windows device. Clicking “allow” would cause IE to load a file ending in .hta, running embedded code.

Haifei Li, the Check Point researcher who discovered the vulnerability, summarized the attack: the first technique used the “mhtml” trick to call IE instead of the more secure Chrome or Edge; the second tricked users into thinking they were opening a PDF while actually executing a dangerous .hta application. Together, the two tricks were designed to make victims believe they were simply opening a PDF.
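To make the lure concrete, here is a minimal Python sketch (not Check Point's tooling - the scanning logic is our own illustration) that flags Windows .url shortcut files whose URL= line contains the mhtml: or !x-usc: markers described above:

```python
from pathlib import Path

# Markers from the technique described above: a .url shortcut whose
# URL= line abuses the mhtml: handler and the !x-usc: token to route
# the link through Internet Explorer instead of Edge.
SUSPICIOUS_MARKERS = ("mhtml:", "!x-usc:")

def is_suspicious_url_file(path: Path) -> bool:
    """Return True if a .url file's URL= line contains a known marker."""
    if path.suffix.lower() != ".url":
        return False
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return False
    for line in text.splitlines():
        if line.lower().startswith("url="):
            value = line[4:].lower()
            return any(marker in value for marker in SUSPICIOUS_MARKERS)
    return False
```

A defender could run this over a Downloads folder, e.g. `[p for p in Path.home().glob("Downloads/*.url") if is_suspicious_url_file(p)]`, to surface shortcuts worth a closer look.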

Check Point’s report includes cryptographic hashes for six malicious .url files used in the campaign, which Windows users can use to check if they’ve been targeted.
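Comparing a file against published hashes is easy to do yourself; this generic Python sketch computes a SHA-256 digest and checks it against a hash set (the set you pass in should come from Check Point's actual report - nothing here is one of their real indicators):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_known_bad(path: Path, known_hashes: set[str]) -> bool:
    """True if the file's digest appears in a published IoC hash set."""
    return sha256_of(path) in known_hashes
```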

____
Author: Stephen Hannan
New York Newsroom

Apple Pay will Soon Be An Option for iPhone Users with WINDOWS PCs - No Safari Needed; Both Chrome and Firefox to Be Supported...

Apple Pay on Windows PC Chrome Firefox

Apple is about to make waves in the digital payment space when iOS 18 goes live: support for Apple Pay on non-Safari desktop web browsers. That's right - Apple Pay on Windows PCs, inside Chrome or Firefox. It appears Apple's goal is to make Apple Pay something people use for every transaction they make.

Apple demonstrated the new feature at WWDC 2024 for a group of developers, who shared the details with us.

Here's How it Will Work...

If you’re browsing the web on a desktop browser that isn’t Safari and you spot an Apple Pay button, you’re in luck: clicking that button will now generate a special QR code. Pull out your iPhone running iOS 18, scan the code, and voila – it's like any other Apple Pay transaction from that point. This not only makes shopping more convenient but also bridges the gap between different platforms, enhancing the overall user experience.

Until Now, Apple Pay Was Only Accessible in Very Specific Situations...

Until now, using Apple Pay outside of an Apple-produced mobile device was limited to Mac computers and laptops, and even those could only use it in the Safari browser. This update really is a big change.

Regardless of your browser or operating system, you might start noticing a lot more Apple Pay buttons popping up across the web.

From a practical point of view, this option seems both faster and easier than typing in a credit card number. But in cases where someone is using a service they visit often, and likely already has a card on file, Apple Pay would be an additional, pointless step.

Stay tuned for more updates as we inch closer to the iOS 18 release date this fall!

-----------
Author: Alex Benningram
Tech News CITY /New York Newsroom

The Latest News in AI's Evolution: 6 Recent Stories and Announcements You Need to Know to Stay Up to Date...

AI News from TechNews.CITY

Like every week, there has been a torrent of AI news, reflecting the rapid progress and growing implications of this transformative technology. While many updates represent incremental advancements, some developments carry profound future implications, and a few are simply downright bizarre and amusing.

The AI Landscape's Overwhelming Complexity

An infographic from First Mark Capital vividly illustrates the staggering complexity of the current AI landscape. Dubbed the "2024 ML/AI/Data Landscape," the image depicts the sheer number of companies involved in this space, encompassing both established giants and numerous smaller players. This visual representation serves as a stark reminder of just how monumental and widespread the AI revolution has become.

Microsoft and OpenAI's Ambitious Data Center Plans

Unconfirmed reports suggest that Microsoft and OpenAI are planning a $100 billion data center project, which would be a staggering 100 times more costly than some of the largest existing data centers. The proposed facility would house an artificial intelligence supercomputer dubbed "Stargate." If realized, this endeavor could propel OpenAI and Microsoft to an unprecedented lead, making it challenging for other companies or open-source models to catch up.

OpenAI's Synthetic Voice Capabilities

OpenAI has unveiled its ability to generate realistic synthetic voices from a single 15-second audio sample. The quality of these AI-generated voices surpasses even the impressive capabilities of tools like ElevenLabs. However, while showcasing this remarkable feat, OpenAI has refrained from making the technology publicly available due to potential misuse concerns. The company is advocating for measures to protect individuals' voices, educate the public about AI-generated content, and develop techniques to track the origin of audiovisual media.

Advancements in AI Art and Music Generation

Several developments in AI-powered art and music generation have emerged. OpenAI has introduced an inpainting feature for its DALL-E model, allowing users to selectively modify specific areas of generated images. Stability AI has unveiled Stable Audio 2.0, enabling the generation of three-minute songs and audio-to-audio generation based on hummed or instrument sounds. However, the quality of AI-generated music remains a subject of debate, with a group of musicians, including Nicki Minaj and Katy Perry, signing a letter expressing concerns about the irresponsible use of AI in music.

Anthropic's Research and Apple's AI Ambitions

Anthropic researchers have discovered that repeatedly asking harmless questions to large language models can eventually lead them to provide potentially harmful information, a phenomenon they are actively investigating. Meanwhile, Apple appears to be deepening its involvement in AI, revealing the "Realm" language model designed to enhance voice assistants like Siri by improving context understanding and reference resolution.

Ethical Concerns and Regulatory Developments

Ethical and regulatory issues surrounding AI continue to surface. A court in Washington has banned the use of AI-enhanced video evidence, citing concerns about the potential for inaccuracies introduced by upscaling algorithms. Additionally, the company behind the AI-generated George Carlin standup comedy set has agreed to remove all related audio and video content following a settlement with Carlin's estate.

Bizarre and Amusing AI Applications

Among the more unusual AI developments, an autonomous electric scooter called the Ola Solo has been introduced in India, claiming to be the first fully self-driving scooter. In Phoenix, Waymo vehicles are now delivering Uber Eats orders, allowing customers to retrieve their food from self-driving cars. Furthermore, an upcoming season of the Netflix reality show "The Circle" will feature an AI catfish participant, adding an intriguing twist to the show's social media-driven premise.

As the AI news cycle continues to accelerate, it becomes increasingly evident that we are witnessing a technological revolution of unprecedented scale and impact. Stay tuned for more developments, insights, and discussions as we collectively navigate the challenges and opportunities of this "next wave" of innovation.


-----------
Author: Trevor Kingsley
Tech News CITY /New York Newsroom

Apple Introduces All New Security Measures to Protect User Data When a Phone is Stolen...

Apple iPhone new security

Apple has introduced a new safety feature in their latest iPhone software update to help protect your personal information if your iPhone is ever stolen. It's called Stolen Device Protection.

What it does:

If your iPhone is stolen, this feature makes it harder for someone else to access your private stuff on the phone, like your bank details, saved passwords, or Apple ID.

It works by requiring Face ID or Touch ID instead of just a passcode to take certain actions on the phone. For example, if your iPhone is in an unfamiliar location, the thief would need to use Face ID or Touch ID to make payments with your saved cards or log into your accounts. This helps ensure only you can access your private info, even if a thief knows your passcode.

It also makes you wait 1 hour before changing critical security settings like your Apple ID password if your phone is not in a familiar location. This gives you time to mark your device as lost and secure your account if it's stolen.
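The logic described above can be sketched as a simple state check. This is a toy model for illustration only: the class, the method names, and the "familiar locations" set are assumptions, not Apple's actual implementation.

```python
from datetime import datetime, timedelta

SECURITY_DELAY = timedelta(hours=1)  # wait enforced away from familiar locations

class StolenDeviceProtection:
    """Toy model of the checks described above (not Apple's real logic)."""

    def __init__(self, familiar_locations):
        self.familiar_locations = set(familiar_locations)
        self.delay_started_at = None  # when the one-hour countdown began

    def can_use_saved_passwords(self, location, biometric_ok):
        # Sensitive reads always require Face ID / Touch ID when enabled;
        # a passcode alone is never enough.
        return biometric_ok

    def can_change_apple_id_password(self, location, biometric_ok, now):
        # At a familiar location, biometrics alone are enough.
        if location in self.familiar_locations:
            return biometric_ok
        # Elsewhere, biometrics AND a one-hour security delay are required.
        if not biometric_ok:
            return False
        if self.delay_started_at is None:
            self.delay_started_at = now  # start the countdown on first attempt
            return False
        return now - self.delay_started_at >= SECURITY_DELAY
```

In this model, a thief who knows only the passcode (biometric_ok is False) can neither read saved passwords nor change the Apple ID password, and away from familiar locations even a successful biometric check must wait out the one-hour delay.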

What it doesn't protect:

If a thief knows your passcode, they can still access your email and info in unprotected apps. Apple Pay will also still work with just the passcode.

How to turn it on:

First, update your iPhone to the latest software (iOS 17.3 or higher). Go to Settings > General > About to check your current iOS version.

Then go to Settings > Face ID & Passcode. You may need to enter your passcode. Look for "Stolen Device Protection" and turn on the switch so it turns green. This enables the feature.

-----------
Author: Trevor Kingsley
Tech News CITY /New York Newsroom

AI Companies Are Breaking Their Promises - Public Safety an "Afterthought" in Race To Build More Powerful AI Models...

A new report finds that big tech companies are falling short when it comes to keeping their promises around developing artificial intelligence (AI) responsibly. Researchers at Stanford University looked into how companies that have published ethics rules and hired experts are putting those principles into practice.

What they found is concerning. Even with all the talk of "AI ethics," many companies still prioritize performance and profits over safety and responsibility when building new AI products.

The Stanford Team Interviewed 25 People Working Inside Today's Top AI Companies...

These employees said they lack support and are isolated from other teams. Product managers often see them as hurting productivity or slowing down product releases. One person said "being very loud about putting more brakes on [AI development] was a risky thing to do."

Governments, academics and the public have raised worries about issues like privacy, bias, and AI's impacts on jobs and society. Tools like chatbots are advancing very quickly, with new releases from companies like Google and OpenAI.

Promises to Develop AI Responsibly Seem to Have Been Empty Words, Meant To Calm Public Concern...

Employees within the AI companies say ethical considerations are an afterthought, happening "too late, if at all" - instead, they're told to focus on the numbers, such as user engagement and AI performance. These are the metrics that dominate decision-making, rather than equally important measures around fairness or social good.

In short, despite public commitments to ethics, tech companies are deprioritizing real accountability as they race to build the latest, most advanced artificial intelligence.

Companies Focus on Winning the Race to Release the 'Most Powerful AI' of the Moment, then Learn What it Is Capable Of...

Instead, AI development should be guided by a clear understanding of what the AI they're building can and should be able to do, rather than focusing solely on maximizing profits or building the most powerful version.

There's no downplaying the massive challenge for-profit AI companies face as they balance innovation, profitability, and ethics - falling short in any of these categories greatly increases the odds that a company will not survive.

It is vital that the AI industry understands it must function differently than any other segment of the tech industry, with investor satisfaction no longer the top priority. This may actually be a fairly simple change to implement: they just need to educate their investors. Informed investors will demand that public safety come first, as all would regret funding a company that, for example, triggered a global forced internet shutdown because that was the only way to stop its creation from self-replicating and spreading, or worse.

-----------
Author: Trevor Kingsley
Tech News CITY /New York Newsroom


The DARK SIDE Of Quantum Computing...


A mini-documentary covering the 'dark side' of quantum computing, "Things the tech industry and scientists developing quantum computers won't speak about publicly".

Video via Podlink International.

AI Continues To Advance At Rapid Pace - The Top Stories from the World of AI...

AI news

The world of Artificial Intelligence (AI) is ever-evolving, and this week has been particularly groundbreaking, especially in the realm of AI art. From new features in image generation platforms to legal battles over AI-generated art, there's a lot to unpack. Here's a comprehensive update on what you need to know.

Midjourney's Inpainting Feature

Midjourney, a prominent player in the AI art space, recently rolled out its inpainting feature, which allows users to selectively modify specific regions of an image. For instance, you can change a character's hairstyle or clothing by simply selecting the area and entering a prompt. The feature has also been praised for producing higher-quality, more detailed images when the entire image is selected and regenerated with the same prompt.

What's Coming Next? According to David Holz, the founder of Midjourney, the company is focusing on enhancing the inpainting features and is also prioritizing the development of version 6. This new version aims to offer more control, better text understanding, and improved resolution. However, there's no estimated release date yet.

Ideogram AI: Text to Image Revolution

Ideogram AI, developed by a team from Google Brain, UC Berkeley, CMU, and the University of Toronto, has introduced a standout feature: reliably rendering legible text within AI-generated images. The platform allows users to generate images based on text prompts, offering a level of quality and detail that surpasses other platforms.

Actually, we used it for this article's header image!

Leonardo AI's Anime Pastel Dream

Leonardo AI has added a new model called Anime Pastel Dream, which allows users to generate anime-style images. The model is accessible through the Leonardo app and has been praised for the quality of images it produces.

Legal Challenges in AI Art

A U.S. federal judge recently ruled that AI-generated art cannot be copyrighted if it is produced without human intervention. This decision has sparked debates and discussions about the nuances of copyright laws concerning AI-generated art.

AI in Marketing: A Partnership with HubSpot

In collaboration with HubSpot, we're offering a free report on how AI is revolutionizing marketing. The report, "AI Trends for Marketers in 2023," provides insights into how AI tools are being used to create content faster, analyze data instantly, and increase ROI.

YouTube and AI in Music

YouTube has announced a partnership with Universal Music Group to explore the ethical and responsible use of AI in the music industry. They aim to ensure fair compensation for artists and record labels.

YouTube is also testing a new feature that allows users to hum a song to search for it. Built on a machine-learning model, this feature can identify a song based on its "fingerprint" or signature melody.
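The "fingerprint" idea can be illustrated with a classic query-by-humming trick: reduce a melody to its pitch contour (the Parsons code) and compare contours, which works even when the hummer is in the wrong key. To be clear, this toy sketch is only an illustration; YouTube's actual feature relies on a learned machine-learning model, not anything this simple.

```python
def parsons_code(pitches):
    """Reduce a melody to its contour: Up, Down, or Repeat for each step.
    A crude melodic 'fingerprint' that ignores the absolute key."""
    code = []
    for prev, cur in zip(pitches, pitches[1:]):
        code.append("U" if cur > prev else "D" if cur < prev else "R")
    return "".join(code)

def match(hummed, catalog):
    """Return the catalog entry whose contour best matches the hummed query."""
    query = parsons_code(hummed)

    def score(entry):
        _name, pitches = entry
        fingerprint = parsons_code(pitches)
        # Count positions where the two contours agree (simple overlap score).
        return sum(a == b for a, b in zip(query, fingerprint))

    return max(catalog.items(), key=score)[0]
```

Because only the up/down shape is compared, humming the right tune a few semitones off still finds the song, which is roughly the robustness a real query-by-humming system needs.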

Advances in Healthcare

Microsoft and Epic are collaborating to use generative AI in healthcare. They aim to improve clinician productivity, reduce manual labor-intensive processes, and advance medicine for better patient outcomes. AI is also helping paralyzed individuals communicate through brain implants, marking a significant advancement in healthcare technology.

Conclusion

AI is not just a technological marvel; it's a tool that's shaping various industries, from art and marketing to healthcare. Despite some legal and ethical challenges, the future of AI looks promising. Companies are investing heavily in AI, and it's clear that we're just scratching the surface of its potential.


-----------
Author: Trevor Kingsley
Tech News CITY /New York Newsroom

Canada PM Justin Trudeau's DELUSIONAL Internet Power-Grab: Wants NON-CANADIAN Sites to PAY to LINK To Canadian Sites...

Canadian internet bills

Canadian Prime Minister Trudeau's government has begun pushing two bills intended to regulate tech companies from around the world. These laws could shake the very core of our democracy and the freedom of the internet, and cause countless Canadian online businesses to fail.


First up:


Bill C-18, or the 'Online News Act', requires the tech giants to cough up cash to show links to Canadian news. Google argues that it’s not fair to risk “uncapped financial liability” just for giving Canadians access to news from local publishers.


Google’s solution? Once Bill C-18 kicks in, it'll remove links to Canadian news from its Search, News, and Discover services. And Meta (the artist formerly known as Facebook) will follow suit, killing off news content on Facebook and Instagram for users in Canada. Looks like Trudeau's government might've shot itself in the foot with this one.


Canadian Heritage Minister Pablo Rodriguez says that tech giants need to pay their “fair share” for news.


Seems like they’re missing the point. The digital ecosystem is a complex beast, and platforms like Google and Meta often drive huge traffic (and therefore ad revenue) to these news sites. It feels like the administration has it all wrong – instead of helping, they're hurting the very people they’re trying to protect.


The next bill:


Bill C-11, the 'Online Streaming Act', shows yet another clumsy attempt by Trudeau's government to control digital content. It demands that streaming services like Disney+, Netflix, and Spotify must “prominently promote and recommend Canadian programming,” in all official and Indigenous languages.


This puts American companies in a spot, forcing them to pick up the slack for Canadian media's unpopularity, while also having to meet diversity, equity, and inclusion targets that even the Trudeau government isn't hitting. It's a little unsettling that the government seems to think that merely talking about virtues equates to having them.


But these laws aren't about saving Canadian news, they're about controlling it. Bill C-11 lets the government regulate content across the board - TV, radio, websites, and streaming platforms. And just look at the numbers: between 2020 and 2023, federal staff requested content removal over 200 times. If that doesn't scream 'control', I don't know what does.


Most people don't head straight to news websites. They click links shared by friends, find stories through Google searches, or stumble across catchy headlines on Instagram or Facebook. These platforms direct users to lesser-known local news outlets, providing priceless visibility.


Between 2021 and 2022, Facebook reportedly drove more than 1.9 billion clicks to Canadian publishers – that's about $230M worth of free marketing. Sure, Facebook profits from this setup, but that doesn’t mean it should be targeted for extra payouts.
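The back-of-envelope arithmetic behind that figure is easy to check; note the per-click value is implied by the two reported numbers, not stated anywhere:

```python
clicks = 1.9e9          # reported Facebook referral clicks to Canadian publishers
traffic_value = 230e6   # reported marketing value of that traffic, in dollars

# Implied value per referral click
value_per_click = traffic_value / clicks
print(f"${value_per_click:.3f} per click")  # roughly twelve cents per click
```

Any per-click valuation in that neighborhood reproduces the $230M headline number from 1.9 billion clicks.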

Trudeau and his team often complain about the loss of “independent, nonpartisan newsrooms,” blaming big tech for it. Yet these same politicians are very active on social media, and when nonpartisan outlets publish fair criticism of Trudeau, he'll label it biased without disputing any of the claims made.


Has the response caused Trudeau to rethink his strategy?


Surprisingly, not at all. His government remains stubborn, even stopping ads on Facebook rather than seeking a compromise. Trudeau needs to reevaluate his game plan. Rather than shunning big tech, he should be working towards a balanced regulatory framework that protects the internet's freedom while encouraging economic growth and innovation.



-----------
Author: Alex Benningram
Tech News CITY /New York Newsroom

Google Goes All-In On AI - Watch a 10 Minute Summary of the Google IO Event...

Google debuts multiple new AI products at this year's Google IO event - here are the important parts of the two-hour event, cut down to 10 minutes.

Video courtesy of Google

'Godfather of AI' on AI's Potential Risk To Society...

Geoffrey Hinton is one of the leading voices in the field of AI; he quit his job at Google over concerns about what AI could eventually lead to if left unchecked.

Video courtesy of PBS Newshour

Apple Will Soon Halt Support for Older iPad Models...


According to recent reports, Apple has decided to discontinue technical support for the first and fifth generation iPads. This means that owners of these iPad models will no longer be able to receive assistance from Apple's technical support team, either over the phone or at an Apple Store.

This move is not uncommon for Apple, as they often phase out support for older devices in order to focus on their newer products. This decision may come as a disappointment to some iPad owners, but it is important to note that Apple will still continue to provide support for newer iPad models.

Video courtesy of ABC News

19 of The World's Largest Tech Companies ORDERED to REVEAL Algorithms Behind Their Latest AI Developments....

ai in europe

The European Commission is making 19 tech giants, including Amazon, Google, TikTok, and YouTube, reveal their AI algorithms under the Digital Services Act. This is a significant step towards making AI more transparent and accountable, and ultimately, improving our lives.

As we know, AI is expected to impact every aspect of our lives, from healthcare to education, to even how well we write. However, it also generates fear, such as concerns about machines becoming smarter than us or causing harm inadvertently. To avoid these risks, transparency and accountability will be crucial for AI to benefit us positively.

The EU Artificial Intelligence Act aims to achieve this goal. By sharing commercial information with regulators before using AI for sensitive practices such as hiring, companies can be held accountable for the outcomes of their algorithms. EU rules could quickly become the global standard, making this a significant development in AI regulation.

However, there's always a balance to strike when it comes to regulation. The major tech companies view AI as the next big thing, and innovation in this field is now a geopolitical race. Too much regulation could stifle progress, but at the same time, we need to make sure that companies are accountable for their algorithms' outcomes.

Companies will also need to answer any questions the commission members have about their AI projects.

This is a significant development for AI regulation that will benefit everyone. By making AI more transparent and accountable, we can ensure that it improves our lives while avoiding the potential risks.

Will They Even Have The Answers?

Interestingly, AI researchers are increasingly devoting time to understanding what AI is doing. Sometimes they can dig into the data and identify particular parameters on which the AI relies heavily. However, explaining why AI did or said something can be like explaining a magic trick without knowing the secret. 
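For simple models, "digging into the data" can literally mean measuring which inputs the output tracks. Here is a minimal sketch of that idea, using plain correlation as a crude stand-in for real attribution methods (which, for deep networks, are far harder, hence the magic-trick problem):

```python
def feature_importance(rows, targets):
    """Score each input feature by |correlation| with the output -
    a crude stand-in for asking 'which parameters does the model rely on?'"""
    n_features = len(rows[0])
    n = len(rows)
    scores = []
    for j in range(n_features):
        xs = [row[j] for row in rows]
        mean_x = sum(xs) / n
        mean_y = sum(targets) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, targets))
        spread_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
        spread_y = sum((y - mean_y) ** 2 for y in targets) ** 0.5
        # Guard against constant columns (zero spread).
        scores.append(abs(cov / (spread_x * spread_y)) if spread_x and spread_y else 0.0)
    return scores
```

A feature the target actually depends on scores near 1.0 while an unrelated one scores much lower; a billion-parameter neural network offers no such direct readout, which is exactly the opacity regulators are now probing.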

This may be the most alarming revelation from these hearings – the creators don't always understand their creations.

-----------
Author: Trevor Kingsley
Tech News CITY /New York Newsroom