Cellphones

20 Carriers Face Call-Blocking in the US for Submitting Fake 'Robocall Mitigation Plans' (arstechnica.com) 67

"Twenty phone companies may soon have all their voice calls blocked by US carriers," reports Ars Technica, "because they didn't submit real plans for preventing robocalls on their networks." The 20 carriers include a mix of US-based and foreign voice service providers that submitted required "robocall mitigation" plans to the Federal Communications Commission about two years ago. The problem is that some of the carriers' submissions were blank pages and others were bizarre images or documents that had no relation to robocalls. The strange submissions, according to FCC enforcement orders issued Monday, included "a .PNG file depicting an indiscernible object," a document titled "Windows Printer Test Page," an image "that depicted the filer's 'Taxpayer Profile' on a Pakistani government website," and "a letter that stated: 'Unfortunately, we do not have such a documents.'"

Monday's FCC announcement said the agency's Enforcement Bureau issued orders demanding that "20 non-compliant companies show cause within 14 days as to why the FCC should not remove them from the database for deficient filings." The orders focus on the certification requirements and do not indicate whether these companies carry large amounts of robocall traffic. Each company will be given "an opportunity to cure any deficiencies in its robocall mitigation program description or explain why its certification is not deficient." After the October 30 deadline, the companies could be removed from the FCC's Robocall Mitigation Database.

Removal from the database would oblige other phone companies to block all of their calls.

Google

Google Mandates Unsubscribe Button in Emails For Those Sending Over 5,000 Daily Messages (cnbc.com) 91

Google plans to make it harder for spammers to send messages to Gmail users. From a report: The company said it will require emailers who send more than 5,000 messages per day to Gmail users to offer a one-click unsubscribe button in their messages. It will also require them to authenticate their email address, configuring their systems so they prove they own their domain name and aren't spoofing IP addresses. Alphabet-owned Google says it may not deliver messages from senders whose emails are frequently marked as spam and fall under a "clear spam rate threshold" of 0.3% of messages sent, as measured by Google's Postmaster Tools.

Google says it has signed up Yahoo to make the same changes, and they'll come into effect in February 2024. The moves highlight the ongoing fight between big tech companies and spammers who use open systems such as email to send fraudulent messages and annoy users. For years, machine learning techniques have been used to fight spam, but it remains a back-and-forth battle as spammers discover new techniques to get past filters.
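The "one-click unsubscribe button" requirement corresponds to the List-Unsubscribe headers standardized in RFC 8058. A minimal sketch of what a compliant bulk message might look like, using Python's standard email library (the addresses and URL are placeholders, not anything Google prescribes):

```python
from email.message import EmailMessage

def build_bulk_message(sender, recipient, subject, body, unsubscribe_url):
    """Build a message carrying the RFC 8058 one-click unsubscribe headers."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    # Both headers are needed for one-click unsubscribe:
    # the URL the client will POST to, and the marker that it is one-click.
    msg["List-Unsubscribe"] = f"<{unsubscribe_url}>"
    msg["List-Unsubscribe-Post"] = "List-Unsubscribe=One-Click"
    msg.set_content(body)
    return msg

msg = build_bulk_message("news@example.com", "user@gmail.com",
                         "Weekly digest", "Hello!",
                         "https://example.com/unsubscribe?u=123")
```

Mail clients that support the standard render those headers as an unsubscribe button and issue the POST without opening a webpage.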

Republicans

Judge Tears Apart Republican Lawsuit Alleging Bias In Gmail Spam Filter (arstechnica.com) 184

An anonymous reader quotes a report from Ars Technica: A federal judge yesterday granted Google's motion to dismiss a lawsuit filed by the Republican National Committee (RNC), which claims that Google intentionally used Gmail's spam filter to suppress Republicans' fundraising emails. An order (PDF) dismissing the lawsuit was issued yesterday by US District Judge Daniel Calabretta. The RNC is seeking "recovery for donations it allegedly lost as a result of its emails not being delivered to its supporters' inboxes," Calabretta noted. But Google correctly argued that the lawsuit claims are barred by Section 230 of the Communications Decency Act, the judge wrote. The RNC lawsuit was filed in October 2022 in US District Court for the Eastern District of California.

"While it is a close case, the Court concludes that... the RNC has not sufficiently pled that Google acted in bad faith in filtering the RNC's messages into Gmail users' spam folders, and that doing so was protected by Section 230. On the merits, the Court concludes that each of the RNC's claims fail as a matter of law for the reasons described below," he wrote. Calabretta, a Biden appointee, called it "concerning that Gmail's spam filter has a disparate impact on the emails of one political party, and that Google is aware of and has not yet been able to correct this bias." But he noted that "other large email providers have exhibited some sort of political bias" and that if Google did not filter spam, it would harm its users by subjecting them "to harmful malware or harassing messages. On the whole, Google's spam filter, though in this instance imperfect, is not morally blameworthy."

The RNC was given leave to amend another claim that alleged intentional interference with prospective economic relations under California law. The judge dismissed the claim as follows: "The RNC argues that Google's conduct was independently wrongful because '(1) it is political discrimination against the RNC, (2) it is dishonest to Google's users and the public, and (3) Google repeatedly lied about it.' As established above, political discrimination is not prohibited by California anti-discrimination laws and so Google's alleged discrimination would not be unlawful. The latter two reasons do not provide a 'determinable legal standard' under which the Court could find the conduct wrongful; they rest on a 'nebulous' theory of wrongfulness which other courts have rejected." The RNC "has failed to establish that Defendant's alleged interference constituted a separate, independently 'wrongful act' that would be an appropriate predicate offense" but "will be granted leave to amend this claim to establish that Defendant's conduct was unlawful by some legal measure," Calabretta wrote.
Google said in a statement: "We welcome the Court's finding that there are no plausible allegations that Gmail's spam filters discriminate for political purposes. We will continue investing in spam-filtering technologies that protect people from unwanted emails while still allowing senders to reach the inboxes of users who want their messages."
Australia

Australia's ISPs Will Stop Offering Free Email Addresses, to the Disgust of Older Customers (theguardian.com) 69

Remember when your email address came from your ISP?

Now the cost for small companies to offer email service "has gone up in server and administration costs," reports the Guardian, "without the economies of scale." But in Australia, this has created a problem for people like the Canberra-based customer of iiNet who's had the same email address since the 1990s... TPG — which owns brands that have historically offered email including iiNet all the way back to OzEmail — informed customers in July that it would migrate their email to a separate private service, the Messaging Company, by the end of November. Users will keep their existing email addresses on this service and get it free for the first year. After that, they will have the choice of a paid service or an ad-supported free one. The amount to be charged from next year has not yet been decided.

The announcement was met with outrage among users of the long-running web forum Whirlpool. "It's a shitty move. My wife has never set up a Gmail or Yahoo and only ever used her iiNet email address for her business as well as personal. This screws us royally," one user said.

"Us oldies couldn't start out using Gmail etc because they weren't in existence 25 years ago," another said.

"It's a nightmare trying to change logins at many places...."

The other factor is the increasing security risk. Legacy systems, particularly those managed under a variety of absorbed companies, as with TPG, can over time become more at risk of a cybersecurity attack or breach. External providers who offer this service either in place of, or on behalf of, the internet service provider are increasingly seen as the more secure option....

The Australian Communications Consumer Action Network chief executive, Andrew Williams, says that ultimately internet providers getting out of the email game is a good thing because it means customers don't feel locked into one internet company...

With the rise in data breaches, and the avalanche of spam and scams, the shift offers people the opportunity of a clean email slate, Williams adds.

The Almighty Buck

Thousands of Crypto Scammers are Enslaved by Human-Trafficking Gangsters, Says Bloomberg Reporter (bloomberg.com) 100

A Bloomberg investigative reporter wrote a new book titled Number Go Up: Inside Crypto's Wild Rise and Staggering Fall. This week Bloomberg published an excerpt that begins when the reporter received a flirtatious text message from a woman calling herself Vicky Ho — the opening move of a scam known as "pig butchering."

"Vicky's random text had found its way to pretty much exactly the wrong target. I'd been investigating the crypto bubble for more than a year..." After a day, Vicky revealed her true love language: Bitcoin price data. She started sending me charts. She told me she'd figured out how to predict market fluctuations and make quick gains of 20% or more. The screenshots she shared showed that during that week alone she'd made $18,600 on one trade, $4,320 on another and $3,600 on a third... For days, she went on chatting without asking for me to send any money. I was supposed to be the mark, but I had to work her to con me.... Vicky sent me a link to download an app called ZBXS. It looked pretty much like other crypto-exchange apps. "New safe and stable trading market," a banner read at the top. Then Vicky gave me some instructions. They involved buying one cryptocurrency using another crypto-exchange app, then transferring the crypto to ZBXS's deposit address on the blockchain, a 42-character string of letters and numbers...

People around the world really were losing huge sums of money to the con. A project finance lawyer in Boston with terminal cancer handed over $2.5 million. A divorced mother of three in St. Louis was defrauded of $5 million. And the victims I spoke to all told me they'd been told to use Tether, the same coin Vicky suggested to me. Rich Sanders, the lead investigator at CipherBlade, a crypto-tracing firm, said that at least $10 billion had been lost to crypto romance scams.

The huge sums involved weren't the most shocking part. I learned that whoever was posing as Vicky was likely a victim as well — of human trafficking. Most "pig-butchering" operations were orchestrated by Chinese gangsters based in Cambodia or Myanmar. They'd lure young people from across Southeast Asia to move abroad with the promise of well-paying jobs in customer service or online gambling. Then, when the workers arrived, they'd be held captive and forced into a criminal racket. Thousands have been tricked this way. Entire office towers are filled with floor after floor of people sending spam messages around the clock, under threat of torture or death.

With the assistance of translators, I started video chatting with people who'd escaped...

I'd heard that [southwestern Cambodia's giant building complex] Chinatown alone held as many as 6,000 captive workers like "Vicky Ho."

Two of the workers interviewed "said they'd seen workers murdered." And another worker said Tether was used specifically because "It's more safe. We are afraid people will track us... It's untraceable."

The reporter's conclusion? "It was hard to see how this slave complex could exist without cryptocurrency."
Microsoft

Microsoft Fixes Hotmail Delivery Failures After Misconfigured SPF DNS (bleepingcomputer.com) 23

Friday Microsoft told Bleeping Computer "that they have fixed the issue and Hotmail should no longer fail SPF checks."

But earlier in the day the site reported that "Hotmail users worldwide have problems sending emails, with messages flagged as spam or not delivered after Microsoft misconfigured the domain's DNS SPF record." The email issues began late Thursday night, with users and admins reporting on Reddit, Twitter, and Microsoft forums that their Hotmail emails were failing due to SPF validation errors... The Sender Policy Framework (SPF) is an email security feature that reduces spam and prevents threat actors from spoofing domains in phishing attacks... When a mail server receives an email, it will verify that the hostname/IP address for the sending email servers is part of a domain's SPF record, and if it is, allows the email to be delivered as usual...

After analyzing what was causing email delivery errors, admins noted that Microsoft removed the 'include:spf.protection.outlook.com' record from hotmail.com's SPF record.
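For reference, an SPF policy is just a DNS TXT record beginning with "v=spf1", followed by mechanisms listing who may send for the domain. A toy checker for the mechanism reportedly dropped from hotmail.com's record — the record strings below are illustrative stand-ins, not the exact published values:

```python
def spf_mechanisms(txt_record: str) -> list[str]:
    """Split an SPF TXT record into its mechanisms; reject non-SPF records."""
    parts = txt_record.split()
    if not parts or parts[0].lower() != "v=spf1":
        raise ValueError("not an SPF record")
    return parts[1:]

# Illustrative before/after records for hotmail.com:
broken = "v=spf1 ~all"
fixed = "v=spf1 include:spf.protection.outlook.com ~all"

missing = "include:spf.protection.outlook.com" not in spf_mechanisms(broken)
present = "include:spf.protection.outlook.com" in spf_mechanisms(fixed)
```

Without the include, receiving servers resolving hotmail.com's policy never see Outlook's sending IPs, so legitimate Hotmail mail fails SPF exactly as admins observed.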

Thanks to long-time Slashdot reader Archangel Michael for sharing the news.
Youtube

YouTube is Deactivating Links in Shorts Videos To Combat Spam (engadget.com) 54

YouTube knows that it has a spam problem, particularly when it comes to its two-year-old Shorts feature. In an attempt to do something about it, the streamer has announced it's deactivating links in Shorts descriptions, comments and the vertical live feed. From a report: YouTube is also taking away the ability to click on social media icons on any desktop channel banners. The new changes will start to roll out on August 31st. Though YouTube claims it won't continue its "unclickable" crusade, it adds, "Because abuse tactics evolve quickly, we have to take preventative measures to make it harder for scammers and spammers to mislead or scam users via links."

At the same time, YouTube is adding new links on creators' channels, with a big clickable link appearing by the Subscribe button starting August 23rd. The link can bring users to anything from merchandise sites to social media accounts. The platform also recently introduced more creator tools for Shorts, like voiceovers. However, it won't be until at least the end of September that the streamer introduces "safer" ways to guide people from their Shorts back to the rest of their content.

Games

Ubisoft Will Suspend and Then Delete Long-Inactive Accounts (pcgamer.com) 51

Leaving a Ubisoft account inactive for too long "apparently puts it at risk of permanent deletion," writes PC Gamer, calling the policy "a customer-unfriendly practice." A piracy and anti-DRM focused Twitter account, PC_enjoyer, recently shared a screenshot of a Ubisoft support email telling the user that their Ubisoft account had been suspended for "inactivity," and would be "permanently closed" after 30 days. The email provided a link to cancel the move. Now, that sounds like a phishing scam, right? I and many commenters wondered that, looking at the original post, but less than a day later, Ubisoft's verified support account responded to the tweet, seemingly confirming the screenshotted email's legitimacy.

"You can avoid the account closure by logging into your account within the 30 days (since receiving the email pictured) and selecting the Cancel Account Closure link contained in the email," Ubisoft Support wrote. "We certainly do not want you to lose access to your games or account so if you have any difficulties logging in then please create a support case with us."

I was unable to find anything regarding account closure for inactivity in Ubisoft's US terms of use or its end user licence agreement, but the company does reserve the right to suspend or end services at any time. Ubisoft has a support page titled "Closure of inactive Ubisoft accounts." The page first describes instances where the service clashes with local data privacy laws, then reads: "We may also close long-term inactive accounts to maintain our database. You will be notified by email if we begin the process of closing your inactive account."

This page links to another dedicated to voluntarily closing one's Ubisoft account, and seems to operate by the same rules: a 30-day suspension before permanent deletion. "As we will be unable to recover the account once it has been closed, we strongly recommend only putting in the request if you are absolutely sure you would like to close your account."

"If you have a good spam filter or just reasonably assume it's a phishing attempt, then you might one day try your old games and find they're just gone," worries long-time Slashdot reader Baron_Yam. "If you're someone who still plays games from decades ago every so often, this is a scenario you might want to think about."

The site Eurogamer reports that when a Twitter user complained that "I lost my Ubisoft account, and all the Ubisoft Steam game[s] I've bought are now useless", Ubisoft Support "responded to say that players can raise a ticket if they would like to recover their account."

The original tweet now includes this "reader-added context" supplied by other Twitter users — along with three informative links: For added context, Ubisoft can be required under certain data protection laws, such as the GDPR, to close inactive accounts if they deem the data no longer necessary for collection.

Ubisoft has claimed they don't close accounts that are inactive for less than 4 years.

Open Source

'Meta's Newly Released Large Language Model Llama-2 Is Not Open Source' 27

Earlier this week, Meta announced it has teamed up with Microsoft to launch Llama 2, its "open-source" large language model (LLM) that uses artificial intelligence to generate text, images, and code. In an opinion piece for The Register, long-time ZDNet contributor and technology analyst, Steven J. Vaughan-Nichols, writes: "Meta is simply open source washing an open but ultimately proprietary LLM." From the report: As Amanda Brock, CEO of OpenUK, said, it's "not an OSI approved license but a significant release of Open Technology ... This is a step to moving AI from the hands of the few to the many, democratizing technology and building trust in its use and future through transparency." And for many developers, that may be enough. [...] But the devil is in the details when it comes to open source. And there, Meta, with its Llama 2 Community License Agreement, falls on its face. As The Register noted earlier, the community agreement forbids the use of Llama 2 to train other language models; and if the technology is used in an app or service with more than 700 million monthly users, a special license is required from Meta. Stefano Maffulli, the OSI's executive director, explained: "While I'm happy that Meta is pushing the bar of available access to powerful AI systems, I'm concerned about the confusion by some who celebrate LLaMa 2 as being open source: if it were, it wouldn't have any restrictions on commercial use (points 5 and 6 of the Open Source Definition). As it is, the terms Meta has applied only allow some commercial use. The keyword is some."

Maffulli then dove in deeper. "Open source means that developers and users are able to decide for themselves how and where to use the technology without the need to engage with another party; they have sovereignty over the technology they use. When read superficially, Llama's license says, 'You can't use this if you're Amazon, Google, Microsoft, Bytedance, Alibaba, or your startup grows as big.' It may sound like a reasonable clause, but it also implicitly says, 'You need to ask us for permission to create a tool that may solve world hunger' or anything big like that." Stephen O'Grady, open source licensing expert and RedMonk co-founder, explained it like this: "Imagine if Linux was open source unless you worked at Facebook." Exactly. Maffulli concluded: "That's why open source has never put restrictions on the field of use: you can't know beforehand what can happen in the future, good or bad."

The OSI isn't the only open-source-savvy group that's minding the Llama 2 license. Karen Sadler, lawyer and executive director at the Software Freedom Conservancy, dug into the license's language and found that "the Additional Commercial Terms in section 2 of the license agreement, which is a limitation on the number of users, makes it non-free and not open source." To Sadler, "it looks like Meta is trying to push a license that has some trappings of an open source license but, in fact, has the opposite result. Additionally, the Acceptable Use Policy, which the license requires adherence to, lists prohibited behaviors that are very expansively written and could be very subjectively applied -- if you send out a mass email, could it be considered spam? If there's reasonably critical material published, would it be considered defamatory?" Last, but far from least, she "didn't notice any public drafting or comment process for this license, which is necessary for any serious effort to introduce a new license."
AI

AI Junk Is Starting To Pollute the Internet (wsj.com) 55

Online publishers are inundated with useless article pitches as websites using AI-generated content multiply. From a report: When she first heard of the humanlike language skills of the artificial-intelligence bot ChatGPT, Jennifer Stevens wondered what it would mean for the retirement magazine she edits. Months later, she has a better idea. It means she is spending a lot of time filtering out useless article pitches. People like Stevens, the executive editor of International Living, are among those seeing a growing amount of AI-generated content that is so far beneath their standards that they consider it a new kind of spam.

The technology is fueling an investment boom. It can answer questions, produce images and even generate essays based on simple prompts. Some of these techniques promise to enhance data analysis and eliminate mundane writing tasks, much as the calculator changed mathematics. But they also show the potential for AI-generated spam to surge and potentially spread across the internet. In early May, the news site rating company NewsGuard found 49 fake news websites that were using AI to generate content. By the end of June, the tally had hit 277, according to Gordon Crovitz, the company's co-founder. "This is growing exponentially," Crovitz said. The sites appear to have been created to make money through Google's online advertising network, said Crovitz, formerly a columnist and a publisher at The Wall Street Journal.

Researchers also point to the potential of AI technologies being used to create political disinformation and targeted messages used for hacking. The cybersecurity company Zscaler says it is too early to say whether AI is being used by criminals in a widespread way, but the company expects to see it being used to create high-quality fake phishing webpages, which are designed to trick victims into downloading malicious software or disclosing their online usernames and passwords. On YouTube, the ChatGPT gold rush is in full swing. Dozens of videos offering advice on how to make money from OpenAI's technology have been viewed hundreds of thousands of times. Many of them suggest questionable schemes involving junk content. Some tell viewers that they can make thousands of dollars a week, urging them to write ebooks or sell advertising on blogs filled with AI-generated content that could then generate ad revenue by popping up on Google searches.

Facebook

Why the Early Success of Threads May Crash Into Reality (nytimes.com) 175

Mark Zuckerberg has used Meta's might to push Threads to a fast start -- but that may only work up to a point. Mike Isaac, writing at The New York Times: A big tech company with billions of users introduces a new social network. Leveraging the popularity and scale of its existing products, the company intends to make the new social platform a success. In doing so, it also plans to squash a leading competitor's app. If this sounds like Instagram's new Threads app and its push against its rival Twitter, think again. The year was 2011 and Google had just rolled out a social network called Google+, which was positioned as its "Facebook killer." Google thrust the new site in front of many of its users who relied on its search and other products, expanding Google+ to more than 90 million users within the first year.

But by 2018, Google+ was relegated to the ash heap of history. Despite the internet search giant's enormous audience, its social network failed to catch on as people continued flocking to Facebook -- and later to Instagram and other social apps. In the history of Silicon Valley, big tech companies have often become even bigger tech companies by using their scale as a built-in advantage. But as Google+ shows, bigness alone is no guarantee of winning the fickle and faddish social media market.

This is the challenge that Zuckerberg, the chief executive of Meta, which owns Instagram and Facebook, now faces as he tries to dislodge Twitter and make Threads the prime app for real-time, public conversations. If tech history is any guide, size and scale are solid footholds -- but ultimately can only go so far. What comes next is much harder. Mr. Zuckerberg needs people to be able to find friends and influencers on Threads in the serendipitous and sometimes weird ways that Twitter managed to accomplish. He needs to make sure Threads isn't filled with spam and grifters. He needs people to be patient about app updates that are in the works.

Social Networks

As BotDefense Leaves 'Antagonistic' Reddit, Mods Fear Spam Overload (arstechnica.com) 68

"The Reddit community is still reckoning with the consequences of the platform's API price hike..." reports Ars Technica.

"The latest group to announce its departure is BotDefense." BotDefense, which helps remove rogue submission and comment bots from Reddit and which is maintained by volunteer moderators, is said to help moderate 3,650 subreddits. BotDefense's creator told Ars Technica that the team is now quitting over Reddit's "antagonistic actions" toward moderators and developers, with concerning implications for spam moderation on some large subreddits like r/space.

BotDefense started in 2019 as a volunteer project and has been run by volunteer mods, known as "dequeued" and "abrownn" on Reddit. Since then, it claims to have populated its ban list with 144,926 accounts, and it helps moderate subreddits with huge followings, like r/gaming (37.4 million members), /r/aww (34.2 million), r/music (32.4 million), r/Jokes (26.2 million), r/space (23.5 million), and /r/LifeProTips (22.2 million). Dequeued told Ars that other large subreddits BotDefense helps moderate include /r/food, /r/EarthPorn, /r/DIY, and /r/mildlyinteresting. On Wednesday, dequeued announced that BotDefense is ceasing operations. BotDefense has already stopped accepting bot account submissions and will disable future action on bots. BotDefense "will continue to review appeals and process unbans for a minimum of 90 days or until Reddit breaks the code running BotDefense," the announcement said...

Dequeued, who said they've been moderating for nearly nine years, said Reddit's "antagonistic actions" toward devs and mods are the only reason BotDefense is closing. The moderator said there were plans for future tools, like a new machine learning system for detecting "many more" bots. Before the API battle turned ugly, dequeued had no plans to stop working on BotDefense...
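The article doesn't describe BotDefense's detection logic, but tools of this kind typically score accounts on signals like age, posting rate, and engagement. A toy heuristic in that spirit — all field names and thresholds here are hypothetical, not BotDefense's actual rules:

```python
def looks_like_spam_bot(account: dict) -> bool:
    """Toy heuristic: flag young accounts that post fast with little engagement.

    Field names and thresholds are illustrative only.
    """
    age_days = account["age_days"]
    posts_per_day = account["posts"] / max(age_days, 1)
    karma_per_post = account["karma"] / max(account["posts"], 1)
    # A brand-new account flooding submissions that earn almost no karma
    # is a classic karma-farming/spam-bot signature.
    return age_days < 30 and posts_per_day > 20 and karma_per_post < 1.0

flagged = looks_like_spam_bot({"age_days": 5, "posts": 300, "karma": 40})
cleared = looks_like_spam_bot({"age_days": 900, "posts": 1200, "karma": 50000})
```

Simple rules like these are cheap but easy to evade, which is presumably why dequeued was planning a machine learning successor before the API dispute.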

[S]ubreddits that have relied on BotDefense are uncertain about managing their subreddits without the tool, and the tool's impending departure is another sign of a deteriorating Reddit community.

Ironically, Reddit's largest shareholder — Advance Publications — owns Ars Technica's parent company Condé Nast.

The article notes that Reddit "didn't respond to Ars' request for comment on BotDefense closing, how Reddit fights spam bots and karma farms, or about users quitting Reddit."
Social Networks

AMAs Are the Latest Casualty In Reddit's API War (arstechnica.com) 179

An anonymous reader quotes a report from Ars Technica: Ask Me Anything (AMA) has been a Reddit staple that helped popularize the social media platform. It delivered some unique, personal, and, at times, fiery interviews between public figures and people who submitted questions. The Q&A format became so popular that many people host so-called AMAs these days, but the main subreddit has been r/IAmA, where the likes of then-US President Barack Obama and Bill Gates have sat in the virtual hot seat. But that subreddit, which has been called its own "juggernaut of a media brand," is about to look a lot different and likely less reputable. On July 1, Reddit moved forward with changes to its API pricing that have infuriated a large and influential portion of its user base. High pricing and a 30-day adjustment period resulted in many third-party Reddit apps closing and others moving to paid-for models that developers are unsure are sustainable.

The latest casualty in the Reddit battle has a profound impact on one of the most famous forms of Reddit content and signals a potential trend in Reddit content changing for the worse. On Saturday, the r/IAmA moderators announced that they will no longer perform these duties:

- Active solicitation of celebrities or high-profile figures to do AMAs.
- Email and modmail coordination with celebrities and high-profile figures and their PR teams to facilitate, educate, and operate AMAs. (We will still be available to answer questions about posting, though response time may vary).
- Running and maintaining a website for scheduling of AMAs with pre-verification and proof, as well as social media promotion.
- Maintaining a current up-to-date sidebar calendar of scheduled AMAs, with schedule reminders for users.
- Sister subreddits with categorized cross-posts for easy following.
- Moderator confidential verification for AMAs.
- Running various bots, including automatic flairing of live posts.

The subreddit, which has 22.5 million subscribers as of this writing, will still exist, but its moderators contend that most of what makes it special will be undermined. "Moving forward, we'll be allowing most AMA topics, leaving proof and requests for verification up to the community, and limiting ourselves to removing rule-breaking material alone. This doesn't mean we're allowing fake AMAs explicitly, but it does mean you'll need to pay more attention," the moderators said. The mods will also continue to do bare minimum tasks like keeping spam out and rule enforcement, they said. Like many other Reddit moderators Ars has spoken to, some will step away from their duties, and they'll reportedly be replaced "as needed."

AI

'AI is Killing the Old Web' 108

Rapid changes, fueled by AI, are impacting large pockets of the internet, argues a new column. An excerpt: In recent months, the signs and portents have been accumulating with increasing speed. Google is trying to kill the 10 blue links. Twitter is being abandoned to bots and blue ticks. There's the junkification of Amazon and the enshittification of TikTok. Layoffs are gutting online media. A job posting looking for an "AI editor" expects "output of 200 to 250 articles per week." ChatGPT is being used to generate whole spam sites. Etsy is flooded with "AI-generated junk."

Chatbots cite one another in a misinformation ouroboros. LinkedIn is using AI to stimulate tired users. Snapchat and Instagram hope bots will talk to you when your friends don't. Redditors are staging blackouts. Stack Overflow mods are on strike. The Internet Archive is fighting off data scrapers, and "AI is tearing Wikipedia apart." The old web is dying, and the new web struggles to be born. The web is always dying, of course; it's been dying for years, killed by apps that divert traffic from websites or algorithms that reward supposedly shortening attention spans. But in 2023, it's dying again -- and, as the litany above suggests, there's a new catalyst at play: AI.
Social Networks

Is Reddit Dying? (eff.org) 266

"Compared to the website's average daily volume over the past month, the 52,121,649 visits Reddit saw on June 13th represented a 6.6 percent drop..." reports Engadget (citing data provided by internet analytics firm Similarweb). [A]s many subreddits continue to protest the company's plans and its leadership contemplates policy changes that could change its relationship with moderators, the platform could see a slow but gradual decline in daily active users. That's unlikely to bode well for Reddit ahead of its planned IPO and beyond.
In fact, the Financial Times now reports that Reddit "acknowledged that several advertisers had postponed certain premium ad campaigns in order to wait for the blackouts to pass." But they also got this dire prediction from a historian who helps moderate the subreddit "r/Askhistorians" (with 1.8 million subscribers).

"If they refuse to budge in any way I do not see Reddit surviving as it currently exists. That's the kind of fire I think they're playing with."

More people had the same thought. The Reddit protests drew this response earlier this week from EFF's associate director of community organizing: This tension between these communities and their host has, again, fueled more interest in the Fediverse as a decentralized refuge... Unfortunately, discussions of Reddit-like fediverse services Lemmy and Kbin on Reddit were colored by paranoia after the company banned users and subreddits related to these projects (reportedly due to "spam"). While these accounts and subreddits have been reinstated, the potential for censorship around such projects has made a Reddit exodus feel more urgently necessary...
Saturday the EFF official reiterated their concerns when Wired asked: does this really signal the death of Reddit? "I can't see it as anything but that... [I]t's not a big collapse when a social media website starts to die, but it is a slow attrition unless they change their course. The longer they stay in their position, the more loss of users and content they're going to face."

Wired even heard a thought-provoking idea from Amy Bruckman, a regents' professor/senior associate chair at the School of Interactive Computing at Georgia Institute of Technology. Bruckman "advocates for public funding of a nonprofit version of something akin to Reddit."

Meanwhile, hundreds of people are now placing bets on whether Reddit will backtrack on its new upcoming API pricing — or oust CEO Steve Huffman — according to Insider, citing reports from online betting company BetUS.

CEO Huffman's complaint that the moderators were ignoring the wishes of Reddit's users led to a funny counter-response, according to the Verge. After asking users to vote on whether to end the protest, two forums saw overwhelming support instead for the only offered alternative: the subreddits "now only allow posts about comedian and Last Week Tonight host John Oliver."

Both r/pics (more than 30 million subscribers) and r/gifs (more than 21 million subscribers) offered two options to users to vote on... The results were conclusive:

r/pics: return to normal, -2,329 votes; "only allow images of John Oliver looking sexy," 37,331 votes.
r/gifs: return to normal, -1,851 votes; only feature GIFs of John Oliver, 13,696 votes...

On Twitter, John Oliver encouraged the subreddits — and even gave them some fodder. "Dear Reddit, excellent work," he wrote to kick off a thread that included several ridiculous pictures. A spokesperson for Last Week Tonight with John Oliver didn't immediately reply to a request for comment.

Social Networks

Reddit Fight 'Enters New Phase', as Moderators Vow to Pressure Advertisers, CNN Reports (cnn.com) 158

Reddit "appears to be laying the groundwork for ejecting forum moderators committed to continuing the protests," CNN reported Friday afternoon, "a move that could force open some communities that currently remain closed to the public.

"In response, some moderators have vowed to put pressure on Reddit's advertisers and investors." As of Friday morning, nearly 5,000 subreddits were still set to private and inaccessible to the public, reflecting a modest decrease from earlier in the week but still including groups such as r/funny, which claims more than 40 million subscribers, and r/aww and r/music, each with more than 30 million members. But Reddit has portrayed the blacked-out communities as a small slice of its wider platform. Some 100,000 forums remain open, the company said in a blog post, including 80% of its 5,000 most actively engaged subreddits...

Reddit CEO and co-founder Steve Huffman told NBC News the company will soon allow forum users to overrule moderators by voting them out of their positions, a change that may enable communities that do not wish to remain private to reopen. In addition, one company administrator said Thursday, Reddit may soon view communities that remain private as an indicator that the moderators of those communities no longer wish to moderate. That would constitute a form of inactivity for which the moderators can be removed, the company said. "If a moderator team unanimously decides to stop moderating, we will invite new, active moderators to keep these spaces open and accessible to users," the administrator said, adding that Reddit may intervene even if most moderators on a team wish to remain closed and only a single moderator wants to reopen...

Omar, a moderator of a subreddit participating in this week's blackout, told CNN Friday that many subreddits have participated in the blackouts based on member polls that indicate strong support for the protests... Content moderation on Reddit stands to worsen if the company continues with its plan, Omar said, warning that the coming changes will deter developers from creating and maintaining tools that Reddit communities rely on to detect and eliminate spam, hate speech or even child sexual abuse material. "That's both harmful for users and advertisers," Omar said, adding that supporters of the protests have been contacting advertisers to explain how the platform's coming changes may hurt brands. Already, Omar said, the blackout has made it harder for companies to target ads to interest groups; video game companies, for example, can no longer target ads to gaming-focused subreddits that have taken themselves private...

Huffman has also said that the protests have had little impact on the company financially.

NBC News adds: In an interview Thursday with NBC News, Reddit CEO Steve Huffman praised Musk's aggressive cost-cutting and layoffs at Twitter, and said he had chatted "a handful of times" with Musk on the subject of running an internet platform. Huffman said he saw Musk's handling of Twitter, which he purchased last year, as an example for Reddit to follow.
AI

The Problem with the Matrix Theory of AI-Assisted Human Learning (nytimes.com) 28

In an opinion piece for the New York Times, Vox co-founder Ezra Klein worries that early AI systems "will do more to distract and entertain than to focus." (Since they tend to "hallucinate" inaccuracies, and may first be relegated to areas "where reliability isn't a concern" like videogames, song mash-ups, children's shows, and "bespoke" images.)

"The problem is that those are the areas that matter most for economic growth..." One lesson of the digital age is that more is not always better... The magic of a large language model is that it can produce a document of almost any length in almost any style, with a minimum of user effort. Few have thought through the costs that will impose on those who are supposed to respond to all this new text. One of my favorite examples of this comes from The Economist, which imagined NIMBYs — but really, pick your interest group — using GPT-4 to rapidly produce a 1,000-page complaint opposing a new development. Someone, of course, will then have to respond to that complaint. Will that really speed up our ability to build housing?

You might counter that A.I. will solve this problem by quickly summarizing complaints for overwhelmed policymakers, much as the increase in spam is (sometimes, somewhat) countered by more advanced spam filters. Jonathan Frankle, the chief scientist at MosaicML and a computer scientist at Harvard, described this to me as the "boring apocalypse" scenario for A.I., in which "we use ChatGPT to generate long emails and documents, and then the person who received it uses ChatGPT to summarize it back down to a few bullet points, and there is tons of information changing hands, but all of it is just fluff. We're just inflating and compressing content generated by A.I."

But there's another worry: that the increased efficiency "would come at the cost of new ideas and deeper insights." Our societywide obsession with speed and efficiency has given us a flawed model of human cognition that I've come to think of as the Matrix theory of knowledge. Many of us wish we could use the little jack from "The Matrix" to download the knowledge of a book (or, to use the movie's example, a kung fu master) into our heads, and then we'd have it, instantly. But that misses much of what's really happening when we spend nine hours reading a biography. It's the time inside that book spent drawing connections to what we know ... that matters...

The analogy to office work is not perfect — there are many dull tasks worth automating so people can spend their time on more creative pursuits — but the dangers of overautomating cognitive and creative processes are real... To make good on its promise, artificial intelligence needs to deepen human intelligence. And that means human beings need to build A.I., and build the workflows and office environments around it, in ways that don't overwhelm and distract and diminish us.

We failed that test with the internet. Let's not fail it with A.I.

The Internet

Phishing Domains Tanked After Meta Sued Freenom (krebsonsecurity.com) 7

An anonymous reader quotes a report from KrebsOnSecurity: The number of phishing websites tied to domain name registrar Freenom dropped precipitously in the months surrounding a recent lawsuit from social networking giant Meta, which alleged the free domain name provider has a long history of ignoring abuse complaints about phishing websites while monetizing traffic to those abusive domains. Freenom is the domain name registry service provider for five so-called "country code top level domains" (ccTLDs), including .cf for the Central African Republic; .ga for Gabon; .gq for Equatorial Guinea; .ml for Mali; and .tk for Tokelau. Freenom has always waived the registration fees for domains in these country-code domains, but the registrar also reserves the right to take back free domains at any time, and to divert traffic to other sites -- including adult websites. And there are countless reports from Freenom users who've seen free domains removed from their control and forwarded to other websites.

By the time Meta initially filed its lawsuit in December 2022, Freenom was the source of well more than half of all new phishing domains coming from country-code top-level domains. Meta initially asked a court to seal its case against Freenom, but that request was denied. Meta withdrew its December 2022 lawsuit and re-filed it in March 2023. "The five ccTLDs to which Freenom provides its services are the TLDs of choice for cybercriminals because Freenom provides free domain name registration services and shields its customers' identity, even after being presented with evidence that the domain names are being used for illegal purposes," Meta's complaint charged. "Even after receiving notices of infringement or phishing by its customers, Freenom continues to license new infringing domain names to those same customers." Meta pointed to research from Interisle Consulting Group, which discovered in 2021 and again last year that the five ccTLDs operated by Freenom made up half of the Top Ten TLDs most abused by phishers.

Interisle partner Dave Piscitello said something remarkable has happened in the months since the Meta lawsuit. "We've observed a significant decline in phishing domains reported in the Freenom commercialized ccTLDs in months surrounding the lawsuit," Piscitello wrote on Mastodon. "Responsible for over 60% of phishing domains reported in November 2022, Freenom's percentage has dropped to under 15%." Piscitello said it's too soon to tell the full impact of the Freenom lawsuit, noting that Interisle's sources of spam and phishing data all have different policies about when domains are removed from their block lists.

IT

Google Drive Gets a Desperately Needed 'Spam' Folder for Shared Files (arstechnica.com) 9

Fifteen years after launching Google Docs and Sheets with file sharing, Google is adding what sounds like adequate safety controls to the feature. From a report: Google Drive (the file repository interface that contains your Docs, Sheets, and Slides files) is finally getting a spam folder and algorithmic spam filters, just like Gmail has. It sounds like the update will provide a way to limit Drive's unbelievably insecure behavior of allowing random people to add files to your Drive account without your consent or control. Because Google essentially turned Drive file-sharing into email, Google Drive needs every spam control that Gmail has. Anyone with your email address can "share" a file with you, and a ton of spammers already have your email address. Previously, Drive assumed that all shared files were legitimate and wanted, with the only "control" being "security by obscurity" and hoping no one else knew your email address.

Drive shows any shared files in your shared documents folder, notifies you of the share on your phone, highlights the "new recent file" at the top of the Drive interface, lists the file in searches, and sends you an email about it, all without any indication that you know the file sharer at all. For years, some people in my life have been inundated with shared Google Drive files containing porn, ads, dating site scams, and malware. For a long time, there was nothing you could do to support affected users other than disabling Drive notifications, telling them to ignore the highlighted porn ads at the top of their Drive account, and warning them to never click on the "shared files" folder.

Technology

Truecaller Aims To Help WhatsApp Users Combat Spam (reuters.com) 10

Truecaller will soon start making its caller identification service available over WhatsApp and other messaging apps to help users spot potential spam calls over the internet, the company told Reuters on Monday. From a report: The feature, currently in beta phase, will be rolled out globally later in May, Truecaller Chief Executive Alan Mamedi said. Telemarketing and scamming calls have been on the rise in countries like India, where users get about 17 spam calls per month on average, according to a 2021 report by Truecaller. "Over the last two weeks, we have seen a spike in user reports from India about spam calls over WhatsApp," Mamedi said, noting that telemarketers switching to internet calling was fairly new to the market.