Medicine

Scientists Document First-Ever Transmitted Alzheimer's Cases, Tied To No-Longer-Used Medical Procedure (statnews.com)

Andrew Joseph, writing for STAT News: There was something odd about these Alzheimer's cases. Part of it was the patients' presentations: Some didn't have the classic symptoms of the condition. But it was also that the patients were in their 40s and 50s, even their 30s, far younger than people who normally develop the disease. They didn't even have the known genetic mutations that can set people on the course for such early-onset Alzheimer's. But this small handful of patients did share a particular history. As children, they had received growth hormone taken from the brains of human cadavers, which used to be a treatment for a number of conditions that caused short stature.

Now, decades later, they were showing signs of Alzheimer's. In the interim, scientists had discovered that the type of hormone treatment they had received could unwittingly transfer bits of protein into recipients' brains. In some cases, it had induced a fatal brain disease called Creutzfeldt-Jakob disease, or CJD -- a finding that led to the banning of the procedure 40 years ago. It seemed that it wasn't just the proteins behind CJD that could get transferred. As the scientific team treating the patients reported Monday in the journal Nature Medicine, the hormone transplant seeded the beta-amyloid protein that's a hallmark of Alzheimer's in some recipients' brains, which, decades later, propagated into disease-causing plaques. They are the first known cases of transmitted Alzheimer's disease, likely a scientific anomaly yet a finding that adds another wrinkle to ongoing arguments about what truly causes Alzheimer's. "It looks real that some of these people developed early-onset Alzheimer's because of that [hormone treatment]," said Ben Wolozin, an expert on neurodegenerative diseases at Boston University's medical school, who was not involved in the study.

Medicine

Exercising 25 Minutes a Week Increases Brain Volume - and May Slow Memory Decline (calgarysun.com)

"Exercising for 25 minutes a week, or less than four minutes a day, could help to bulk up our brains," reports the Washington Post, "and improve our ability to think as we grow older." A new study, which involved scanning the brains of more than 10,000 healthy men and women from ages 18 to 97, found that those who walked, swam, cycled or otherwise worked out moderately for 25 minutes a week had bigger brains than those who didn't, whatever their ages.

Bigger brains typically mean healthier brains. The differences were most pronounced in parts of the brain involved with thinking and memory, which often shrink as we age, contributing to risks for cognitive decline and dementia... The results have practical implications, too, about which types of exercise seem best for our brain health and how little of that exercise we may really need.

The article notes that the researchers used AI to assess brain scans from 10,125 "mostly healthy adults of all ages who'd come to the university medical center for diagnostic tests... A clear pattern quickly emerged." Men and women, of any age, who exercised for at least 25 minutes a week generally showed greater brain volume than those who didn't. The differences weren't huge but were significant, especially when the researchers looked deeper inside the organ, said Cyrus A. Raji, an associate professor of radiology and neurology at Washington University in St. Louis, who led the new study. There, they found that exercisers possessed greater volume in every type of brain tissue, including gray matter, made up of neurons, and white matter, the brain's wiring infrastructure, which supports and connects the thinking cells. More granularly, the exercisers tended to have a larger hippocampus, a portion of the brain essential for memory and thinking. It usually shrinks and shrivels as we age, affecting our ability to reason and recall. They also showed larger frontal, parietal and occipital lobes, which, together, signal a healthy, robust brain...

Exactly how exercise might be altering brains is impossible to say from this study. But Raji and his colleagues believe exercise reduces inflammation in the brain and also encourages the release of various neurochemicals that promote the creation of new brain cells and blood vessels. In effect, exercise seems to help build and bank a "structural brain reserve," he said, a buffer of extra cells and matter that could protect us somewhat from the otherwise inevitable decline in brain size and function that occurs as we age. Our brains may still shrink and sputter over the years. But, if we exercise, this slow fall starts from a higher baseline...

Medicine

New Blood Test That Screens For Alzheimer's May Be a Step Closer To Reality, Study Suggests (cnn.com)

Testing a person's blood for a type of protein called phosphorylated tau, or p-tau, could be used to screen for Alzheimer's disease with "high accuracy," even before symptoms begin to show, a new study suggests. CNN: The study involved testing blood for a key biomarker of Alzheimer's called p-tau217, which increases at the same time as other damaging proteins -- beta amyloid and tau -- build up in the brains of people with the disease. Currently, to identify the buildup of beta amyloid and tau in the brain, patients undergo a brain scan or spinal tap, which often can be inaccessible and costly. But this simple blood test was found to be up to 96% accurate in identifying elevated levels of beta amyloid and up to 97% accurate in identifying tau, according to the study published Monday in the journal JAMA Neurology.

"What was impressive with these results is that the blood test was just as accurate as advanced testing like cerebrospinal fluid tests and brain scans at showing Alzheimer's disease pathology in the brain," Nicholas Ashton, a professor of neurochemistry at the University of Gothenburg in Sweden and one of the study's lead authors, said in an email. The study findings came as no surprise to Ashton, who added that the scientific community has known for several years that using blood tests to measure tau or other biomarkers has the potential to assess Alzheimer's disease risk. "Now we are close to these tests being prime-time and this study shows that," he said. Alzheimer's disease, a brain disorder that affects memory and thinking skills, is the most common type of dementia, according to the National Institutes of Health.

Math

How Much of the World Is It Possible to Model?

Dan Rockmore, the director of the Neukom Institute for Computational Sciences at Dartmouth College, writing for The New Yorker: Recently, statistical modelling has taken on a new kind of importance as the engine of artificial intelligence -- specifically in the form of the deep neural networks that power, among other things, large language models, such as OpenAI's G.P.T.s. These systems sift vast corpora of text to create a statistical model of written expression, realized as the likelihood of given words occurring in particular contexts. Rather than trying to encode a principled theory of how we produce writing, they are a vertiginous form of curve fitting; the largest models find the best ways to connect hundreds of thousands of simple mathematical neurons, using trillions of parameters. They create a vast data structure akin to a tangle of Christmas lights whose on-off patterns attempt to capture a chunk of historical word usage. The neurons derive from mathematical models of biological neurons originally formulated by Warren S. McCulloch and Walter Pitts, in a landmark 1943 paper, titled "A Logical Calculus of the Ideas Immanent in Nervous Activity." McCulloch and Pitts argued that brain activity could be reduced to a model of simple, interconnected processing units, receiving and sending zeros and ones among themselves based on relatively simple rules of activation and deactivation.
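
Those activation rules are simple enough to sketch directly. The snippet below is a minimal illustration, not code from the 1943 paper; the weights and thresholds are arbitrary choices, made only to show how a single unit can compute basic logical functions.

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts unit: binary inputs, binary output.

    The unit "fires" (returns 1) only when the weighted sum of its
    inputs reaches the threshold; otherwise it stays silent (returns 0).
    """
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With unit weights, a threshold of 2 makes the neuron compute AND:
assert mp_neuron([1, 1], [1, 1], threshold=2) == 1
assert mp_neuron([1, 0], [1, 1], threshold=2) == 0

# Lowering the threshold to 1 turns the same unit into OR:
assert mp_neuron([0, 1], [1, 1], threshold=1) == 1
assert mp_neuron([0, 0], [1, 1], threshold=1) == 0
```

Networks of such units, wired together, can realize complex Boolean functions; what McCulloch and Pitts did not anticipate was tuning the weights from data, which is exactly what modern training does.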

The McCulloch-Pitts model was intended as a foundational step in a larger project, spearheaded by McCulloch, to uncover a biological foundation of psychiatry. McCulloch and Pitts never imagined that their cartoon neurons could be trained, using data, so that their on-off states linked to certain properties in that data. But others saw this possibility, and early machine-learning researchers experimented with small networks of mathematical neurons, effectively creating mathematical models of the neural architecture of simple brains, not to do psychiatry but to categorize data. The results were a good deal less than astonishing. It wasn't until vast amounts of good data -- like text -- became readily available that computer scientists discovered how powerful their models could be when implemented on vast scales. The predictive and generative abilities of these models in many contexts are beyond remarkable. Unfortunately, this power comes at the expense of understanding just how they do what they do. A new field, called interpretability (or X-A.I., for "explainable" A.I.), is effectively the neuroscience of artificial neural networks.

This is an instructive origin story for a field of research. The field begins with a focus on a basic and well-defined underlying mechanism -- the activity of a single neuron. Then, as the technology scales, it grows in opacity; as the scope of the field's success widens, so does the ambition of its claims. The contrast with climate modelling is telling. Climate models have expanded in scale and reach, but at each step the models must hew to a ground truth of historical, measurable fact. Even models of covid or elections need to be measured against external data. The success of deep learning is different. Trillions of parameters are fine-tuned on larger and larger corpora that uncover more and more correlations across a range of phenomena. The success of this data-driven approach isn't without danger. We run the risk of conflating success on well-defined tasks with an understanding of the underlying phenomenon -- thought -- that motivated the models in the first place.

Part of the problem is that, in many cases, we actually want to use models as replacements for thinking. That's the raison d'être of modelling -- substitution. It's useful to recall the story of Icarus. If only he had just done his flying well below the sun. The fact that his wings worked near sea level didn't mean they were a good design for the upper atmosphere. If we don't understand how a model works, then we aren't in a good position to know its limitations until something goes wrong. By then it might be too late. Eugene Wigner, the physicist who noted the "unreasonable effectiveness of mathematics," restricted his awe and wonder to its ability to describe the inanimate world. Mathematics proceeds according to its own internal logic, and so it's striking that its conclusions apply to the physical universe; at the same time, how they play out varies more the further that we stray from physics. Math can help us shine a light on dark worlds, but we should look critically, always asking why the math is so effective, recognizing where it isn't, and pushing on the places in between.
Education

'A Groundbreaking Study Shows Kids Learn Better On Paper, Not Screens. Now What?' (theguardian.com)

In an opinion piece for the Guardian, American journalist and author John R. MacArthur discusses the alarming decline in reading skills among American youth, highlighted by a Department of Education survey showing significant drops in text comprehension since 2019-2020, with the situation worsening since 2012. While remote learning during the pandemic and other factors like screen-based reading are blamed, a new study by Columbia University suggests that reading on paper is more effective for comprehension than reading on screens, a finding not yet widely adopted in digital-focused educational approaches. From the report: What if the principal culprit behind the fall of middle-school literacy is neither a virus, nor a union leader, nor "remote learning"? Until recently there has been no scientific answer to this urgent question, but a soon-to-be published, groundbreaking study from neuroscientists at Columbia University's Teachers College has come down decisively on the matter: for "deeper reading" there is a clear advantage to reading a text on paper, rather than on a screen, where "shallow reading was observed." [...] [Dr Karen Froud] and her team are cautious in their conclusions and reluctant to make hard recommendations for classroom protocol and curriculum. Nevertheless, the researchers state: "We do think that these study outcomes warrant adding our voices ... in suggesting that we should not yet throw away printed books, since we were able to observe in our participant sample an advantage for depth of processing when reading from print."

I would go even further than Froud in delineating what's at stake. For more than a decade, social scientists, including the Norwegian scholar Anne Mangen, have been reporting on the superiority of reading comprehension and retention on paper. As Froud's team says in its article: "Reading both expository and complex texts from paper seems to be consistently associated with deeper comprehension and learning" across the full range of social scientific literature. But the work of Mangen and others hasn't influenced local school boards, such as Houston's, which keep throwing out printed books and closing libraries in favor of digital teaching programs and Google Chromebooks. Drunk on the magical realism and exaggerated promises of the "digital revolution," school districts around the country are eagerly converting to computerized test-taking and screen-reading programs at the precise moment when rigorous scientific research is showing that the old-fashioned paper method is better for teaching children how to read.

Indeed, for the tech boosters, Covid really wasn't all bad for public-school education: "As much as the pandemic was an awful time period," says Todd Winch, the Levittown, Long Island, school superintendent, "one silver lining was it pushed us forward to quickly add tech supports." Newsday enthusiastically reports: "Island schools are going all-in on high tech, with teachers saying they are using computer programs such as Google Classroom, I-Ready, and Canvas to deliver tests and assignments and to grade papers." Terrific, especially for Google, which was slated to sell 600 Chromebooks to the Jericho school district, and which since 2020 has sold nearly $14bn worth of the cheap laptops to K-12 schools and universities.

If only Winch and his colleagues had attended the Teachers College symposium that presented the Froud study last September. The star panelist was the nation's leading expert on reading and the brain, John Gabrieli, an MIT neuroscientist who is skeptical about the promises of big tech and its salesmen: "I am impressed how educational technology has had no effect on scale, on reading outcomes, on reading difficulties, on equity issues," he told the New York audience. "How is it that none of it has lifted, on any scale, reading? ... It's like people just say, 'Here is a product. If you can get it into a thousand classrooms, we'll make a bunch of money.' And that's OK; that's our system. We just have to evaluate which technology is helping people, and then promote that technology over the marketing of technology that has made no difference on behalf of students ... It's all been product and not purpose." I'll only take issue with the notion that it's "OK" to rob kids of their full intellectual potential in the service of sales -- before they even get started understanding what it means to think, let alone read.

Cellphones

Why a School Principal Switched from Smartphones to Flip Phones (msn.com)

Last week's story about a reporter switching to a flip phone was just part of a trend, argues a Chicago school principal who did the same thing.

"I do not feel punished. I feel free." Teachers said they could sense kids' phones distracting them from inside their pockets. We banned phones outright, equipping classrooms with lockboxes that the kids call "cellphone prisons." It's not perfect, but it's better. A teacher said, "It's like we have the children back...."

And what about adults? Ninety-five percent of young adults now keep their phones nearby every waking hour, according to a Gallup survey; 92% do when they sleep. We look at our phones an average of 352 times a day, according to one recent survey, almost four times more often than before COVID. We want children off their phones because we want them to be present, but children need our presence, too. When we are on our phones, we are somewhere else. As the title of one study notes, "The Mere Presence of One's Own Smartphone Reduces Available Cognitive Capacity...."

I made my screen gray. I deleted social media. I bought a lockbox and said I would keep my phone there. I didn't... Every year, I see kids get phones and disappear into them. I don't want that to happen to mine. I don't want that to have happened to me. So I quit. And now I have this flip phone.

What I don't have is Facetime or Instagram. I can't use Grubhub or Lyft or the Starbucks Mobile App. I don't even have a browser. I drove to a student's quinceañera, and I had to print out directions as if it were 2002... I can still make calls, though people are startled to get one. I can still text. And I can still see your pictures, though I can "heart" them only in my heart. The magic of smartphones is that they eliminate friction: touchscreens, auto-playing videos, endless scrolling. My phone isn't smooth.

That breaks the spell. Turning off my smartphone didn't fix all my problems. But I do notice my brain moving more deliberately, shifting less abruptly between moods. I am bored more, sure — the days feel longer — but I am deciding that's a good thing. And I am still connected to the people I love; they just can't text me TikToks...

I'm not doing this to change the culture. I'm doing this because I don't want my sons to remember me lost in my phone.

Robotics

The Global Project To Make a General Robotic Brain (ieee.org)

Generative AI "doesn't easily carry over into robotics," write two researchers in IEEE Spectrum, "because the Internet is not full of robotic-interaction data in the same way that it's full of text and images."

That's why they're working on a single deep neural network capable of piloting many different types of robots... Robots need robot data to learn from, and this data is typically created slowly and tediously by researchers in laboratory environments for very specific tasks... The most impressive results typically only work in a single laboratory, on a single robot, and often involve only a handful of behaviors... [W]hat if we were to pool together the experiences of many robots, so a new robot could learn from all of them at once? We decided to give it a try. In 2023, our labs at Google and the University of California, Berkeley came together with 32 other robotics laboratories in North America, Europe, and Asia to undertake the RT-X project, with the goal of assembling data, resources, and code to make general-purpose robots a reality...

The question is whether a deep neural network trained on data from a sufficiently large number of different robots can learn to "drive" all of them — even robots with very different appearances, physical properties, and capabilities. If so, this approach could potentially unlock the power of large datasets for robotic learning. The scale of this project is very large because it has to be. The RT-X dataset currently contains nearly a million robotic trials for 22 types of robots, including many of the most commonly used robotic arms on the market...

Surprisingly, we found that our multirobot data could be used with relatively simple machine-learning methods, provided that we follow the recipe of using large neural-network models with large datasets. Leveraging the same kinds of models used in current LLMs like ChatGPT, we were able to train robot-control algorithms that do not require any special features for cross-embodiment. Much like a person can drive a car or ride a bicycle using the same brain, a model trained on the RT-X dataset can simply recognize what kind of robot it's controlling from what it sees in the robot's own camera observations. If the robot's camera sees a UR10 industrial arm, the model sends commands appropriate to a UR10. If the model instead sees a low-cost WidowX hobbyist arm, the model moves it accordingly.
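
That "one brain, many bodies" behavior can be caricatured in a few lines. In the real RT-X model the conditioning on embodiment is learned implicitly, end to end, from the camera image; the explicit classifier and per-robot action heads below are hypothetical stand-ins for illustration, not the project's code, and the joint counts are likewise illustrative.

```python
# Toy sketch of cross-embodiment control: a single policy takes only a
# camera observation, and the commands it emits depend on which robot
# the image shows. All components here are illustrative stand-ins.

def classify_embodiment(image):
    # Stand-in for the learned skill of recognizing the robot from pixels.
    return image["visible_robot"]  # e.g. "UR10" or "WidowX"

ACTION_HEADS = {
    # Illustrative action spaces; real arms differ.
    "UR10": lambda obs: {"robot": "UR10", "joint_deltas": [0.0] * 6},
    "WidowX": lambda obs: {"robot": "WidowX", "joint_deltas": [0.0] * 5},
}

def unified_policy(image):
    """Emit commands appropriate to whichever robot the camera sees."""
    robot = classify_embodiment(image)
    return ACTION_HEADS[robot](image)

action = unified_policy({"visible_robot": "WidowX"})
```

The design point is that nothing outside the observation tells the policy which body it inhabits; the embodiment is inferred, which is what lets one set of weights serve many robots.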

"To test the capabilities of our model, five of the laboratories involved in the RT-X collaboration each tested it in a head-to-head comparison against the best control system they had developed independently for their own robot... Remarkably, the single unified model provided improved performance over each laboratory's own best method, succeeding at the tasks about 50 percent more often on average." And they then used a pre-existing vision-language model to successfully add the ability to output robot actions in response to image-based prompts.

"The RT-X project shows what is possible when the robot-learning community acts together... and we hope that RT-X will grow into a collaborative effort to develop data standards, reusable models, and new techniques and algorithms."

Thanks to long-time Slashdot reader Futurepower(R) for sharing the article.
AI

Bill Gates Interviews Sam Altman, Who Predicts Fastest Tech Revolution 'By Far' (gatesnotes.com)

This week on his podcast, Bill Gates asked Sam Altman how his team was doing after his (temporary) ouster. Altman replied: "a lot of people have remarked on the fact that the team has never felt more productive or more optimistic or better. So, I guess that's like a silver lining of all of this. In some sense, this was like a real moment of growing up for us, we are very motivated to become better, and sort of to become a company ready for the challenges in front of us."

The rest of their conversation was pre-ouster — but gave fascinating glimpses at the possible future of AI — including the prospect of very speedy improvements. Altman suggests it will be easier to understand how a creative work gets "encoded" in an AI than it would be in a human brain. "There has been some very good work on interpretability, and I think there will be more over time... The little bits we do understand have, as you'd expect, been very helpful in improving these things. We're all motivated to really understand them, scientific curiosity aside, but the scale of these is so vast...."

BILL GATES: I'm pretty sure, within the next five years, we'll understand it. In terms of both training efficiency and accuracy, that understanding would let us do far better than we're able to do today.

SAM ALTMAN: A hundred percent. You see this in a lot of the history of technology where someone makes an empirical discovery. They have no idea what's going on, but it clearly works. Then, as the scientific understanding deepens, they can make it so much better.

BILL GATES: Yes, in physics, biology, it's sometimes just messing around, and it's like, whoa — how does this actually come together...? When you look at the next two years, what do you think some of the key milestones will be?

SAM ALTMAN: Multimodality will definitely be important.

BILL GATES: Which means speech in, speech out?

SAM ALTMAN: Speech in, speech out. Images. Eventually video. Clearly, people really want that.... [B]ut maybe the most important areas of progress will be around reasoning ability. Right now, GPT-4 can reason in only extremely limited ways. Also reliability. If you ask GPT-4 most questions 10,000 times, one of those 10,000 is probably pretty good, but it doesn't always know which one, and you'd like to get the best response of 10,000 each time, and so that increase in reliability will be important.

Customizability and personalization will also be very important. People want very different things out of GPT-4: different styles, different sets of assumptions. We'll make all that possible, and then also the ability to have it use your own data. The ability to know about you, your email, your calendar, how you like appointments booked, connected to other outside data sources, all of that. Those will be some of the most important areas of improvement.
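
The reliability idea Altman describes, drawing many samples and keeping the best one, is often called best-of-n sampling. Here is a minimal sketch, assuming a stochastic model call and a scoring function; both are hypothetical stand-ins, not a real model API.

```python
import random

def sample_answer(prompt, rng):
    # Stand-in for one stochastic model call; quality varies per sample.
    return {"text": f"a response to {prompt!r}", "quality": rng.random()}

def score(candidate):
    # Stand-in for a learned reward model or verifier that rates answers.
    return candidate["quality"]

def best_of_n(prompt, n, seed=0):
    """Draw n candidate answers and return the highest-scoring one."""
    rng = random.Random(seed)
    candidates = [sample_answer(prompt, rng) for _ in range(n)]
    return max(candidates, key=score)

best = best_of_n("What should I ask next?", n=16)
```

The catch Altman points at is the scorer: the model "doesn't always know which one" of its samples is good, so any reliability gain from sampling more depends on having a trustworthy scoring function.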

Areas where Altman sees potential are healthcare, education, and especially computer programming. "If you make a programmer three times more effective, it's not just that they can do three times more stuff, it's that they can — at that higher level of abstraction, using more of their brainpower — they can now think of totally different things. It's like, going from punch cards to higher level languages didn't just let us program a little faster — it let us do these qualitatively new things. And we're really seeing that...

"I think it's worth always putting it in context of this technology that, at least for the next five or ten years, will be on a very steep improvement curve. These are the stupidest the models will ever be."

He predicts the fastest technology revolution "by far," worrying about "the speed with which society is going to have to adapt, and that the labor market will change." But soon he adds that "We started investing a little bit in robotics companies. On the physical hardware side, there's finally, for the first time that I've ever seen, really exciting new platforms being built there."

And at some point Altman tells Gates he's optimistic that AI could contribute to helping humans get along with each other.
Education

The Billionaires Spending a Fortune To Lure Scientists Away From Universities (nytimes.com)

An anonymous reader quotes a report from the New York Times: In an unmarked laboratory stationed between the campuses of Harvard and the Massachusetts Institute of Technology, a splinter group of scientists is hunting for the next billion-dollar drug. The group, bankrolled with $500 million from some of the wealthiest families in American business, has created a stir in the world of academia by dangling seven-figure paydays to lure highly credentialed university professors to a for-profit bounty hunt. Its self-described goal: to avoid the blockages and paperwork that slow down the traditional paths of scientific research at universities and pharmaceutical companies, and discover scores of new drugs (at first, for cancer and brain disease) that can be produced and sold quickly.

Braggadocio from start-ups is de rigueur, and plenty of ex-academics have started biotechnology companies, hoping to strike it rich on their one big discovery. This group, rather boastfully named Arena BioWorks, borrowing from a Teddy Roosevelt quote, doesn't have one singular idea, but it does have a big checkbook. "I'm not apologetic about being a capitalist, and that motivation from a team is not a bad thing," said the technology magnate Michael Dell, one of the group's big-money backers. Others include an heiress to the Subway sandwich fortune and an owner of the Boston Celtics. The wrinkle is that for decades, many drug discoveries have not just originated at colleges and universities, but also produced profits that helped fill their endowment coffers. The University of Pennsylvania, for one, has said it earned hundreds of millions of dollars for research into mRNA vaccines used against Covid-19. Under this model, any such windfall would remain private. [...]

The five billionaires backing Arena include Michael Chambers, a manufacturing titan and the wealthiest man in North Dakota, and Elisabeth DeLuca, the widow of a founder of the Subway chain. They have each put in $100 million and expect to double or triple their investment in later rounds. In confidential materials provided to investors and others, Arena describes itself as "a privately funded, fully independent, public good." Arena's backers said in interviews that they did not intend to entirely cut off their giving to universities. Duke turned down an offer from Mr. Pagliuca, an alumnus and board member, to set up part of the lab there. Mr. Dell, a major donor to the University of Texas hospital system in his hometown, Austin, leased space for a second Arena laboratory there. [Stuart Schreiber, a longtime Harvard-affiliated researcher who quit to be Arena's lead scientist] said it would require years -- and billions of dollars in additional funding -- before the team would learn whether its model led to the production of any worthy drugs. "Is it going to be better or worse?" Dr. Schreiber said. "I don't know, but it's worth a shot."

It's funny.  Laugh.

AI-Generated George Carlin Drops Comedy Special (variety.com)

Michaela Zee reports via Variety: More than 15 years after his death, stand-up comedian George Carlin has been brought back to life in an artificial intelligence-generated special called "George Carlin: I'm Glad I'm Dead." The hour-long special, which dropped on Tuesday, comes from Dudesy, a comedy AI that hosts a podcast and YouTube show with "Mad TV" alum Will Sasso and podcaster Chad Kultgen.

"I just want to let you know very clearly that what you're about to hear is not George Carlin. It's my impersonation of George Carlin that I developed in the exact same way a human impressionist would," Dudesy said at the beginning of the special. "I listened to all of George Carlin's material and did my best to imitate his voice, cadence and attitude as well as the subject matter I think would have interested him today. So think of it like Andy Kaufman impersonating Elvis or like Will Ferrell impersonating George W. Bush."

In the stand-up special, the AI-generated impression of Carlin, who died in 2008 of heart failure, tackled prevalent topics like mass shootings, the American class system, streaming services, social media and AI itself. "There's one line of work that is most threatened by AI -- one job that is most likely to be completely erased because of artificial intelligence: stand-up comedy," AI-generated Carlin said. "I know what all the stand-up comics across the globe are saying right now: 'I'm an artist and my art form is too creative, too nuanced, too subtle to be replicated by a machine. No computer program can tell a fart joke as good as me.'"

Kelly Carlin, the late stand-up comedian's daughter, posted a statement in response to the special: "My dad spent a lifetime perfecting his craft from his very human life, brain and imagination. No machine will ever replace his genius. These AI generated products are clever attempts at trying to recreate a mind that will never exist again. Let's let the artist's work speak for itself. Humans are so afraid of the void that we can't let what has fallen into it stay there.

"Here's an idea, how about we give some actual living human comedians a listen to? But if you want to listen to the genuine George Carlin, he has 14 specials that you can find anywhere."
Medicine

New 'MindEar' App Can Reduce Debilitating Impact of Tinnitus, Say Researchers

Researchers have designed an app to reduce the impact of tinnitus, an often debilitating condition that manifests via a ringing sound or perpetual buzzing. The Guardian reports: While there is no cure, there are a number of ways of managing the condition, including cognitive behavioural therapy (CBT). This helps people to reduce their emotional connection to the sound, allowing the brain to learn to tune it out. However, CBT can be expensive and difficult for people to access. Researchers have created an app, called MindEar, that provides CBT through a chatbot with other approaches such as sound therapy. "What we want to do is empower people to regain control," said Dr Fabrice Bardy, the first author of the study from the University of Auckland -- who has tinnitus.

Writing in the journal Frontiers in Audiology and Otology, Bardy and colleagues report how 28 people completed the study, 14 of whom were asked to use the app's virtual coach for 10 minutes a day for eight weeks. The other 14 participants were given similar instructions with four half-hour video calls with a clinical psychologist. The participants completed online questionnaires before the study and after the eight-week period. The results reveal six participants given the app alone, and nine who were also given video calls, showed a clinically significant decrease in the distress caused by tinnitus, with the extent of the benefit similar for both groups. After a further eight weeks, a total of nine participants in both groups reported such improvements.
Science

Scientists Discover 100 To 1000 Times More Plastics In Bottled Water (washingtonpost.com) 204

An anonymous reader quotes a report from the Washington Post: People are swallowing hundreds of thousands of microscopic pieces of plastic each time they drink a liter of bottled water, scientists have shown -- a revelation that could have profound implications for human health. A new paper released Monday in the Proceedings of the National Academy of Sciences found about 240,000 particles in the average liter of bottled water, most of which were "nanoplastics" -- particles measuring less than one micrometer (less than one-seventieth the width of a human hair). [...]

The typical methods for finding microplastics can't be easily applied to finding even smaller particles, but Min co-invented a method that involves aiming two lasers at a sample and observing the resonance of different molecules. Using machine learning, the group was able to identify seven types of plastic molecules in a sample of three types of bottled water. [...] The new study found pieces of PET (polyethylene terephthalate), which is what most plastic water bottles are made of, and polyamide, a type of plastic that is present in water filters. The researchers hypothesized that this means plastic is getting into the water both from the bottle and from the filtration process.

Researchers don't yet know how dangerous tiny plastics are for human health. In a large review published in 2019, the World Health Organization said there wasn't enough firm evidence linking microplastics in water to human health, but described an urgent need for further research. In theory, nanoplastics are small enough to make it into a person's blood, liver and brain. And nanoplastics are likely to appear in much larger quantities than microplastics -- in the new research, 90 percent of the plastic particles found in the sample were nanoplastics, and only 10 percent were larger microplastics. Finding a connection between microplastics and health problems in humans is complicated -- there are thousands of types of plastics, and over 10,000 chemicals used to manufacture them. But at a certain point, [...] policymakers and the public need to prepare for the possibility that the tiny plastics in the air we breathe, the water we drink and the clothes we wear have serious and dangerous effects.
"You still have a lot of people that, because of marketing, are convinced that bottled water is better," said Sherri Mason, a professor and director of sustainability at Penn State Behrend in Erie. "But this is what you're drinking in addition to that H2O."
Cellphones

Will Switching to a Flip Phone Fight Smartphone Addiction? (omanobserver.om) 152

"This December, I made a radical change," writes a New York Times tech reporter — ditching their $1,300 iPhone 15 for a $108 flip phone.

"It made phone calls and sent texts, and that was about it. It didn't even have Snake on it..." The decision to "upgrade" to the Journey was apparently so preposterous that my carrier wouldn't allow me to do it over the phone.... Texting anything longer than two sentences involved an excruciating amount of button pushing, so I started to call people instead. This was a problem because most people don't want their phone to function as a phone... [Most voicemails] were never acknowledged. It was nearly as reliable a method of communication as putting a message in a bottle and throwing it out to sea...

My black clamshell of a phone had the effect of a clerical collar, inducing people to confess their screen time sins to me. They hated that they looked at their phone so much around their children, that they watched TikTok at night instead of sleeping, that they looked at it while they were driving, that they started and ended their days with it. In a 2021 Pew Research survey, 31 percent of adults reported being "almost constantly online" — a feat possible only because of the existence of the smartphone.

This was the most striking aspect of switching to the flip. It meant the digital universe and its infinite pleasures, efficiencies and annoyances were confined to my computer. That was the source of people's skepticism: They thought I wouldn't be able to function without Uber, not to mention the world's knowledge, at my beck and call. (I grew up in the '90s. It wasn't that bad...)

"Do you feel less well-informed?" one colleague asked. Not really. Information made its way to me, just slightly less instantly. My computer still offered news sites, newsletters and social media rubbernecking.

There were disadvantages — and not just living without Google Maps. ("I've got an electric vehicle, and upon pulling into a public charger, low on miles, realized that I could not log into the charger without a smartphone app... I received a robot vacuum for Christmas ... which could only be set up with an iPhone app.") Two-factor authentication was impossible.

But "Despite these challenges, I survived, even thrived during the month. It was a relief to unplug my brain from the internet on a regular basis and for hours at a time. I read four books... I felt that I had more time, and more control over what to do with it... my sleep improved dramatically."

"I do plan to return to my iPhone in 2024, but in grayscale and with more mindfulness about how I use it."
Math

There's a Big Difference In How Your Brain Processes the Numbers 4 and 5 (sciencealert.com) 81

Longtime Slashdot reader fahrbot-bot shares a report from ScienceAlert: According to a new study [published in Nature Human Behavior], the human brain has two separate ways of processing numbers of things: one system for quantities of four or fewer, and another system for five and up. Presented with four or fewer objects, humans can usually identify the sum at first glance, without counting. And we're almost always right. This ability is known as "subitizing," a term coined by psychologists last century, and it's different from both counting and estimating. It refers to an uncanny sense of immediately knowing how many things you're looking at, with no tallying or guessing required.

While we can easily subitize quantities up to four, however, the ability disappears when we're looking at five or more things. If asked to instantly quantify a group of seven apples, for example, we tend to hesitate and estimate, taking slightly longer to respond and still providing less precise answers. Since our subitizing skills vanish so abruptly for quantities larger than four, some researchers have suspected our brains use two distinct processing methods, specialized for either small or large quantities. "However, this idea has been disputed up to now," says co-author Florian Mormann, a cognitive neurophysiologist from the Department of Epileptology at the University Hospital Bonn. "It could also be that our brain always makes an estimate but the error rates for smaller numbers of things are so low that they simply go unnoticed."

Previous research involving some of the new study's authors showed that human brains have neurons responsible for each number, with certain nerve cells firing selectively in response to certain quantities. Some neurons fire mainly when a person sees two of something, they found, while others show a similar affinity for their own number of visual elements. Yet many of these neurons also fire in response to slightly smaller or larger numbers, the researchers note, with a weaker reaction for quantities further removed from their numerical focus. "A brain cell for a number of 'seven' elements thus also fires for six and eight elements but more weakly," says neurobiologist Andreas Nieder from the University of Tübingen. "The same cell is still activated but even less so for five or nine elements."

This kind of "numerical distance effect" also occurs in monkeys, as Nieder has shown in previous research. Among humans, however, it typically happens only when we see five or more things, hinting at some undiscovered difference in the way we identify smaller numbers. "There seems to be an additional mechanism for numbers of around less than five elements that makes these neurons more precise," Nieder says. Neurons responsible for lower numbers are able to inhibit other neurons responsible for adjacent numbers, the study's authors report, thus limiting any mixed signals about the quantity in question. When a trio-specializing neuron fires, for example, it also inhibits the neurons that typically fire in response to groups of two or four things. Neurons for the number five and beyond apparently lack this mechanism.
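The tuning-plus-inhibition account above can be sketched as a toy model. This is an illustration of the idea only, not the study's analysis; the Gaussian tuning width and the suppression factor are invented for the example:

```python
import numpy as np

def tuning_curve(preferred, stimulus, width=1.0):
    """Gaussian tuning: a neuron responds most to its preferred number
    and more weakly to nearby quantities (the 'numerical distance effect')."""
    return np.exp(-((stimulus - preferred) ** 2) / (2 * width ** 2))

def population_response(stimulus, max_number=9, width=1.0, inhibit_below=5):
    """Responses of neurons preferring 1..max_number to one quantity.
    For quantities below `inhibit_below`, the winning neuron suppresses
    its immediate neighbours (lateral inhibition), sharpening the code
    for small numbers; larger numbers lack this mechanism."""
    preferred = np.arange(1, max_number + 1)
    r = tuning_curve(preferred, stimulus, width)
    if stimulus < inhibit_below:
        winner = np.argmax(r)
        for i in range(len(r)):
            if abs(i - winner) == 1:
                r[i] *= 0.2  # made-up suppression factor for neighbours
    return preferred, r
```

Running `population_response(3)` versus `population_response(7)` shows the asymmetry the study describes: both populations peak at the correct number, but the neighbouring-number neurons are far quieter for 3 than for 7.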

Science

Novel Helmet Liner 30 Times Better At Stopping Concussions (newatlas.com) 50

An anonymous reader quotes a report from New Atlas: Researchers have developed a new, lightweight foam made from carbon nanotubes that, when used as a helmet liner, absorbed the kinetic energy caused by an impact almost 30 times better than liners currently used in US military helmets. The foam could prevent or significantly reduce the likelihood of concussion in military personnel and sportspeople. Among sportspeople and military vets, traumatic brain injury (TBI) is one of the major causes of permanent disability and death. Injury statistics show that the majority of TBIs, of which concussion is a subtype, are associated with oblique impacts, which subject the brain to a combination of linear and rotational kinetic energy forces and cause shearing of the delicate brain tissue.

To improve their effectiveness, helmets worn by military personnel and sportspeople must employ a liner material that limits both. This is where researchers from the University of Wisconsin-Madison come in. Determined to prevent -- or lessen the effect of -- TBIs caused by knocks to the body and head, they've developed a new lightweight foam material for use as a helmet liner. For the current study, lead researcher Ramathasan Thevamaran built upon his previous research into vertically aligned carbon nanotube (VACNT) foams -- carefully arranged layers of carbon cylinders one atom thick -- and their exceptional shock-absorbing capabilities. Current helmets attempt to reduce rotational motion by allowing a sliding motion between the wearer's head and the helmet during impact. However, the researchers say this movement doesn't dissipate energy in shear and can jam when severely compressed following a blow. Instead, their novel foam doesn't rely on sliding layers.

VACNT foam sidesteps this shortcoming via its unique deformation mechanism. Under compression, the VACNTs undergo collective sequentially progressive buckling, from increased compliance at low shear strain levels to a stiffening response at high strain levels. The formed compression buckles unfold completely, enabling the VACNT foam to accommodate large shear strains before returning to a near initial state when the load is removed. The researchers found that at 25% precompression, the foam exhibited almost 30 times higher energy dissipation in shear -- up to 50% shear strain -- than polyurethane-based elastomeric foams of similar density.
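The "energy dissipation in shear" being compared here is, mechanically, the area enclosed by the stress-strain hysteresis loop over one load-unload cycle. A minimal sketch of that calculation, using made-up linear loading and unloading branches rather than the paper's VACNT data:

```python
import numpy as np

def dissipated_energy(strain, stress):
    """Energy dissipated per unit volume over a closed load-unload cycle:
    the area enclosed by the stress-strain hysteresis loop, computed with
    the trapezoid rule along the closed path."""
    strain = np.asarray(strain, dtype=float)
    stress = np.asarray(stress, dtype=float)
    ds = np.diff(strain)
    mid = 0.5 * (stress[1:] + stress[:-1])
    return abs(np.sum(mid * ds))

# Illustrative cycle to 50% shear strain: stiffer on loading than unloading,
# so the two branches enclose an area (the dissipated energy).
s_up = np.linspace(0.0, 0.5, 100)   # loading branch
s_dn = np.linspace(0.5, 0.0, 100)   # unloading branch
loop_strain = np.concatenate([s_up, s_dn])
loop_stress = np.concatenate([2.0 * s_up, 1.0 * s_dn])
E = dissipated_energy(loop_strain, loop_stress)
```

For these linear branches the enclosed area is 0.5 * (2.0 - 1.0) * 0.5**2 = 0.125, so the function can be checked by hand; a material that dissipates more energy simply traces a fatter loop.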
The study has been published in the journal Experimental Mechanics.
Medicine

Vibrating Pill May Give Dieters a Feeling of Fullness, Study Suggests (theguardian.com) 56

Scientists have developed a vibrating pill that, when swallowed before eating, can create a feeling of fullness. The Guardian reports: The research, which has yet to be carried out in humans, shows that after 30 minutes of activity by the Vibes pill, pigs ate on average almost 40% less food in the following half hour than they did without the device, and gained weight more slowly. The Vibes name is an acronym derived from the pill's full title -- Vibrating Ingestible BioElectronic Stimulator. The work in pigs suggests the vibrations activate stretch receptors in the stomach, simulating the presence of food. This results in signals being sent to the hypothalamus in the brain via the vagus nerve, increasing levels of various hormones that give rise to a feeling of fullness and decreasing those that result in feelings of hunger.

"We envision the Vibes pill being ingested on a relatively empty stomach 20 to 30 min before anticipated meals to trigger the desired sensation of satiety early in the meal," the team write, adding that when produced at scale, the cost of the pills is expected to be in the cents to one dollar range. The vibrations, which are powered by a battery encased in the swallowed capsule, can be triggered when stomach acid dissolves a membrane around the pill, or by a timer. The researchers say the pills, which are about the size of a large vitamin tablet, offer a non-invasive, temporary therapy, without the need for weight-loss surgery, and exit the body with other solid waste -- meaning in humans they are flushed down the toilet. However, they suggest it could be possible to develop pills that are implanted, or stay in the stomach, to reduce the need for people to repeatedly take them, should they require continuing therapy.
Further reading: Man Reports PillCam Stuck In His Gut For Over 12 Weeks
AI

New AI Transistor Works Just Like the Human Brain (studyfinds.org) 44

Longtime Slashdot reader FudRucker quotes a report from Study Finds: Researchers from Northwestern University, Boston College, and the Massachusetts Institute of Technology (MIT) have developed a new synaptic transistor that works just like the human brain. This advanced device, capable of both processing and storing information simultaneously, marks a notable shift from traditional machine-learning tasks to performing associative learning -- similar to higher-level human cognition. This study introduces a device that operates effectively at room temperatures, a notable improvement over previous brain-like computing devices that required extremely cold conditions to keep their circuits from overheating. With its fast operation, low energy consumption, and ability to retain information without power, the new transistor is well-suited for real-world applications.

"The brain has a fundamentally different architecture than a digital computer," says study co-author Mark Hersam, the Walter P. Murphy Professor of Materials Science and Engineering at Northwestern's McCormick School of Engineering, in a university release. "In a digital computer, data move back and forth between a microprocessor and memory, which consumes a lot of energy and creates a bottleneck when attempting to perform multiple tasks at the same time. On the other hand, in the brain, memory and information processing are co-located and fully integrated, resulting in orders of magnitude higher energy efficiency. Our synaptic transistor similarly achieves concurrent memory and information processing functionality to more faithfully mimic the brain."

Hersam and his team employed a novel strategy involving moiré patterns, a type of geometric design formed when two patterns are overlaid. By stacking two-dimensional materials like bilayer graphene and hexagonal boron nitride and twisting them to form a moiré pattern, they could manipulate the electronic properties of the graphene layers. This manipulation allowed for the creation of a synaptic transistor with enhanced neuromorphic functionality at room temperature. The device's testing involved training it to recognize patterns and similarities, a form of associative learning. For instance, if trained to identify a pattern like "000," the transistor could distinguish that "111" is more similar to "000" than "101," demonstrating a higher level of cognitive function. This ability to process complex and imperfect inputs has significant implications for real-world AI applications, such as improving the reliability of self-driving vehicles in challenging conditions.
The study has been published in the journal Nature.
Medicine

California Workers Say Herbicide Is Giving Them Parkinson's (latimes.com) 43

An anonymous reader quotes a report from the Los Angeles Times: It was the late 1980s when Gary Mund felt his pinky tremble. At first it seemed like a random occurrence, but pretty quickly he realized something was seriously wrong. Within two years, Mund -- a crew worker with the Eastern Municipal Water District in Riverside County -- was diagnosed with Parkinson's disease. The illness would eventually consume much of his life, clouding his speech, zapping most of his motor skills and taking away his ability to work and drive. "It sucks," said Mund, 69. He speaks tersely, because every word is a hard-won battle. "I was told the herbicide wouldn't hurt you."

The herbicide is paraquat, an extremely powerful weed killer that Mund sprayed on vegetation as part of his job from about 1980 to 1985. Mund contends the product is responsible for his disease, but the manufacturer denies there is a causal link between the chemical and Parkinson's. Paraquat is manufactured by Syngenta, a Swiss-based company owned by the Chinese government. The chemical is banned in at least 58 countries -- including China and Switzerland -- due to its toxicity, yet it continues to be a popular herbicide in California and other parts of the United States. But research suggests the chemical may cross the blood-brain barrier in a manner that triggers Parkinson's disease, a progressive, neurodegenerative disorder that affects movement. Now, Mund is among thousands of workers suing Syngenta seeking damages and hoping to see the chemical banned.

Since 2017, more than 3,600 lawsuits have been filed in state and federal courts seeking damages from exposure to paraquat products, according to Syngenta's 2022 financial report (PDF). [...] Paraquat is 28 times more toxic than another controversial herbicide, Roundup, according to a report from the Pesticide Action Network. (Roundup has been banned in several parts of California, including a 2019 moratorium by the Los Angeles County Board of Supervisors forbidding its use by county departments.) Paraquat also has other known health effects. It is listed as "highly toxic" on the U.S. Environmental Protection Agency's website, which says that "one small sip can be fatal and there is no antidote." The EPA is currently reviewing paraquat's approval status. However, both the EPA and Syngenta cited a 2020 U.S. government Agricultural Health Study that found there is no clear link between paraquat exposure and Parkinson's disease. A 2021 review of reviews similarly found that there is no causal relationship.

Christmas Cheer

Amazon, Etsy, Launch Categories With 'Gifts For Programmers' (thenewstack.io) 20

Long-time Slashdot reader destinyland writes: It's a question that comes up all the time on Reddit. Etsy even created a special page for programmer-themed gift suggestions (showing more than 5,000 results). While CNET sticks to broader lists of "tech gifts" — and a separate list for "Star Wars gifts" — other sites around the web have been specifically homing in on programmer-specific suggestions. (Blue light-blocking glasses... A giant rubber duck... The world's strongest coffee... A printer that transfers digital images onto cheese...)

So while in years past Amazon said it laughed at customer reviews for cans of uranium, this year it has added a special section that's entirely dedicated to Gifts for Computer Programmers, according to this funny rundown of 2023's "Gifts for Programmers" (which ends up recommending ChatGPT gift cards and backyard office sheds):

From the article: [Amazon's Gifts for Programmers section] shows over 3,000 results, with geek-friendly subcategories like "Glassware & Drinkware" and "Novelty Clothing"... For the coder in your life, Amazon offers everything from brainteasing programming puzzles to computer-themed jigsaw puzzles. Of course, there's also a wide selection of obligatory funny T-shirts... But this year there's also tech-themed ties and motherboard-patterned socks...

Some programmers, though, might prefer a gift that's both fun and educational. And what's more entertaining than using your Python skills to program a toy robot dog...? But if you're shopping for someone who's more of a cat person, Petoi sells a kit for building a programmable (and open source) cat robot named "Nybble". The sophisticated Arduino-powered feline can be programmed with Python and C++ (as well as block-based coding)... [part of] the new community that's building around "OpenCat", the company's own quadruped robotic pet framework (open sourced on GitHub).

Science

Human Brain Cells Hooked Up To a Chip Can Do Speech Recognition (technologyreview.com) 56

An anonymous reader quotes a report from MIT Technology Review: Brain organoids, clumps of human brain cells grown in a dish, can be hooked up to an electronic chip and carry out simple computational tasks, a new study shows. Feng Guo and his team at Indiana University Bloomington generated a brain organoid from stem cells, attached it to a computer chip, and connected their setup, known as Brainoware, to an AI tool. They found that this hybrid system could process, learn, and remember information. It was even able to carry out some rudimentary speech recognition. The work, published today in Nature Electronics, could one day lead to new kinds of bio-computers that are more efficient than conventional computers.

"This is a first demonstration of using brain organoids [for computing]," says Guo. "It's exciting to see the possibilities of organoids for biocomputing in the future." With Brainoware, Guo aimed to use actual brain cells to send and receive information. When the researchers applied electrical stimulation to the hybrid system they'd built, Brainoware responded to those signals, and changes occurred in its neural networks. According to the researchers, this result suggests that the hybrid system did process information, and could perhaps even perform computing tasks without supervision. Guo and his colleagues then attempted to see if Brainoware could perform any useful tasks. In one test, they used Brainoware to try to solve mathematical equations. They also gave it a benchmark test for speech recognition, using 240 audio clips of eight people pronouncing Japanese vowels. The clips were converted into electrical signals and applied to the Brainoware system. This generated signals in the neural networks of the brain organoid, which were then fed into an AI tool for decoding.

The researchers found that the combined brain organoid and AI system could decode the signals from the audio recordings, which is a form of speech recognition, says Guo. "But the accuracy was low," he says. Although the system improved with training, reaching an accuracy of about 78%, it was still less accurate than artificial neural networks, according to the study. Lena Smirnova, an assistant professor of public health at Johns Hopkins University, points out that brain organoids do not have the ability to truly hear speech but simply exhibit "a reaction" to pulses of electrical stimulation from the audio clips. And the study did not demonstrate whether Brainoware can process and store information over the long term or learn multiple tasks. Generating brain cell cultures in a lab and maintaining them long enough to perform computations is also a huge undertaking. Still, she adds, "it's a really good demonstration that shows the capabilities of brain organoids."
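The division of labor described here -- a fixed biological medium whose evoked activity is decoded by a separately trained readout -- resembles reservoir computing. Below is a toy sketch of that pipeline, with a random numpy nonlinearity standing in for the organoid and synthetic two-class data standing in for the vowel clips; everything in it is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random "reservoir": an untrained nonlinear map from an 8-dim
# input to a 200-dim state. In Brainoware, the organoid's evoked neural
# activity plays this role; only the readout below is trained.
W_in = rng.normal(size=(200, 8))

def reservoir(x):
    return np.tanh(W_in @ x)

# Synthetic stand-in for the vowel features: two overlapping classes.
n = 100
X = np.vstack([rng.normal(loc=-0.5, size=(n, 8)),
               rng.normal(loc=+0.5, size=(n, 8))])
y = np.array([0] * n + [1] * n)

# Trained linear readout: ridge regression on the reservoir states.
S = np.array([reservoir(x) for x in X])
lam = 1e-2
w = np.linalg.solve(S.T @ S + lam * np.eye(S.shape[1]),
                    S.T @ (2 * y - 1))       # targets in {-1, +1}
pred = (S @ w > 0).astype(int)
accuracy = (pred == y).mean()
```

The reservoir itself never changes; all the "learning" lives in the cheap linear readout, which is what makes the paradigm attractive for hard-to-modify substrates like living tissue.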
