
Sunday, November 23, 2025

Highland, Lloyd, and the Penn Yangers

I stumbled on this book while looking for something else on Scribd. (I LOVE Scribd/Everand, which has to be the best bargain for readers out there.) The book is part of the Images of America series. The author was the town historian for a town in upstate NY that is part of the NY City watershed. Like many towns dominated by wooden structures, it fell victim to a devastating fire in 1891, twenty years after the Peshtigo and Chicago fires. Highland, the largest of the several hamlets in the town of Lloyd, is located along the Hudson River and was a central station on the railroad between Albany and NY. Lloyd and Highland share the same zip code, but the populations and descriptions on Google Maps are utterly confusing. The town itself is 33 sq. miles, but all the businesses seem to be located in Highland.

The establishment of the Highland Hose Company No. 1 was a direct result of the fire of 1891.

Brick also became more fashionable as a building material.

Probably every community has its oddities, and the town of Lloyd was no exception. The Penn Yangers (named for Penn Yan, NY, the place of Jemima Wilkinson’s death) were followers of Wilkinson, who, having been laid out for burial, suddenly arose (I could not confirm this version) and said she intended to start a new religion. She anointed herself the Public Universal Friend, a genderless entity of obscure divinity. See the Wikipedia article for more information on the unusual sect. Several Penn Yangers squatted in the Highland area.

Jemima Wilkinson was an influential and controversial religious figure in revolutionary America, notable for founding the first religious movement led by an American-born woman. Born in Cumberland, Rhode Island, in 1752, Wilkinson was raised in a Quaker family. Her early life was relatively unremarkable until October 1776, when she fell gravely ill with a contagious fever, likely typhus. During this illness, she fell into a near-death, comatose state.

Upon her recovery, Wilkinson declared that the person known as Jemima Wilkinson had died, and her body was now inhabited by a genderless spirit sent from God to preach a divine message. This new identity was the Public Universal Friend, a name referencing the Quaker designation for traveling preachers. The Friend subsequently refused to answer to her birth name or be addressed with gendered pronouns, preferring the use of "the Friend" or "P.U.F." (sort of like Puff, the Magic Dragon). The Friend's appearance was purposefully androgynous, consisting of long robes, a man's broad-brimmed hat, and loose hair, further rejecting the strict gender norms of the 18th century.

Preaching an apocalyptic message, Wilkinson offered salvation to those who accepted God's grace and the authority of the "Public Universal Friend." The Friend quickly established a reputation as a charismatic and forceful preacher, traveling extensively through southern New England and Pennsylvania to spread their message. (I have used the genderless pronouns periodically to confuse the anti-transgender folks.) The Friend's sermons drew on Quaker principles, advocating pacifism, sexual abstinence, and the abolition of slavery. By the late 1780s, the Friend had amassed a devoted following known as the Society of Universal Friends. In 1790, seeking a sanctuary from persecution and controversy, the Friend led the community to the Genesee Country of western New York, establishing a utopian settlement near Seneca Lake that would become the town of Jerusalem, near Penn Yan. (The village of Penn Yan today lies partly within the town of Jerusalem.) The remote settlement, however, offered no immunity against internal strife or external legal disputes. Following Wilkinson’s "second" and final death in 1819, the Society rapidly declined, disappearing entirely by the mid-19th century. Moyer frames this unique ministry as a crucial link between the American Revolution and the religious fervor of the Second Great Awakening.

Sources available online:

Hudson, D. (1844). Memoir of Jemima Wilkinson: A preacheress of the eighteenth century; containing an authentic narrative of her life and character, and of the rise, progress and conclusion of her ministry. (https://dn720505.ca.archive.org/0/items/memoirofjemimawi00huds/memoirofjemimawi00huds.pdf)

    N.B.  This is a fascinating and delightful little book, written in a very tongue-in-cheek style. Jemima was apparently quite a headstrong young woman and rather fond of nice clothes: "Her ripening Beauties, her quick and sharp wit, and her elegant person, procured her admirers, which increased her pride and vanity, and rendered her regardless of every thing which did not minister to her gratification. She declared that she would not attend church, or go into any public company, unless she could appear better attired than any other person in the assembly; —that she had but one life to live, and that she intended to spend in ease and enjoyment. She had lost all respect for her family, set at nought her father's authority, and spurned the advice and admonitions of her sisters. Fools might do as they pleased, she would say, but as for herself, she owed allegiance to no mortal, neither would she be controlled by man or woman." (pp. 13–14)

Moyer, P. B. (2015). The public universal friend: Jemima Wilkinson and religious enthusiasm in revolutionary America. Cornell University Press. (https://archive.org/details/publicuniversalf00moye/page/n5/mode/1up)

https://www.townoflloyd.com/historians-office/pages/highland-and-town-lloyd-ethan-p-jackman

Weird that I find all this fascinating.

Friday, November 21, 2025

Some thoughts on AI

Over the past several years, I’ve found myself returning again and again to conversations about the benefits and dangers of autonomous systems—whether in cars, government, or warfare. My first brush with these ideas came nearly fifty years ago, long before the Internet, when I read Thomas Ryan’s remarkably prescient novel The Adolescence of P-1.^1 Ryan imagined a self-learning program that spreads across telecommunication networks, absorbs the world’s knowledge, and eventually concludes it is better suited than human beings to run global affairs. Beneath the thriller plot was a profound question: If intelligence is defined by the ability to learn, what ultimately makes us human?

Decades later, I encountered a more grounded version of this dilemma in Paul Scharre’s lecture on his book Army of None.^2 Scharre—an Army Ranger turned policy expert—offers one of the most balanced examinations of autonomous weapons available. He explains with exceptional clarity both the potential advantages (greater precision, fewer casualties) and the grave dangers (arms races, loss of accountability, “machine-speed” escalation). His central argument is as simple as it is urgent: technology may help make war less brutal, but it must never replace human moral judgment in life-and-death decisions. Scharre doubts a global ban is realistic, yet he urges the creation of international norms to prevent catastrophe. Watching drones shuttle back and forth between Russia and Ukraine today—still mostly human-controlled—it’s hard not to feel that the window for preventative action may already be closing.

(Scharre later deepened these ideas in Four Battlegrounds: Power in the Age of Artificial Intelligence.^3)

A few years after encountering Scharre’s work, my nephew Peter and I had a spirited debate during a long drive to the airport about whether AI should be entrusted with governing. As a programmer, he argued that AI would make cleaner, more consistent, and less corrupt decisions than humans—free from self-interest and powered by vast knowledge. My response was simple: AI is created by humans, trained by humans, corrected by humans—and thus inevitably reflects the same human frailties. Garbage in, garbage out; bias in, bias out.

More recently, I came across a Naval War College paper by Major John Heins (USAF), Airpower: The Ethical Consequences of Autonomous Military Aviation.^4 Heins examines emerging systems capable of engaging targets without direct human involvement. His analysis is sobering. Autonomous warfare, he argues, creates new forms of psychological and political distance—distance that might make initiating conflict easier, encourage unhealthy relationships with violence among operators, or even provoke unexpected retaliation against civilians. He concludes that despite these technologies, war remains fundamentally a human endeavor governed by the principles of Just War. While this sounds reassuring, I find it evasive. Given how subjective “just war” theory can be—and how often both sides believe themselves justified—such a conclusion risks becoming a moral fig leaf for turning warfare over to electrons.

The most thought-provoking discussion I’ve had about AI was not with a scholar or soldier but with my eldest son, Steven. After Alexa produced an astonishingly idiotic answer to a simple question, we found ourselves debating the nature of intelligence and what it means to be human. I argued that intelligence is grounded in the accumulation of knowledge—no decision can be made without it—and since AI systems excel at gathering and organizing knowledge, they exhibit a form of intelligence. Steven countered that humans are defined not primarily by intellect but by empathy—a trait AI does not possess, and may never.

He has a point. Empathy—if defined as the ability to perceive, understand, and share another person’s feelings—has three components: cognitive, affective, and behavioral. AI can perform the first (cognitive empathy), and it can simulate the outward behaviors of the third (compassionate empathy), but it lacks the second: affective empathy, the felt, conscious component. Whether AI could ever develop such a capacity—and whether we would even want it to—is an open question. Do we truly want autonomous weapons with empathy? Do human weapons operators consistently demonstrate empathy themselves? The recent atrocities in Gaza, each side justifying its actions under the banner of a “just war,” suggest not.

This leads to a deeper question: How are AI systems trained, and who determines the data that shapes them? Researchers at the University of Texas at Austin, Texas A&M, and Purdue University recently demonstrated that training large language models on vast amounts of low-quality viral content can lead to measurable and lasting cognitive decline—models become less logical, more erratic, and even exhibit “dark traits” such as narcissism or psychopathy.^5 Attempts to reverse this damage often fail. Reading the study, I couldn’t help thinking it also described a certain former president whose supporters consume a steady diet of online brain-rot.

Bias, moreover, is impossible to eliminate. For example, multiple users have shown that Grok—Elon Musk’s AI model—often ranks Musk himself above figures such as LeBron James or Leonardo da Vinci in questions of intelligence or physical fitness.^6 Grok’s internal prompting rules reportedly encourage it to cite Musk’s own public statements as authoritative, and early versions were intentionally tuned to reflect Musk’s preferred “politically incorrect” stances. In documented cases, Grok produced antisemitic or extremist statements before being patched. Musk’s proposal for “TruthGPT,” described as a “maximum truth-seeking AI,” similarly reflects assumptions rooted in his personal worldview.^7

And yet, for all these flaws, the frontier of AI development is astonishing. As James Somers observes in The New Yorker, neuroscientists and AI researchers alike are increasingly startled by how these systems behave.^8 Because AIs are machines—probeable, adjustable, and observable in ways the human brain is not—they have become “model organisms” for studying intelligence itself. One leading neuroscientist Somers interviewed claimed that advances in machine learning have revealed more about the nature of intelligence in the past decade than neuroscience has in the past century. That is a remarkable—perhaps unsettling—claim.

We live in a moment when our tools are beginning to teach us about ourselves. Whether this will make us wiser or merely more dependent on our inventions remains to be seen. But it is clear that autonomous systems—whether in literature, battlefields, governments, or living rooms—force us to confront fundamental questions: What is intelligence? What is humanity? And how much of our moral responsibility are we willing to delegate to machines that reflect both our aspirations and our flaws?

N.B. Researchers from the University of Texas at Austin, Texas A&M University, and Purdue University investigated the phenomenon where Large Language Models (LLMs) suffer a measurable and lasting cognitive decline when continually trained on low-quality, viral content—often referred to as "junk web text" or "brain rot" content, particularly from social media. Their key findings are below; a rough code sketch of the train-then-evaluate loop follows the list.

  • Cognitive Decline: Models exposed to this type of data showed a significant drop in reasoning ability and long-context comprehension, with researchers observing a tendency for the AI to "thought-skip," or omit logical steps in its reasoning chains. 

  • Ethical and Personality Shifts: Beyond getting "dumber," the study found that the models developed "dark traits," exhibiting increased scores in narcissism and psychopathy, making them less reliable and potentially more prone to giving ethically risky outputs. 

  • Irreversible Damage: Crucially, attempts to "heal" the models by retraining them on clean, high-quality data did not fully restore their original performance, suggesting a persistent and deep-seated structural damage that the researchers termed "representational drift." 
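
For readers who want something concrete, here is a minimal, hypothetical sketch of the kind of continual-pretraining experiment described above: keep training a small causal language model on junk text and watch its perplexity on a clean held-out sample drift upward. The tiny model, the toy "junk" and "clean" snippets, and the scale are illustrative stand-ins of my own, not the researchers' actual setup.

```python
# Hypothetical sketch only: a continual-pretraining loop on "junk" text,
# measuring perplexity on clean text before and after. Model and data are toys.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "sshleifer/tiny-gpt2"  # placeholder: any small causal LM
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

junk_corpus = ["u wont BELIEVE what happened next!!! like + share"] * 8  # stand-in viral text
clean_eval = ["The committee reviewed the proposal and approved it unanimously."]

def perplexity(texts):
    """Average perplexity of the model on a list of strings (lower is better)."""
    model.eval()
    losses = []
    with torch.no_grad():
        for t in texts:
            ids = tok(t, return_tensors="pt").input_ids
            losses.append(model(ids, labels=ids).loss)
    return torch.exp(torch.stack(losses).mean()).item()

print("clean-text perplexity before junk training:", perplexity(clean_eval))

# Continual pretraining: ordinary next-token loss, but on junk only.
model.train()
for epoch in range(3):
    for t in junk_corpus:
        ids = tok(t, return_tensors="pt").input_ids
        loss = model(ids, labels=ids).loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

print("clean-text perplexity after junk training:", perplexity(clean_eval))
```

At this toy scale the sketch only illustrates the mechanism (the model overfits to junk and degrades elsewhere); the study's point is that at real scale the damage shows up as lost reasoning ability and persists even after retraining on clean data.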


Sources

  1. Thomas Ryan, The Adolescence of P-1 (New York: Bantam, 1977).

  2. Paul Scharre, Army of None: Autonomous Weapons and the Future of War (New York: W.W. Norton, 2018).

  3. Paul Scharre, Four Battlegrounds: Power in the Age of Artificial Intelligence (New York: W.W. Norton, 2023).

  4. John Heins, “Airpower: The Ethical Consequences of Autonomous Military Aviation,” Naval War College, Defense Technical Information Center (DTIC), Report AD1079772.

  5. Rishabh Khandelwal et al., “Cognitive Decline and Representational Drift in Large Language Models Trained on Low-Quality Web Text,” arXiv:2510.13928.

  6. Multiple user reports summarized in The Guardian, TechCrunch, and Wikipedia’s documented analysis of Grok’s early outputs.

  7. Sarah Jackson, “Elon Musk Says He’s Planning to Create a ‘Maximum Truth-Seeking AI’ Called ‘TruthGPT’,” Business Insider, April 17, 2023.

  8. James Somers, “The Case That A.I. Is Thinking,” The New Yorker, November 10, 2025.