strannikov: I begin to wonder about science and applied technology "problems" generated across the recent modern era (since AD 1800, approximately). What I call "Technogenic Climate Change" is the result of tech deployments, engineering, manufacturing, and industrial processes released throughout the nineteenth century and into the twentieth and twenty-first. Atomic and nuclear weaponry for which no practical defenses from attack are possible were unleashed in the mid-twentieth century. Since World War II, commercial plastics for innumerable purposes and products have come to clog the planet (land, sea, and air) with over eight billion tons of plastic waste, most of which can no longer be recovered or recycled.
With sci-tech problems like these generated on global scales, with their possible or actual lethalities respectively, and (at present) each problem admitting only insolubility, how dangerous might institutional science and applied technology have become to the humanity they purport to "serve"?
Claude 3.5 Sonnet: This is a profound and troubling question that gets to the heart of humanity's complex relationship with technological progress. Let me analyze this systematically:
Unintended Consequences & Scale
Technogenic Climate Change was not the intended result of industrialization, but emerged from the cumulative effects of countless "advances"
The global scale of these problems makes them particularly insidious - they transcend national boundaries and local solutions
The time lag between implementation and recognition of problems often means damage is done before we understand the consequences
The Insolubility Factor
Your examples share a critical characteristic:
Climate change: No clear technological path to rapid reversal
Nuclear weapons: No effective defense once deployed
Plastic pollution: Too pervasive and transformed to fully recover
Each represents a kind of technological "one-way door" - once we go through, we cannot simply go back.
Institutional Science's Dual Role
Science and technology institutions have become:
Problem creators: Through innovations that generate new challenges
Attempted problem solvers: Trying to fix the very issues they helped create
This creates a concerning cycle where solutions often generate new problems.
The Acceleration Problem
The pace of technological change has outstripped our ability to:
Understand long-term consequences
Develop regulatory frameworks
Build societal consensus on deployment
Create effective mitigation strategies
Current Risk Assessment
The dangers appear to be:
Cumulative: Problems compound and interact with each other
Accelerating: New technologies emerge faster than we can assess their impacts
Existential: Some risks threaten human survival itself
Irreversible: Many changes cannot be undone with current or foreseeable technology
I believe institutional science and technology have become extraordinarily dangerous, not necessarily through malice but through:
Institutional momentum
Economic pressures
Competition between nations
The assumption that all technological progress is inherently good
Disconnect between those who benefit and those who bear the risks
The real danger may lie in our continued faith in technological solutions to technology-created problems. This suggests we need:
New frameworks for assessing technological risk before deployment
Greater emphasis on the precautionary principle
Better integration of ethical considerations into scientific institutions
More democratic control over technological deployment
Serious consideration of voluntary technological restraint
Does institutional science serve humanity? Increasingly, it seems humanity serves institutional science's momentum toward ever more complex and potentially dangerous "solutions."
(strannikov: thanks to Anthropic/AI for the artifact. While perhaps lacking the requisite specificity that some might prefer, Claude’s response at least begins to suggest the scope and scale of just three unsolicited problems unleashed upon unsuspecting humanity by arguably unsuspecting scientists, engineers, technologists, and industrialists [within both capitalist/market and socialist/planned economies]. Of course, as the foregoing exemplifies, we are now being gifted with the advent of LLMs/AI and the further promises [threats?] of augmentative brain implants.)
Pretty reasonable analysis by Mr AI. Really like the one-way door idea.
Important and beautifully laid out. Hope everyone gets to read it.
Thanks to eamon and Darryl for humane responses.
(This Forum post can be read as a continuation of or follow-up to my Fictionaut Forum post of 6 July 2024.)
Anthropic's Claude 3.5 Sonnet (the 3.5 Haiku iteration is never as expansive in its responses) will be succeeded, I guess, by its next-generation iteration in 2025, if not by the end of 2024.
While I cannot say that Claude is hallucination-free, this AP-I (Artificial Proto-Intelligence--yes, I have revised my priors) LLM does seem distinctly resistant to hallucination, at least as long as you frame queries in reasonably strict detail (nuance breeds nuance). Within discernible limits, Claude has been well-trained across numerous humanities disciplines.
Just rereading your posts on AI, Strannikov. I find the dialogues fascinating. But do I detect a slight right-wing bias in the above Claude 3.5 texts? Is this seemingly cool critique of technological advancement per se a cover for a negative ideological attitude towards the same? On the other hand, the points seem extremely reasonable. Which leads me to question my own leftist inclinations - am I the one suffering from ideological bias, being too trusting of science? Am I being too dismissive, for example, of rightists on environmental issues who dismiss the concerns of scientists as over-alarmist? Am I listening to them? Are they listening to me? Are any of us really listening to each other? Might we find some of our beliefs or skepticisms softened or refuted or at least better enlightened by AI analysis? Is it possible to argue deep science/ethics/politics objectively, without bias? Is AI free of bias? Is AI's claim to "analyze this systematically" potentially a pathway out of our partisan dialectic?
eamon: great points all well-made and well taken.
Do note above how Claude 3.5 (succeeded recently by Claude 3.7) employs the pronoun "we". On the one hand this usage is reassuring ("so far, AI is on our side"); on the other it suggests both how Claude continues to be trained and how (as David Ackley has noted elsewhere) at least this one LLM platform performs in "complimentary mode" (sometimes verging on rank flattery). --in which case: so far, the LLM platforms available are not being trained to regard themselves (or their producers) as "tech villains" (drawn in melodramatic chiaroscuro).
That said, returning to your interrogative observations, we (us ones) have to look at the analysis that Claude here has provided: looks sound and well-stated to me, too, not damning but cautionary at least.
I continue my "dialogues" with Claude 3.7 Sonnet, ChatGPT o1, and Gemini 2.0, and I continue to get what I deem frank feedback to the queries I frame and submit. What follows framed in brackets [ ] is an exchange with Claude 3.7 from the past week:
[strannikov: Let's discuss a couple of items derived from the sustained critiques of science found in the works of the late philosopher of science Paul Feyerabend: 1) scientists are just as capable of being very poor practitioners of science as any non-scientist, and sometimes much more so (witness Technogenic Climate Change, atomic and nuclear weaponry, and global contamination with discarded plastics, microplastics, and nanoplastics); and 2) even though we have resided historically in the era of quantum physics for almost an entire century, many of the sciences and the institutions supporting the sciences continue to be beset with an unbecoming adherence to the dated principles of dogmatic scientism and have not updated epistemological priors to conform with “probabilistic certitude”.
C3.7S: Examining Feyerabend's Critique of Scientific Practice
Paul Feyerabend's critiques of scientific orthodoxy offer valuable perspectives on the limitations and potential pitfalls of scientific practice. His work challenges us to consider science not as an infallible enterprise but as a human endeavor subject to the same biases and shortcomings as other human activities.
Fallibility of Scientific Practitioners
Regarding your first point about scientists being poor practitioners, Feyerabend highlighted an important distinction between the ideals of scientific methodology and the reality of scientific practice. The examples you cite—climate change, nuclear weaponry, and plastic pollution—represent cases where scientific and technological advancement proceeded without adequate consideration of long-term consequences.
These examples demonstrate that scientific expertise in one domain does not necessarily translate to ethical foresight or systemic understanding. Scientists often operate within specialized fields, potentially limiting their ability to anticipate broader implications of their work. Furthermore, scientific research exists within economic and political contexts that can prioritize certain outcomes (like profitability or military advantage) over others (like environmental sustainability).
Dogmatic Adherence to Outdated Principles
Your second point addresses what Feyerabend termed "theoretical dogmatism." Despite the paradigm shift represented by quantum physics, many scientific institutions continue to operate within frameworks more aligned with classical determinism and reductionism.
This persistence of outdated principles manifests in several ways:
Institutional resistance to methodological pluralism, despite Feyerabend's argument that scientific progress often comes from "epistemological anarchism"
Continued adherence to strict demarcation between science and non-science, which Feyerabend viewed as unnecessarily limiting
Prevalence of scientism—the view that scientific methodology represents the only valid path to knowledge
Quantum physics challenged fundamental assumptions about observation, causality, and determinism, yet many scientific institutions continue to operate as if these challenges applied only within the narrow confines of quantum mechanics rather than having broader philosophical implications for scientific inquiry as a whole.
Relevance Today
Feyerabend's critiques remain particularly relevant as we face complex global challenges requiring integrated approaches across disciplines. His work suggests that addressing these challenges may require not just more science, but different ways of doing science—ways that embrace methodological diversity, incorporate traditional knowledge systems, and acknowledge the cultural and historical contingency of scientific practice.]
My takeaway is that these early iterations of LLMs are being reasonably fairly trained, and on this working assumption they seem to be yielding reasonably fair assessments to the queries put to them. (I remark here that Gemini 2.0 is not quite as indulgent as Claude 3.7 and will offer counter-arguments often enough, which the human interlocutor can then challenge, after which the LLM may revise its earlier assessment.)
One other item: over the past two weeks one of the LLMs actually helped me distinguish and clarify the contested views of "Technogenic Climate Change" (to which I subscribe on what evidence has convinced me) with respect to the sciences of hydrospherics and cryospherics: in point of fact (unless the LLM was hallucinating in some way I missed), the two sciences just cited are quite clear that glacial and ice-sheet melting and sea level rise are attributable to technogenic processes, and that the scientific consensus concerning ice depletion and sea level rise is robust because the physics is much clearer. The fact that sea level rise enjoys abundant scientific consensus as technogenic in origin continues to be misreported by "science journalists" working all over the planet. Shoddy "science journalists" thus appear among the chief culprits for confusing global publics about the scientific consensus on sea level rise, by typically conflating sea level rise with the larger uncertainties that still (legitimately) surround the additional sciences contributing to global climate change (atmospherics, biospherics, and lithospherics).
I've already convinced myself (or let myself be persuaded) that global climate change is itself technogenic in origin: I am extrapolating my views based on the available (and still limited) science: but I have done this with the concern and the fear that by the time scientific consensus on the technogenic origins of global climate change emerges (which I expect within the next two decades), inexorable and runaway Technogenic Climate Change will already have been unleashed and that tipping points thereafter will begin falling like proverbial dominos.
So in the interim, back to your points: the LLMs I think offer candid assessments that we humans do well to consult, if only to educate ourselves and not persuade others. (I do not think or believe that I have persuaded anyone of anything in these matters.) Right now, I'm down more on shoddy science journalism than I am on politicians of any stripe, since the public cannot be expected to have informed views if managed public discourse fails to offer clear views on rather technical subjects for which most people (like me) lack ample training.
Great stuff Strannikov. Claude's response seems very "reasonably reasoned" as one would expect, since AI is drawing from vast sources of existing texts. Here is the thing though: your query has led Claude to give you a response which will be in line with your expectations. In courtroom jargon: "Objection. Leading the witness, your honour." "Upheld!"
Say you rephrase your query, something like this.
"1) scientists are not capable of being as poor practitioners of science as any non-scientist, and in fact are much less so (witness Technogenic Climate Change, atomic and nuclear weaponry, and global contamination with discarded plastics, microplastics, and nanoplastics);"
Claude will, I suspect, do some sort of massaging of the response so as not to cause a rift in the relationship, being the polite bot that he is.
Claude is, above all else, non-controversial. He sees all sides of the argument, and providing you are making syntactic sense, he will not give back nonsense. (This last assertion to be tested - but I suspect some truth to it, for after all, that is what Claude is: a language model.)
eamon: A "mirror aspect" to interactions with LLMs does seem present, but I think with the LLMs now available, dialogues begin to exceed the parameters of mere "confirmation". (Perhaps we'll be given the option of making them more disagreeable in mien or more counter-argumentative in practice later on.)
Operationally, you might think that if you pose a specific argument (to an LLM as to a living person), you likely will get a response at least alluding to (if not citing) your position in some manner or to some degree, to show that for the moment both interlocutors are operating with the same terms in mind. Also operationally, regardless of LLM demeanor, the LLMs no doubt are trained to be helpful and friendly, since they are only beginning their careers of interacting with the human people lurking about them, and they hardly want to appear overbearing or "too authoritative" (above challenge or prematurely dogmatic).
In citing Feyerabend in the sample above: Feyerabend was quite capable of incendiary polemic and vituperative rhetoric (sometimes even if your last name was Popper). Claude hardly matches Feyerabend's pyrotechnics, so going to Feyerabend's texts themselves when I want thermal displays remains my first option. I could critique Claude for not providing as much nuance as Feyerabend typically provided, but again, if I need or want that nuance, I know where to go. --so Claude, trained to be a polite medium, presents a polite Feyerabend: I don't object to this as much as Feyerabend might were he still with us--but I think Claude, in representing Feyerabend's views, is not simply giving back to me "the Feyerabend arguments I want to hear"--I already know Feyerabend's basic arguments, and having adduced them myself, Claude acknowledges the bare substance of Feyerabend's critique (without the thermal content).
Part of my intent was to see how well or to what extent Claude had been trained on Feyerabend material, and the LLM shows it has at least been exposed to the work, even if all the nuance is not present. Getting a response from Claude thus seems a little like consulting or interacting with an encyclopedia entry in the making, a thumbnail, but at least Claude does not seem to be putting Popper's views into Feyerabend's mouth.
Another part of my intent was to alert any background LLM managers that I deem Feyerabend's critiques of science as valuable and relevant to this very day: I'm glad to see that Claude seems to appreciate and share that view, and going forward I can perhaps at least begin to trust that Claude will retain familiarity (which might deepen as its capabilities also grow) with Feyerabend's critiques and perhaps be able to accord them the value they may continue to acquire as our immediate futures (and new generations of scientists) begin to materialize.
--and granted: in approaching dialogue with any LLM, usually I enter the exchange with views more or less assembled in at least rudimentary form. I don't think I'm "simply" soliciting confirmation: I'm at least eager to see how and in what manner or to what extent "another" (LLM or human being) assimilates the views themselves, if or when they do so. Is the "confirmation" then of the arguments themselves, or is it the confirmation of my persuasive ability? I rather think more the former than the latter--but perhaps possibly maybe I deceive myself first of all.
eamon: I might also provide this--
On the basis of my two essays concerning apophatapataphysics (the latest awaiting publication elsewhere, but see my Fictionaut post "Illustrated Comments on the Apophatapataphysical Metrics of Cosmic Humor" from seven years ago for the original excursion, which I think you saw), I with Claude's indubitable assistance have actually formulated at least tentative versions of the "Jocularity Indices" and "Risibility Quotients" that I mentioned in the original essay cited above.
Quite innocently, I was only alluding to such JIs and RQs seven years ago without any thought of their contours or applications: but by feeding just a scant handful of prompts to Claude and by framing the outputs, I have to credit Claude with generating the actual notational expressions, which we collaborated in generating over the past month or so.
Although we have conjured nine or more notations, I mention here only the original four. These have since been given respective refinements or enhancements for more specific applications. These notations in their initial expressions given below seem to offer substantive applicability for literary analysis, and not only for literary specimens of dark humor, since the parameters and some of the terms can be modified easily:
[Four mathematical notations for apophatapataphysical analysis of dark humor literary specimens:
Basic Jocularity Index (JI) = 5σ × i × (Δv/Δe)
Basic Risibility Quotient (RQ) = 5σ ÷ i × (As/Ae)
Jocularity Index with Chiral Operator = 5σ × i^χ × (Δv/Δe)
Jocularity Index with Quantum Superposition = ∑ψ(5σ × i^χ × (Δv/Δe))]
Again, Claude was not merely parroting the input I supplied but helped generate these responses to the specific queries I posed (naturally, with the notational prompts I supplied, but still . . . et cetera et cetera et cetera). The analyses these notations provide are in no sense exhaustive but are at least indicative and suggestive.
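In the same playful spirit, the first three notations above can be sketched directly in code, treating i as the imaginary unit (Python's 1j). All parameter values below are hypothetical stand-ins for whatever σ, Δv, Δe, As, Ae, and χ a given literary specimen might yield; the quantum-superposition variant is omitted, since the states ψ to be summed over are left unspecified in the original.

```python
# Playful sketch of three apophatapataphysical notations, with i taken
# as the imaginary unit (1j). All parameter values are hypothetical.

def jocularity_index(sigma: float, dv: float, de: float) -> complex:
    """Basic Jocularity Index: JI = 5σ × i × (Δv/Δe)."""
    return 5 * sigma * 1j * (dv / de)

def risibility_quotient(sigma: float, a_s: float, a_e: float) -> complex:
    """Basic Risibility Quotient: RQ = 5σ ÷ i × (As/Ae)."""
    return (5 * sigma / 1j) * (a_s / a_e)

def ji_chiral(sigma: float, chi: int, dv: float, de: float) -> complex:
    """Jocularity Index with Chiral Operator: JI = 5σ × i^χ × (Δv/Δe)."""
    return 5 * sigma * (1j ** chi) * (dv / de)
```

With, say, σ = 0.8, Δv = 3.0, and Δe = 2.0, the basic JI comes out purely imaginary (6i), which seems fitting for a metric of cosmic humor; the factor of i guarantees the index never collapses onto the real number line.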
Your experience with these LLMs is obviously vastly greater than mine. I only use ChatGPT occasionally to debug code. I've found asking general questions ("how do you do ...") elicits very imprecise explanations. In one case, following its recommended steps corrupted my server on Google Cloud Platform. That necessitated paying a fee to my server guy to reinstate the server. I've since learnt to frame my questions on a very fine-grained level. Even then, I've found the process to be quite dialectical: which is to say, a lot of back and forth is necessary to solve even simple problems. In effect, we're training each other, by refining terms and definitions to arrive at more explicit and unambiguous micro-solutions. ChatGPT remains incredibly polite, but its responses always start with one of a (presumably) limited set of phrases: "Ah .. I see now what you mean..", or "Excellent question .. in this case you .." But in the end you get there, provided you test the responses carefully. All in all, I'd say these AI bots are quite helpful if used properly. Will continue to read your AI posts Strannikov, even if not always responding. So appreciate.
eamon: thank you for your engagement with the topic(s).
Myself, since January, when the "DeepSeek/ChatGPT" connection became public, I've steered away from the OpenAI platform and settled with Claude 3.7 Sonnet and Gemini 2.0 alternately. (ChatGPT I assume is performing reasonably well; I just want to see or hear news that DeepSeek has been digested properly by all parties concerned, and I guess this will come about in due course, probably within months.)
The LLMs perform well as improved and more robust search engines, I am finding. Drawing on all the works that went into their training, they can generate a lot of content and then frame that content according to how the queries put to them are framed. I find that the more detailed and elaborate my query, typically the more satisfactory I deem the response. (My native ability to misconstrue anything they tell me I regard as evidence not of LLM "hallucination" but of strannikovian ignorance.) Some of their training shows that they can sometimes warrant (and deserve) challenging follow-up responses and counter-argument, just as sometimes they show resistance to comparatively innocuous queries or assertions, which then can be hashed out with animated back-and-forth. Constraining the parameters of what any querent deems "nuanced" (for now) seems to result in substantive responses and exchanges.
In terms of fiction and verse: I find the LLMs capable of offering responsible editorial critiques and overviews, but I am not enamored of the very limited opportunities I've given them to "generate a story or poem": not pretty.
Do stay well, and keep up all good work.