by liberal japonicus
It seems like it's time to take a step back and see if we can come to some agreement about the use of AI here. This is primarily directed at Charles, which is highly unfortunate, because I don't want to make him an object lesson, but it seems unavoidable.
A while back, Charles fed a comment thread into ChatGPT and then linked to the summary. I was really taken aback by that. The idea of having a conversation and then having the other person say, well, I just fed everything we said into this machine, and here's what it said you said (oh, and it will continue to use your words and ideas in the future), was rather off-putting. It's not that what I write here would never have gotten into ChatGPT anyway; the whole model is built on scraping the net in its entirety, and I don't have the time or patience to do what would be necessary to ChatGPT-proof ObWi. But it seems like an escalation to intentionally and purposefully feed stuff from here into it. So I definitely think that is a step too far and would ask that Charles (and anyone else) not do that.
I didn't write this immediately when that happened, but it has been on my mind. But just now, Tony P. asked Charles:
Do you (or Reason, or Chat-GPT) accept that John D. Rockefeller amassed a huge personal fortune from his businesses? I have to assume you do, because you're not ignorant.
Charles took Tony's request as 'please use a prompt to feed this question into ChatGPT'. I don't think that was what Tony intended (Tony is welcome to clarify this). I think the point was that Rockefeller's wealth is a fact independent of the 'opinions' of Charles, Reason, or ChatGPT, not a request to have ChatGPT explain how Rockefeller's wealth should be considered differently in different contexts.
Which has me wonder what precise line to suggest here at ObWi, bearing in mind that I can't force anyone to behave in a particular way, especially in regard to chatGPT, except to ban them, which would be overkill. So it seems important to discuss parameters.
If I had my druthers, I'd prefer that when people tackle a question, they not simply link to or reprint ChatGPT output. The link is particularly insidious, because it pulls anyone who wants to engage with you into the ChatGPT ecosystem. If someone abstains from clicking, you can come to believe that they won't engage with your arguments, when they simply want to avoid giving Sam Altman anything. However, I also think it is important to acknowledge using ChatGPT, which sounds like a bit of a catch-22.
I should also add that I use ChatGPT quite a bit, for things like 'please give me 10 examples of this grammatical pattern', 'please outline this student's paper', 'please make grammatical corrections to this essay without changing the content or the style'. I understand that this opens me up to the charge of hypocrisy: here I am, feeding student work into ChatGPT and getting in high dudgeon when it's my stuff. Folks are welcome to discuss that; I am a bit uneasy drawing lines here for other people and not doing it for myself. But there seems to be a difference between exchanging and offering my own opinions about something, as we do here, and a situation where I am teaching writing to second-language learners and trying to get more examples or show them how to use the tool.
Trying to boil this down, I make the following suggestions:
-don't simply post ChatGPT (or any other AI) output verbatim with no comment
-if you do feel that the output is better than what you could have written, at least add your own points to it
-if you are going to use an LLM, acknowledge any points that you get from it
-realize that in doing so, you are making those points yours, so saying 'well, that's chatgpt, not me' is essentially an abdication of responsibility and makes the process of exchanging opinions much more difficult
This is all a bit unfocused, so I'm hoping that others might weigh in so we can reach an understanding of what the boundaries are. I have to wonder: if there were some variant Charles from another reality, as committed to syndicalism as this reality's Charles is to libertarianism, who employed ChatGPT in the same way, would I come down as hard? On the other hand, it seems that using ChatGPT in this way is a perfect example of why libertarianism is such a mess: a huge number of sub-rosa assumptions are fed into it, and it miraculously comes up with justifications that seem robust until you start examining those assumptions. Used this way, ChatGPT is just another tool for obscuring those assumptions so they are never questioned. Discuss.
To the extent that I understand you, I think that this is similar to the point I attempted to make in the previous thread. I have an interest in what others here have to say. I may disagree with them, but I am interested nonetheless.
However, I have no interest in what an LLM may generate, based on vast numbers of examples of which words occur together, combined with the particular phrasing of the prompt that triggered it. Hence my request to Charles to tell us what he thinks, rather than what ChatGPT generates. I remain interested in that.
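(To make the 'words that occur together' point concrete, here is the principle in miniature: a toy bigram generator that 'writes' by sampling which word has followed the last one in its training text. This is only a sketch of the statistical idea; real LLMs use enormous contexts and learned weights rather than a lookup table, so don't take the details as a description of ChatGPT's internals.)

    # Toy bigram generator: produces text by sampling which word follows
    # the previous one in its training corpus. A sketch of the idea only.
    import random
    from collections import defaultdict

    corpus = ("the model predicts the next word from the words before it "
              "and the words before it come from text scraped from the net").split()

    follows = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        follows[a].append(b)  # record every observed successor of each word

    random.seed(1)
    word, out = "the", ["the"]
    for _ in range(12):
        nxt = follows[word]
        word = random.choice(nxt) if nxt else random.choice(corpus)
        out.append(word)
    print(" ".join(out))  # fluent-ish, entirely derivative, and nobody's opinion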
But it seems like an escalation to intentionally and purposefully feed stuff from here into it. So I definitely think that is a step too far and would ask that Charles (and anyone else) not do that.
I certainly can refrain from doing that.
I’m not as articulate as everyone else here. So I tend to over-rely on links, quoted material, and LLMs for content and engagement.
Trying to boil this down, I make the following suggestions:
I can follow them.
I have never used an LLM to write or comment any place, but I will suggest that the more unattributed LLM text appears on the internet, the more future attempts to train off internet content will be “poisoned” by the statistical biases created by eating their own dog food.
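(A toy illustration of that dog-food effect, with the obvious caveat that a Gaussian fit is standing in for a trillion-parameter model: fit a distribution to data, sample from the fit, refit to the samples, and repeat. The spread typically collapses generation by generation.)

    # Toy "model collapse" demo: a model repeatedly retrained on its own output.
    # The "model" here is just a fitted Gaussian; only numpy is assumed.
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(0.0, 1.0, size=10_000)  # stand-in for human-written text

    for gen in range(31):
        mu, sigma = data.mean(), data.std()
        if gen % 5 == 0:
            print(f"generation {gen:2d}: mean={mu:+.3f} std={sigma:.3f}")
        # Each generation trains only on a finite sample of the previous
        # generation's output, so tail mass (diversity) is slowly lost.
        data = rng.normal(mu, sigma, size=30)

    # The std typically drifts toward zero: self-training narrows the
    # distribution, which is the statistical "poisoning" described above.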
By the way, I’ve been running DeepSeek R1 Zero at home on a far-from-state-of-the-art desktop PC, and it works remarkably well. The AI industry is trying very hard to dismiss this as a parlor trick. In reality, lots of people are quietly starting to panic. Their multi-billion-dollar valuations were based on the idea that smaller companies could never compete with their hyper-expensive computing resources. They were clearly covering for inefficiencies by throwing more hardware at the problem. Apparently no one was willing to waste time doing some relatively simple optimizations.
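(If anyone wants to replicate this, the least-painful route I know of is a local runner such as Ollama, which serves distilled DeepSeek-R1 variants over a small HTTP API. The sketch below assumes a default Ollama install listening on localhost:11434 and a pulled deepseek-r1:7b model; the model tag is an assumption on my part, so adjust it to whatever your hardware can hold.)

    # Minimal local-LLM query via Ollama's HTTP API. Assumes the Ollama
    # server is running with defaults and `ollama pull deepseek-r1:7b`
    # has already been done. Standard library only.
    import json
    import urllib.request

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "deepseek-r1:7b",
            "prompt": "Summarize the tradeoffs of running an LLM locally.",
            "stream": False,  # one JSON blob back instead of a token stream
        }).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])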
I really appreciate Russell’s trick of appending -ai to any search.
And I’ve got a colleague who is also running R1 Zero, and I’m thinking of setting it up and having students use that.
Having an installation like this handle something like customer-service chat, clearly delineated as a bot, is what I think should happen. The problem is that when you do that, Westerners will often try to figure out how to jump over the guardrails. I find it telling that when Microsoft deployed the Tay.ai Twitter chatbot in 2016 with a 'repeat after me' function, within 24 hours it was producing tweets that were racist and sexist.
https://www.journals.uchicago.edu/doi/abs/10.1086/715227?journalCode=signs
On the other hand, the Chinese chatbot Xiaoice was deployed two years earlier and is still in use:
https://www.scmp.com/tech/big-tech/article/3266497/chinas-ai-giants-cosy-virtual-companions-loneliness-drives-chatbot-revenue
I think there is a pathology for Westerners in there, but I'm not sure how to explain it.
Very much what wj said @08.16 in the first comment to this thread.
Charles, I have no doubt whatsoever that your willingness to cooperate with lj’s suggested rules is universally welcome on ObWi, and if I’m wrong I’d be interested to hear it! But what I’d like to say to you is this: we are most of us a bunch of oldsters, and speaking for myself I find I am losing vocabulary almost daily. I now frequently have to either find a way to talk around what I want to say in order to convey the meaning of a word which hovers just out of sight, or even use a thesaurus for almost the first time in my life until the word that was hovering just out of sight leaps out at me.
We all here seem to me to have different strengths and viewpoints, and (again speaking only for myself) I am a lot more interested in yours than in some much diluted downstream version of them in ChatGPT.
There were reports that when DeepSeek was released, if asked which model it was, it responded that it was ChatGPT. It now says it’s DeepSeek.
The versions of Deepseek available through Poe are open-source models hosted by a US company. When asked, “Which large language model are you?”, they claim to be versions of ChatGPT. When I gave one of them a prompt that I had given to the version of ChatGPT I’m using, the response formatting was almost identical and the content was very similar.
Deepseek appears to have reverse-engineered or copied a version of ChatGPT, distilled it into a much smaller model, and used other innovations to give it orders-of-magnitude smaller compute and memory requirements while retaining almost all of the capability of the ChatGPT model.
I now frequently have to either find a way to talk around what I want to say in order to convey the meaning of a word which hovers just out of sight, or even use a thesaurus for almost the first time in my life until the word that was hovering just out of sight leaps out at me.
I’m having the same problem much more often. Even simple words. In a recent conversation, I couldn’t remember the name of Elon Musk’s car company. My short-term memory isn’t getting any better either.
Deepseek appears to have reverse-engineered or copied a version of ChatGPT, distilled it into a much smaller model,
There has been a ton of coverage, but commentators have generally avoided saying DeepSeek ‘copied’ ChatGPT, describing it instead as taking the output of ChatGPT to create their own version.
https://www.ft.com/content/a0dfedd1-5255-4fa9-8ccc-1fe01de87ea6
“It is a very common practice for start-ups and academics to use outputs from human-aligned commercial LLMs, like ChatGPT, to train another model,” said Ritwik Gupta, a PhD candidate in AI at the University of California, Berkeley.
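(For the curious, the mechanics are mundane: collect prompt/teacher-output pairs, then fine-tune a smaller model on them as ordinary supervised data. Below is a sketch of the collection half; teacher_generate() is a hypothetical stand-in for whichever commercial API is being sampled, not a real client.)

    # Building a supervised fine-tuning set from a stronger "teacher" model.
    # teacher_generate() is a placeholder for a real API call. Output is
    # JSONL in the common {"prompt", "completion"} shape.
    import json

    def teacher_generate(prompt: str) -> str:
        # Hypothetical stub: in practice this would call the teacher's API.
        return f"[teacher answer to: {prompt}]"

    prompts = [
        "Explain proof of work in two sentences.",
        "What is knowledge distillation?",
    ]

    with open("distill_train.jsonl", "w", encoding="utf-8") as f:
        for p in prompts:
            pair = {"prompt": p, "completion": teacher_generate(p)}
            f.write(json.dumps(pair, ensure_ascii=False) + "\n")

    # A student model fine-tuned on enough such pairs inherits much of the
    # teacher's style and behavior -- which is why distilled models sometimes
    # claim to *be* the teacher when asked what they are.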
Given OpenAI was so cavalier with copyright, it’s like the kid who killed his parents and then threw himself on the mercy of the court because he was an orphan…
Which is the actual textbook definition of chuzpe.
Chuzpe... ChatGPT – Is there a connection or coevolution?
Deepseek appears to have reverse-engineered or copied a version of ChatGPT, distilled it into a much smaller model
And so, follows the pattern of human invention for about the last 100,000 years.
This should be of little concern to anyone not depending on AI tech IP for their bread and butter. For those who are, time to go back to the drawing board.
c’est la vie.
I used to work with a guy who would say “machines should work, people should think”. I always liked that saying.
“Machines should work, people should think”
IBM (advertising?) slogan from circa 1970. Some of us are old enough to remember it. Interesting that it passed into common use.
Hartmut: I’ve never seen it spelt that way before. Assuming that’s what you wanted to say, I’ve always seen it as chutzpah (or very slight variations thereon).
I am not clear if Charles intended the reverse-engineering comment as criticism. Assuming DeepSeek is the reverse-engineered version of ChatGPT: what Russell said. It is amazing that they could take it and make it vastly more efficient. If I could reverse-engineer a car engine and get one million miles per gallon, I think I would deserve some credit.
China has a well-worn record of attempting to fake it until they make it. DeepSeek works. But they may have used a lot more and newer hardware and spent a lot more money than they’re claiming.
Maybe DeepSeek only cost $6 million after they shorted US tech stocks before releasing DeepSeek.
And so, follows the pattern of human invention for about the last 100,000 years.
Sometimes knowing that something can be done is half the battle.
Even simple words. In a recent conversation, I couldn’t remember the name of Elon Musk’s car company.
Yup. Just the same sort of thing with me. It’s terrifying/horrifying. Of course, with easy access to Google, it’s usually quickly dealt with. The two phenomena may not, of course, be unrelated.
GftNC, I used the German spelling by mistake.
I think “chutzpah” is Hebrew as transliterated by English speakers, “chuzpe” is Yiddish, as spoken in Germany and transliterated by German speakers.
Maybe DeepSeek only cost $6 million after they shorted US tech stocks before releasing DeepSeek.
Don’t hate the player, hate the game.
I do understand the possible negative outcomes (for us) if China outstrips us in tech, but if they found a way to make a useful AI platform that doesn’t require repurposing obsolete nuclear power plants to run it, I’m gonna call it a win, net/net.
Now it’s our turn to take what they’ve done and improve upon that. Or come up with something that addresses the same need in a different way.
I wish someone would make the same optimization to crypto farming, for the same reasons.
But what I really wish is for folks to use all of these gee-whiz gadgets for something other than making a handful of antisocial weirdos un-freaking-believably wealthy.
My understanding is that LLMs are trained on vast amounts of internet text, and when asked a question they boil down statements from that material which seem relevant into tolerably well-written summaries. They’re not intelligent in the sense that they apply reasoning to what they’re reproducing. Furthermore, in some circumstances they’ll extrapolate from what they’ve found in an unconsidered way: that is, they make stuff up.
Recently, a well-known online chess commentator has run a series of chess games between LLMs, for amusement. The contestants produce plausible-sounding strategic analyses, but tactically they’re beyond hopeless: they’re merely playing moves which they’ve found recommended in similar-looking positions. Also, they sometimes play illegal moves. That’s all their methods can do.
LLMs might have a use when checking for ideas one may have overlooked. But I see no value in posting their output directly, and I agree with discouraging that.
I’m interested in the opinions of thinking people who disagree with me. If they want to use LLMs to gather their thoughts, so be it. It should go no further.
I wish someone would make the same optimization to crypto farming, for the same reasons.
One of the necessary conditions to make the original cryptocurrencies work is that mining them be extraordinarily difficult. If not, then people can just crank them out and the coins become worthless. That’s quite different from meme coins (e.g., $TRUMP tokens). There, crypto only ensures that you “own” one of a limited number of coins. In that case, it’s not mining that limits the supply; it’s a matter of how much money Trump wants to “print” and sell. Bitcoin is nominally not fiat currency; $TRUMP tokens are.
The Proof of Work (PoW) method used by Bitcoin to ensure the validity of the blockchain takes a lot of energy.
Proof of Stake (PoS) methods don’t use much energy.
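(A toy version makes the energy point concrete: miners grind through nonces until a hash clears a difficulty target, and the only way to clear a harder target is to burn more hashes. Each extra leading zero below multiplies the expected work by 16; Bitcoin's real target is astronomically harder. A sketch, not real mining code:)

    # Toy proof-of-work: find a nonce so that sha256(block + nonce) starts
    # with `difficulty` zero hex digits. The exponential cost in `difficulty`
    # is the whole energy story.
    import hashlib
    import itertools

    def mine(block: str, difficulty: int) -> tuple[int, str]:
        target = "0" * difficulty
        for nonce in itertools.count():
            digest = hashlib.sha256(f"{block}{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce, digest

    nonce, digest = mine("obwi-demo-block", difficulty=4)
    print(nonce, digest)  # expected ~16^4 hashes; add a zero for 16x the work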
As someone who has been teaching writing for 20 years now, and who spent time as a DB analyst before that, I still maintain that LLM AI is a net loss for all involved. It works now primarily because it is harvesting the intellectual and communicative work of centuries and has been trained to curve-match the inputs that its trainers have encouraged.
As an educator that teaches critical thinking and communication, what matters most for my students is not the product that they produce for my class, but rather the critical perspectives and habits of mind that they develop while producing their writing. Eloquence is not only a matter of finding the best words to fit one’s thoughts or information; eloquence arises from the process of assembling, analyzing, and synthesizing your information with new insight in response to changing contexts.
All of the current studies of AI and creativity, or AI and summary, are being evaluated and trained by people who developed those skills without the benefit of AI being there to speed and prettify the project. They can take the AI output and improve it because they can do the critical-thinking work themselves and know how to recognize when AI is bullshitting or glitching.
A generation into AI integration, we will have lost most of the critical-thinking abilities that support the higher-order work and be left mostly with users who are only capable of mid-level work.
The struggle is the learning. No struggle, no learning.
My students whose notes consist of furiously typed transcripts of their readings and my in-class explanations never transfer much of that from short-term to long-term memory, because they have never had to work to integrate those ideas, synthesize them, and make them their own. Writing notes by hand helps, but the real magic of handwriting comes from having to find ways to shorthand and meta-comment, to connect, and to formulate questions. The slowness of handwriting forces more of that synthesis and transference.
I really do fear that I have at most five more years of effective teaching before LLMs become so embedded in our systems that we will be satisfied with the shallow eloquence of received opinion and no longer value the struggle to make deep insight communicable.
I hope the economy survives longer than that. I’m going to need the pension I have contributed to for the last twenty years when our civilization hits its twilight and decides that my wife and I are no longer a necessary part of a high-level education.
Just for fun, here is Alex O’Connor getting ChatGPT to agree that God exists.
https://youtu.be/wS7IPxLZrR4?si=PQuiGV8Q7svK3PLW
It’s 20 minutes long, just FYI.
Alex O’Connor may be known to GftNC or others of our British friends as a minor celebrity in anti-monarchist circles, or maybe in atheist circles. He is impressively articulate. Also, judging from this particular example, he has a sly sense of humor.
–TP
Thank you, Tony P, I did not know of this character. I have to admit, I was more interested in Ground News than his game with ChatGPT. Weirdly, it (Ground News) was recommended to me recently by someone who believes in various conspiracy theories. I will have to check it out.
Just sent out emails to three students about suspected AI use in their written responses for class. All three are international students who are struggling with the demands of a writing course in a language they are not yet fully comfortable inhabiting. They get lost in the reading and panic. They hunt down the reading and run it through translation software. They put the story through an LLM with a prompt that asks the LLM to analyze it, then put the output through translation as well. Then they write their responses in their native language, run it back through a translation program, and use that to spruce up their response.
It sounds pretty good to an undergraduate who is not yet comfortable with critical thinking, but I see a lot more promise in the more limited responses of their peers who are struggling through the language barrier and giving thoughtful responses that stretch the seams of language skill that is too small to contain their insight.
I could turn all three of them over to the academic-honesty people. I know that all three are freaking out. I know that they are all three deeply embarrassed to have been singled out for their language struggles. I just need to get them to knock it off with the planet-killing CliffsNotes so that we can get them to make some mistakes that they can learn from and not spend all their time trying to avoid error.
It’s a real struggle, getting them to trust you enough to show their weaknesses, and a delicate balancing act trying to achieve that while still treating their work with enough rigor to prepare them for the mess of a world we are leaving for them.
Which is mostly just my own idealistic struggle. If the tech bros and the university administrators have their way it will be like a culinary school at which the apprentice chefs spend all their time preparing those meal-in-a-box kits, and using prepared sauces from the restaurant supply.
get them to make some mistakes that they can learn from and not spend all their time trying to avoid error.
I don’t have your experience in an academic setting. But I have found, when I have been teaching elsewhere, that one of the biggest hurdles for all of my students is to get across the idea that it is OK to try something new and make mistakes. In fact, it is a positive good, because they learn faster.
I wonder if, somewhere in their earlier experience, they have had it impressed on them that any mistake is to be avoided at all cost. And where, in their previous experience, that was taught. If we can find that, there may be a way to change it, to the benefit of the students.
I see something that may be related when junior staff are being given instructions for some new task (say coding a module for software we are developing). Routinely, we ask if they have understood the assignment. The idea being for them not to waste time and resources doing the wrong thing. Almost always, they answer Yes. Regardless of whether they actually do.
Well, this is right where I am, so I feel your pain. Unfortunately, the Japanese students I teach might be a step behind your students, because they don’t understand that they are supposed to be presenting me with something they have come up with, an idea that they think is interesting. That ole Japanese form over function.
I just had a session with a student who is writing about Black Widow and feminism. She’s written these detailed summaries of all of Black Widow’s appearances in the MCU and has looked at the origin stories where she is a Russian femme fatale. I’m sure that she’s written everything in Japanese and then run it through DeepL and stitched it together; the chunking in the paper is what tells me she is doing that. If I could get one authentic insight, one idea out of her, I’d be over the moon. But the process of writing is so hard for her (and others) that they can’t sit down with a pencil and paper and write anything, and if I force them to do that, they will probably be scared of writing for the rest of their lives. I’m really at a loss, and this has become my annus horribilis, though I have a bad feeling it marks the start of anni horribiles.
wj: I wonder if, somewhere in their earlier experience, they have had it impressed on them that any mistake is to be avoided at all cost. And where, in their previous experience, that was taught. If we can find that, there may be a way to change it, to the benefit of the students.
In my case, and I suspect in many people’s, it was taught at home, from my earliest days, sometimes explicitly, sometimes implicitly, and mostly unconsciously on everyone’s part.
Additionally in church, of course. And then at school.
So good luck rooting it out.
Responding to nous and lj: I’ve done a lot of writing and editing, the latter at work; and as a long-term volunteer for a non-profit; and for friends and family (including many indie-published books).
One thing I’ve noticed is that people whose “formal” writing is stilted, unclear, and only sketchily grammatical can often write perfectly lucid and more or less grammatically correct emails about personal matters to friends.
I think this is partly because people are daunted in advance by formal contexts, and partly because they don’t actually know what their thought train is or should be and therefore can’t possibly write it clearly on the first try.
Sometimes we (“we” but certainly I) figure out what we think in the process of trying to write it down….
Funny this topic should come up — I am rereading Middlemarch for the umpteenth time, and as soon as I started it I was comforted and delighted yet again by the long, complex thought trains embodied in long, complex, well constructed sentences.
For whatever mix of reasons involving aging, interest, attention, and the state of the world, I’m doing almost no new reading these days. (New mysteries by my favorite authors are an exception; also anything by David Mitchell, though the gaps between his books are measured in years.) Hopefully no one is going to come along and confiscate my shelves of old paperbacks.
In my case, and I suspect in many people’s, it was taught at home, from my earliest days, sometimes explicitly, sometimes implicitly, and mostly unconsciously on everyone’s part.
Additionally in church, of course. And then at school.
Alas, in this as in so much else my parents apparently failed to get the memo. When we tried something and failed, they would walk us thru the right answer/technique/etc. But the only thing we got negative feedback on was failing to try.
My parents didn’t mind their kids learning from experience as long as they did the learning before the experience.
figure out what we think in the process of trying to write it down….
The classic German quote on that is: “How can I know what I think before I read what I write?”
Kleist wrote a whole essay on the gradual formation of thoughts while speaking. He could almost as well have used "writing".
Cato the Elder is quoted as saying "Rem tene, verba sequentur!" (Hold on to the thing/topic and the words will follow). Again, writing could easily be substituted for talking.
“Thinking aloud” follows the same pattern.
IIRC, Joan Didion also said she wrote to find out what she thought.
There are a few schools of parenting that revolve around demanding that a kid works to meet an external standard of expected performance, and that heaps on shame and punitive measures with the thought that pressure makes diamonds. I’ve read dozens of student papers where the writers learned to hate music because of the parental pressure to achieve technical mastery of an instrument with only a mechanical and extrinsic sense of expression.
These kids have been raised in an education system that does the same with their writing.
And while I need to convince them to trust me and to take risks in order to further their learning, I also have to train them to understand the signs that another teacher or boss comes from that other school so that they can tell when trust may not be warranted. It’s a pretty common trait to run into on many parts of the campus.
This brings up a crucial conversation regarding the role of AI in online discussions. It appears that transparency and personal responsibility in arguments supported by AI are essential; otherwise, it risks obscuring the true value of genuine human exchange.
Here is another thing to be aware of, and it supports what I’ve been telling my students: LLM chatbots reinforce the status quo:
https://www.anthropocenemagazine.org/2025/02/how-ai-narrows-our-vision-of-climate-solutions-and-reinforces-the-status-quo/
The only thing that changes the status quo is imagination. And that’s something that LLMs are inherently incapable of.
https://callingbullshit.org/
https://thebullshitmachines.com/table-of-contents/index.html
Sorry for bare links … interesting offerings from BJ commenters (last night, I think).
Meanwhile, the CSUs are surrendering to their big tech masters and throwing faculty and students to the AI disruption industry wolves:
https://www.calstate.edu/csu-system/news/Pages/CSU-AI-Powered-Initiative.aspx
I’m betting the CSU Faculty are deeply split on this, and I would be surprised if this does not lead to a huge brain drain – though where they will go is a mystery.
Only time will tell if the UC Regents decide to follow suit and put their marker on the AI bet as well. I don’t have a lot of faith in them to do the wise thing. The Regents come from the same tainted pool that the AI consultants swim in.