Taxacom: no artificial 'intelligence' yet
Jared Bernard
bernardj at hawaii.edu
Mon Feb 3 14:26:29 CST 2025
I'm no fan of AI. While it does make sense that it's only as good as its
sources -- it would be much scarier if it could think for itself -- it
seems to me that it often gets things wrong. I notice this in the Google AI
overview, which I find really annoying but cannot disable. If you check
the sources, you find that the phrases are often misattributed.
The problem is that non-experts can't tell when it's wrong, so people
mistakenly think it's just as good as an expert. What I find really
unnerving is that universities are embracing ChatGPT to help students
write assignments, thereby ensuring we'll get generations who can't think
for themselves. Because of the way large language models work --
predicting the next word in a phrase from patterns in text scraped off
the web -- they often plagiarize text from papers, albeit badly. My
spouse is a botanist, and AI has regurgitated her own work back to her,
or lifted phrases from her website, and it doesn't always show its
sources. At least journals are putting limits on how authors can use AI,
if at all.
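The next-word prediction mentioned above can be sketched with a toy
bigram model in Python. To be clear, this is only an illustration under
invented data: the corpus and word choices are made up, and real large
language models use neural networks trained on vast scraped text, not
raw counts like this.

```python
# Toy illustration of next-word prediction: count word-pair (bigram)
# frequencies in a tiny invented corpus, then predict the most frequent
# follower of a given word. Real LLMs do something far more elaborate.
from collections import Counter, defaultdict

corpus = (
    "the weed risk is high the weed risk is low "
    "the weed spreads fast"
).split()

# Count how often each word follows each other word.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the most common word seen after `word`, or None."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("weed"))  # "risk" follows "weed" twice, "spreads" once
```

The point of the sketch is that such a predictor can only echo patterns
present in its sources, which is exactly why output ends up looking like
badly plagiarized source text.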
My spouse has studied whether AI can at least be useful for doing
time-consuming weed risk assessments (if there's ever one task you'd
rather have automated, that's it) and found that it can't. So you would
think that, for now, our jobs would be secure, except that non-experts
can't tell the difference (or don't care). Plus, OpenAI is developing
"Deep Research", which I gather is supposed to actually replace
researchers!