Caution to newbies and A.I.

cdoc42

New member
If you are new to reloading and you research information with Microsoft's "Bing" A.I. (or perhaps any such system), be sure to do further research to verify what you are told.

I have encountered A.I. errors in medical information, and I just encountered one in reloading. I asked for a comparative burn rate between H870 and US869.
The reply I got said H870 is 146 and US869 is 173, so US869 is slower than H870.
The problem is that this data was extracted from two different burn rate tables. One lists a total of 173 powders, and H870 is not on it (because it is no longer available); on that table, US869 is 173.
The other table lists a total of 150 powders, and H870 IS on it at 146.

Interestingly enough, US869 is also on the 150 table and it is 149. So it IS slower than H870 at 146.

Why the AI didn't just report from the 150 table is beyond me. But you simply cannot use the comparison the AI gave at all. It is a completely useless reply.
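The pitfall here is worth spelling out: a position on a burn rate chart is only an ordinal rank within that one chart, so positions pulled from charts of different lengths cannot be compared. A minimal sketch of the idea (the tables below are illustrative placeholders built from the numbers in this post, not real burn rate charts):

```python
# Ordinal ranks are only meaningful within a single table.
# Example tables built from the ranks mentioned above; NOT real burn rate data.
table_150 = {"H870": 146, "US869": 149}   # positions out of 150 powders, fastest first
table_173 = {"US869": 173}                # positions out of 173 powders; H870 is absent

def compare(powder_a: str, powder_b: str, table: dict) -> str:
    """Compare two powders only if both appear in the SAME chart."""
    if powder_a not in table or powder_b not in table:
        raise ValueError("Both powders must come from the same chart")
    # Lower position = faster on charts that list fastest powders first.
    faster, slower = sorted([powder_a, powder_b], key=table.get)
    return f"{slower} is slower than {faster} (ranks {table[slower]} vs {table[faster]})"

print(compare("H870", "US869", table_150))
# -> US869 is slower than H870 (ranks 149 vs 146)

# Mixing ranks from different charts, as the AI did, is rejected here:
# compare("H870", "US869", table_173)  raises ValueError
```

The guard clause is the whole point: the A.I.'s mistake was silently doing the comparison that this sketch refuses to do.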
 
Expecting "artificial intelligence" to give you good answers in relation to reloading cartridges seems akin to tying your own shoelaces together and being surprised when you trip.

Bad idea.
Why one would think such could be helpful is beyond me.
AI can do a few things well - if it was trained on those things. Everything else, it does poorly; because it was not trained on those subjects.
 
There are multiple burn rate tables easily available on-line. It would never have occurred to me to ask artificial "intelligence" to generate information I can easily look up directly from reputable and reliable sources.
 
AI uses the searchable info online to create answers.

In one test, it passed the bar exam with a very respectable score, passed the medical boards with average scores, and failed the professional engineer exam. Interestingly enough, it also failed the written plumber's exam.

When the internet is riddled with folks stating things that are not true, especially politicians, actors, and TikTokers, what do you expect? Junk in, junk out.
 
An attorney recently used AI (ChatGPT) to prepare his brief in an actual lawsuit. To check it, he just asked ChatGPT if the case law citations were real, and ChatGPT replied, "Yes." So he submitted the brief.

Problems arose when the opposing counsel received the brief and started looking up the citations. He couldn't find them, so he reported it to the judge. The judge's staff couldn't find them, either -- because they didn't exist. ChatGPT had simply made them up. The attorney in question was censured, his law firm was fined, and he's lucky he didn't lose his license entirely.

AI is still in its infancy and should NOT be relied upon for anything, IMHO.
 
cdoc42 said:
Interestingly enough, US869 is also on the 150 table and it is 149. So it IS slower than H870 at 146.

Consider: what you are looking at are RELATIVE burn rates. Nowhere on those lists are the actual burn rates, or any information about the AMOUNT of difference in burn rate, or whether it is significant.

Sure, 146 on the list is faster than 149. HOW MUCH FASTER/SLOWER is one than the other? We don't know, and the list doesn't say.
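To make that point concrete: a rank is ordinal, so the gap between positions says nothing about magnitude. A quick sketch using invented relative-quickness values (purely illustrative, not measured powder data):

```python
# Hypothetical relative-quickness values, made up for illustration only.
# The rank order tells you WHO is faster; only underlying values tell you BY HOW MUCH.
quickness = {"PowderA": 32.0, "PowderB": 31.9, "PowderC": 25.0}

# Build the chart: highest quickness first, as burn rate lists do.
ranked = sorted(quickness, key=quickness.get, reverse=True)
for rank, name in enumerate(ranked, start=1):
    print(rank, name, quickness[name])

# Adjacent ranks can hide very different gaps:
gap_ab = (quickness["PowderA"] - quickness["PowderB"]) / quickness["PowderA"]
gap_bc = (quickness["PowderB"] - quickness["PowderC"]) / quickness["PowderB"]
print(f"A vs B: {gap_ab:.1%} faster; B vs C: {gap_bc:.1%} faster")
# -> A vs B: 0.3% faster; B vs C: 21.6% faster
```

Ranks 1 and 2 here are nearly identical powders, while ranks 2 and 3 are worlds apart -- yet on the chart both pairs look like "one step."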

GIGO (Garbage In, Garbage Out) is still the foundation of all computer programs. IF the data the program looks at or runs on isn't correct or accurate, the results won't be, either.
 

It isn't impossible that ChatGPT "found" the citations in a work of fiction somewhere online. Considering the "matches" I get from various search features (Amazon is atrocious at returning things that do not match the search criteria), if you don't put a lot of constraints on where the idiot-savant is going to look for details, you may get more "idiot" than "savant" in the response.
 
ballarddw said:
It isn't impossible that ChatGPT "found" the citations in a work of fiction somewhere online.
That may be possible, but it's irrelevant. AI gave the attorney bogus information that got him in a lot of trouble.

The lesson is that AI is not something that can be trusted or relied upon when factual, objective information is required. The consensus of several articles and a YouTube video on the lawyer case was that ChatGPT simply fabricated the bogus case law citations, but it doesn't matter. Either ChatGPT lied (fabricated the citations) or made a serious mistake (mistaking fiction for fact). And then, when asked specifically if the citations were real, ChatGPT lied (or was "mistaken") again, assuring the attorney that the citations were genuine.
 
Burn rate charts are approximations and are not driven by actual specific data. Applying statistical analysis to them and calculating a value is pure fiction.
Burn rate charts are good for looking at, but they are not, and never will be, factual data.
Personally, I think any intelligence these idiots ever had is artificial, and tainted with double-dosage dumb-@$$ wrong. Food fights make more sense.
 
totaldla said:
That article is spot on, and the problem existed before computer programs came to be called "artificial intelligence." The distinction is basically just a matter of degree.

In real life, my secret identities are (1) architect and (2) building inspector. I hold licenses in both fields. Architects study structural engineering as part of our education, and we are legally allowed to design structures. Except for very simple buildings, mostly we don't -- we design what it's going to look like and then we hire a structural engineer to handle the nuts and bolts of making it stand up.

In structural engineering, trusses are considered to be "indeterminate" structures, which is a simple way of saying they can't be reduced to the application of a formula. In my structures class, we learned how to solve trusses using a graphical approach. Today there are computer applications that design trusses -- typically the ubiquitous wood trusses you see being put up as roofs on houses and smaller wood-framed buildings. It's a given that such programs are supposed to be used by people with sufficient education and experience to be able to recognize when the application generates a result that just doesn't look right.

About a year ago I reviewed plans for a house. I questioned the structure in a couple of places. The builder got upset and insisted he had built that same house a dozen times before, and nobody had ever questioned it. My boss backed me up. The builder called his designer (a woman who is not licensed as an architect or engineer) and asked her to provide the printout for the design of the structural members in question. She sent the response by e-mail -- with the information that she had made an error when she input the parameters initially, and that the structure as shown on the plans was not strong enough to meet code requirements.

Oops.

Garbage in, garbage out. With AI as it stands today, you can manipulate the result by changing the input prompt. I participate on a writers' forum that has an entire sub-area dedicated to discussing AI. Much of the discussion is about exactly that -- how to create a prompt to generate the most useful result.
 
Marco Califo said:
Is this discussion even needed?
I don't think so.

Absolutely. People are seeing how AI can process information and spit out pretty good answers. The trouble is that the internet only contains information down to a certain level of detail, and that level is not very deep. Often we are told to do something general, like trim cases, but not what cutter angle, material properties, rotational speed, jig setup, etc. So AI doesn't know much of what is required to do stuff. Even things that seem detailed, like car repair videos, never have part numbers or how hard to swing what weight of hammer.

Then it lacks exclusion. If I tell you AA9 is a great .300 Win Mag powder, you exclude my info because I am an idiot. You likely wouldn't even say why, as I would be too big of an idiot to respond to. AI could then share this data with another page or user.

How many AI-generated pages do you trip over when searching for stuff? You click away. AI processes this as a data point. Too much garbage out there.

Oddly, I have asked ChatGPT for some risky reloading information, and it seems to know to limit specific answers at this time.
 
IMHO, AI isn't some revolutionary binary brain/thinking system -- it's simply the net result of decades of companies like Google and Facebook sweeping up all YOUR data and warehousing it in massive data farms for processing and making money from. It is a concerted effort to capture and store "all there is to know," fed by everyone and anyone who interacts with the internet in any way. That is the backbone of AI.
 
I expect AI to start to feed upon itself. Errors in one AI system will perpetuate in all the others. It could become exponential.

Also, AI cannot experience things in a human way. It's the experiences that make us able to derive conclusions and suspicions about those same conclusions. We are then able to question everything. If we choose to do so. But it's our experiences in living and in life that make it possible for us to do so.

AI can't do that. Doubtful it ever will.

--Wag--
 
Wag said:
I expect AI to start to feed upon itself. Errors in one AI system will perpetuate in all the others. It could become exponential.

…but how will we as a society respond to the fatalities? Think about Tesla…on some level, we have decided that they can kill people to develop self-driving by selling beta-level software.

…or Facebook algorithms that cause teens to kill themselves. We blame parents, kids, and schools, but never blame social media.
 
Idiots arguing about stupid

You can't fix stupid. Just outlaw stupidity and impose the death penalty. The planet is overpopulated anyway, and COVID-19 fizzled.
Another avenue is to put AI in charge of firing global nuclear weapons. Hint: that won't last long.
 