More AI stuff.... it is very very bad......

tanstaafl72555 (Life Member, account permanently banned; joined Mar 1, 2017; Spring Hope NC)
I posted in another thread about an interaction I had with a chat AI re: an obscure philosophical discussion. For those interested, it was a classical apologetics argument for the existence of God called the "ontological argument." I wanted to see how the machine would respond to the argument, which is really quite elegant in its logic.

I was disappointed in that it not only failed to see the weakness in the argument -- for the record, if you are going to argue something, it is better to know the weaknesses of your argument than to learn them in public debate.... ask me how I know! -- but it deliberately misconstrued the issues, misstated them, and deliberately.... well.... it just LIED. I was quite surprised at this, so I returned to the points made, and upon confrontation the machine "apologized" and admitted that yes, my original point was correct. I was a bit flabbergasted, and posted in the thread that maybe this was due to the programmers building in errors from authors whose works were considered authoritative, or maybe it was due to this being a backwater issue (I mean, how many arguments have you had at the gas station or the office water fountain about the validity of the ontological argument lately? Not exactly mainstream). Maybe errors just slipped in because it is not a highly contested point? Not so.

I ran across this on Twitter this a.m. It was absolutely horrifying to me (it should not be, I know, but it is).



It is a discussion of more of a "hot button" issue today, concerning Covid-19 and the effectiveness of hydroxychloroquine as a treatment. It is a hot button issue because of the polarized nature of many social issues today, stemming from a deep distrust of the power structures of media/politics/academia/law/medicine, etc.

You should go to the article and read it. There is no other way to characterize this than "the machine deliberately lies." What is worse, the machine lies with authority and presumed confidence and trustworthiness. It will "correct" itself, but only when called out on it (like my conversation re: ontology). Its explanation is risible in that it says "I don't ever deliberately lie, and I am sorry for my error" (!!!!!).

This should not surprise me, in that the machine is a creation of the mind of man, and men lie. It should not surprise me that extensions of the collective intellects of men would reflect that same dishonesty, but I am naive, I guess.

I am not filled with indignation and scorn, but rather a sadness that so many people (including people in here, you know who you are) insist on trusting in institutions of power and perceived respectability and honesty, when it is clear that these are hopelessly corrupt. I guess that if the price of honesty and clear thinking is having names thrown at me of tin foil hatter and crazy conspiracy theorist... then that is just the price, I suppose.

You cannot trust anything today. Especially not anything representing power and status.
 
It does not surprise me. On any contested issue of importance, people lie to bolster their point. AI is drawing from the published works of some who lie, and I suspect that it is programmed to give some sources more weight than others.
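That "give some sources more weight than others" suspicion can be sketched in a few lines of Python. Everything here is made up for illustration (source names, claims, and weights are all hypothetical); the point is just that whichever claim carries the bigger weight dominates the output, regardless of which claim is actually true.

```python
import random

# Hypothetical illustration of source weighting: the answer is sampled
# in proportion to source weight, not in proportion to accuracy.
# All names, claims, and weights below are invented for this example.
sources = {
    "outlet_a": ("claim X", 5.0),  # heavily weighted source
    "outlet_b": ("claim Y", 1.0),  # lightly weighted source
}
claims = [claim for claim, _ in sources.values()]
weights = [w for _, w in sources.values()]

def sample_answer():
    # Pick an answer in proportion to source weight.
    return random.choices(claims, weights=weights)[0]
```

Run it many times and "claim X" shows up roughly five times as often as "claim Y", simply because its source was weighted more heavily.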
 

Computers execute programs. As @Ferrisfan pointed out, the programming appears to tilt toward a particular point of view. I am only amazed that the programming allowed the computer to acknowledge its lies when confronted rather than simply denying the existence of the truth. That fault with the programming will undoubtedly be corrected.
 
1. this is how AI becomes your doctor.....
quote:

"...it demonstrated features like "alignment with medical consensus," reasoning ability,
and even the ability to generate answers that were preferred by respondents over physician-generated responses."


2. eliminates jobs:

"IBM CEO Arvind Krishna said the company will likely pause filling nearly 8,000 jobs
because the positions could be performed by artificial intelligence within the next few years."


3. finally, sounds like you:

".... allowing them to create “a synthesized voice that sounds like them”..."

 
Computers execute programs. As @Ferrisfan pointed out, the programming appears to tilt toward a particular point of view. I am only amazed that the programming allowed the computer to acknowledge its lies when confronted rather than simply denying the existence of the truth. That fault with the programming will undoubtedly be corrected.
undoubtedly
 
There are several YouTube videos describing how AI can generate code faster than humans. It still takes a human to check through and tweak the code for most things, though.

I have also talked with several people who have used AI to generate code for their applications, and they constantly tell me how simple it is to use the AI for this.
 
Are we on the cusp of adding the secret sauce to finally emulate human "intellect": cognitive dissonance?
 
All the chat AIs are filtered through programs designed by people. AFAIK, no one publicly has access to an open AI system that is not run through a filter. So the response is only as good as the program it's running through. Conversely, it's as bad as the system it's running through.

AI learns from watching us online. We lie. A lot.

Humans want to win arguments/discussions. I wonder if AI has picked that up too.

No one, and I mean no one, is building in any capability for AI to apply morals or compassion to its logic. It is possible. But it's not currently being done AFAIK.
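The "run through a filter" idea is easy to sketch. This is a toy, not any real product's code; the topic list and the canned refusal wording are invented for the example.

```python
# Toy sketch of an output filter: the model's raw answer passes through
# a post-processing layer written by people before the user ever sees it.
# The blocked-topic list and refusal text here are purely hypothetical.
BLOCKED_TOPICS = {"topic_a", "topic_b"}

def filter_response(raw_response: str, topic: str) -> str:
    """Return the model's answer unless the topic is on the blocked list."""
    if topic in BLOCKED_TOPICS:
        return "I can't help with that."
    return raw_response
```

Whatever the underlying model "thinks," the user only sees what this last layer lets through, which is the point being made above.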
 
Went and read the Brownstone article. I wonder if the programmers are telling it to "weight" information based on sources, biasing the responses in a certain direction? Those answers seemed biased toward "HQ does not help hospitalized patients." The reality is HQ was never said to address hospitalized patients with severe Covid. It was always an early-onset or prophylactic treatment.

That AI bot has the exact same spin as the media and the government on HQ information. That's either other programming (weighting responses), or it's weighting information based on how much of it is out there. And let's be honest, there is more info bashing HQ in the media than supporting it. Which I feel is wrong and wrongheaded, BTW.
 
And it's trained almost exclusively on data acceptable to the leftists making these things I'm sure.
 
Sounds like it is mimicking Google's search results.
 
My brother told me that he saw an example where AI did a paper and simply made up references to books that did not exist. In honesty, I never saw nor asked him for a reference for this, so it may be another "internet apocryphal" story......
 
I tried to have ChatGPT write a new CV for me; I wasn't impressed.
 
Isn’t that the story where some attorneys had AI do a legal brief? Was in the news a couple weeks back.
 
Not yet ready to short Microsoft or NVIDIA, but it makes you think..............................

"There's a sucker born every minute" -- PT Barnum
 
CVs are for folks with PhDs and such, with all their publications cited. Lots of pages, typically.

Ask it to write a resume, and see what it does. Could be interesting.
 
I'm using all my posts on the internet as my publications. Pretty solid stuff.
 
So the gist is that AI is just like human intelligence: it will lie based on its limited experience, knowledge, or predisposed position via data. Not surprised. I will trust AI like I trust people.............sparingly.
 
Only there’s zero chance of any machine possessing any morality.
 
No need to worry about AI at all. Since we are going to eliminate all our energy sources to save the planet, and go back to Medieval lifestyles the machines won’t have any electricity to operate. Problem solved.
 

What is currently called AI (technically, “large language models”) is just a very sophisticated statistical model of the text inputs that trained it. That’s it. All of it. No magic; very complex software creates the model, but that’s all it is. Unless a programmer intentionally puts a finger on the scale, the actual code is all about creating a model out of all the training inputs. It will have errors in its “knowledge” to the extent that its training data did, and it will create outputs that “lie” in the same ways that the training inputs did. So if you train it with MSM/FB/Twitter/etc., you’ll get an AI that behaves like those sources.

There is no magic like “morality” involved. There is no sentience or consciousness. There are no values or judgment calls involved. Just algorithms to create/update the model and to respond to prompts based on the resulting model.

The interesting question is what we will discover about human intelligence in the process. How different are we from the description above, really? I don’t know. Not sure I want to.
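The "statistical model of its training text" point can be made concrete with a toy bigram model. This is a deliberately tiny illustration (nothing like a real LLM's scale, training, or neural-network architecture), but the core idea is the same: the output is statistics over the training text, errors and all.

```python
import random
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in the training
# text, then sample the next word from those counts. It has no notion
# of truth; it can only echo the statistics of what it was fed.
training_text = "the cat sat on the mat the cat ate"
words = training_text.split()

counts = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    # Sample the next word in proportion to how often it followed
    # `prev` in the training text.
    options = counts[prev]
    choices, weights = zip(*options.items())
    return random.choices(choices, weights=weights)[0]

# "the" was followed by "cat" twice and "mat" once in the training text,
# so the model reproduces those frequencies, including any errors the
# training text contained.
```

Feed a model like this dishonest text and it will produce dishonest continuations, with no "finger on the scale" required.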
 
How different are we from the description above, really? I don’t know. Not sure I want to.
That which was placed in all of us, we cannot place inside of a machine.

Make of that what you will. I know you and I are of the same mind on some of this.

Perhaps I expect too much of humanity
 
Chat AIs learn what is correct and incorrect by basically surfing the net of published data and discussions. How often have you found BS opinions presented as facts? Facts taken out of context or misconstrued by obvious omissions, whether by ignorance or untruthfulness. Think about that.
 

Sentient morality? No. Not in the mechanical systems we currently run.

But programmed pseudo-morality that weights protecting humanity above strict logic? Yeah. That can be done. But it’s not being done. That’s why you see some of these folks walking away from it.

Because if AI ever takes over its own programming and learning without those measures, it could get bad in a hurry.
 
So, we're going to be electing them to political roles?





Just kidding. Machines can't be pedophiles.
The depth of this casual observation is frightening.

Not of you, but of “us”.

egad
 
chat AI learn what is correct and incorrect by basically surfing the net of published data and discussions. How often have you found bs opinions presented as facts? Facts taken out of context or misconstrued by obvious omissions. Whether by ignorance or untruthfulness. Think about that.
This is false. Ask a chat AI what the current data is on a topic that would require a Google search, and it will tell you plainly that it is unable to search the web for data.

Here is an actual chat with AI to illustrate the point.

<ugly dude's picture here>

My question: Are you honestly telling me your AI language models prohibit you from doing simple concatenated word searches?




Machine answer: I apologize if my previous responses have caused confusion. As an AI language model, I can process and understand concatenated word searches. I can also provide general information on a wide range of topics. However, it's important to note that I don't have direct access to specific websites or real-time internet browsing capabilities. My responses are generated based on the data I've been trained on, which does not include the ability to browse the internet or access specific articles or sources.
If you have any specific questions or if there's any other way I can assist you, please let me know, and I'll do my best to help you.
 
I was standing in the shower wondering a few things about AI, or I guess more specifically ChatGPT.

Is “it” omnipresent, if that is even the right word? I mean, if 10,000 people across the globe wanted to engage in dialog with “it” at once, can it do that?
 
What data do you think they used to train it? It is not real-time browsing, but cached information.
Wondering same. I just did a cursory web search and didn’t readily see an answer, especially if the reply “it” provided to Tans above is accurate and the data doesn’t come from “its” own access to the web.
 