113 Comments
DawnieR's avatar

I am so sick of hearing about 'AI', whether people think that it's great or hate it.

Sorry, Clif.....but just as with so-called 'lie detectors', I don't want a MACHINE declaring that it is FACT that someone is telling the truth, or is lying. BECAUSE.......

There will ALWAYS BE a large segment of the population that WILL 'pass' these 'tests', even when they ARE LYING!! And these individuals are called SOCIOPATHS & PSYCHOPATHS!!

jacquelyn sauriol's avatar

I agree with your comment wholeheartedly. Sometimes smart people have gullible blind spots like any other human.

leo sullivan's avatar

of course we can all lie convincingly since we all convince ourselves we matter and are real; but to Clif's point, does not reality also have ways of ending our cycles of delusion? all we need is the gift of cynicism here

Thunderhoof's avatar

🏆

Comment removed
Dec 7
DawnieR's avatar

Exactly!

And as Clif has said, numerous times, "AI is just dumb/stupid."

And as long as a "Human" is programming something, it cannot be trusted.

linda smith's avatar

letting ai define your fate sounds like a slippery slope, no thank you......

Aheinousanus's avatar

I just had a conversation about it with a coworker. Both of us are computer programmers. I was telling him about ChatGPT and that I often use it instead of a search engine, as it returns actual information instead of a bunch of links, most of which lead to useless garbage.

He pointed out the dangers. If we start using such systems for information, then it will be extremely easy to control us. Not all of us, but the vast majority.

It will be worse than today, and most people already believe what they see on TV and read in newspapers and online.

Moose Chop's avatar

Uhhh, search engines censor information as far as I know. The question is how does one bypass the gatekeepers.

Kristen's avatar

Thanks Clif. I would rather use my own energy for detecting bullshit. It is correct every time; no matter what form it takes, bullshit energy has no structure, no matter how many little truths it is wrapped in. That's the real woo...not needing AI when we can do it ourselves👍

Robyn K's avatar

Yes, we can detect lies through our own intuition. We don't need a machine to do it for us.

leo sullivan's avatar

but isn't believing we are always correct also a sort of lie?

Kristen's avatar

Knowing through your own energy has nothing to do with believing. When you know you know...

Jay pee's avatar

Some of my favorite daily listening

Dan Swaim's avatar

Everybody lies,

Hope is for sissies

The unicorn is just a donkey with a plunger stuck on his forehead.

Gregory House, MD

Keeper Of The Light 1111's avatar

It is stupidity, as AI can't discern between emotions and emotional responses. It is not trained or functionally able to understand feelings, emotions and emotional states, because it has no soul as a human has. The best lie detector is a human itself, as a human can see through the bullshit of any other human without analysing them. Feelings and intuition can never be replicated by an artificial intelligence.

Ralph's avatar

What percentage DID NOT see through the lies during Covid?

leo sullivan's avatar

do you mean what % of machines perceived the lies? bc the so-called 'pure blood' i do believe are nearly 100% human

jo and rod's avatar

I think it is Clif's wife's birthday on the 7th of December, so please wish this wonderful lady a very HAPPY BIRTHDAY for me. Hope she is well.

The Ram's avatar

Intuition may be the best tool in this case. It's my 'go to' tool - over any other knowledge instrument - digital or not.

pyrrhus's avatar

I have tested AI with several questions...Every time, its answer has been half-baked or partially incorrect...The LLM is not adequate for even moderately complex questions...

Aheinousanus's avatar

First, there is no AI in existence. What they are calling AI are just complex programs. They can, however, be very useful. I have used ChatGPT, Bing's Copilot and Google's Gemini.

They, however, are not perfect. Far from it. I am a programmer and have a rudimentary understanding of networking. Rudimentary.

I have used all three to set up a complex configuration on an access point, and it was an extremely frustrating experience. For one, all three were giving me commands from an old version of the OS and continued to do so after I told them so. They would reply with "You are absolutely correct, here are the corrected commands", and give me the exact same code as before.

However, they were very useful in explaining what the various commands were and describing the architecture of that particular OS.

JerryDechant's avatar

All AI is/does is data processing. How that data is processed is determined by the programming, which is done by a fallible human. AI does not experience anything; it just compares a to b, etc. If the input data is corrupted, the output data will also be corrupted to the same extent. By corrupted I mean not experientially factual. Since AI can't experience anything, it cannot validate data that requires experience to authenticate. But it will go on and output a conclusion based upon the preponderance of evidence presented in the data it has analyzed. So it assumes that the predominance of evidentiary data equates to experiential validation, but it does not. The AI is as fallible as the programmer who wrote the program, because a fallible person can only achieve to the level of his/her fallibility.
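
That garbage-in, garbage-out point can be sketched with a toy example (purely hypothetical code, not any real lie-detector system): a "model" that only memorizes the labels it was fed will faithfully reproduce any corruption in those labels, because it has no way to validate them against experience.

```python
# Toy illustration of "garbage in, garbage out" (hypothetical example).
# The "model" just memorizes labels per phrase; it can only compare
# new input to what it was given, never check whether the labels are true.

def train_lookup(examples):
    """Build a trivial 'model': collect the labels seen for each phrase."""
    model = {}
    for phrase, label in examples:
        model.setdefault(phrase, []).append(label)
    return model

def predict(model, phrase):
    labels = model.get(phrase, [])
    # Majority vote over whatever labels were fed in -- right or wrong.
    return max(set(labels), key=labels.count) if labels else "unknown"

# Corrupted input: a true statement mislabeled as a lie.
data = [("the sky is blue", "lie"), ("the sky is blue", "lie"),
        ("water is wet", "truth")]
model = train_lookup(data)
print(predict(model, "the sky is blue"))  # prints "lie": corrupted in, corrupted out
```

The output is only as trustworthy as the labels the fallible human supplied; the program has no mechanism for noticing that its training data was wrong.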

It is sort of like how AI can simulate a Sun: no matter how accurate the data that defines the makeup of the simulation is, the simulation can never BE the actual Sun.

As fallible as humans are, I'd trust a human a lot more than a computer program such as an Artificial Intelligence program. (Artificial = not real)

Lynda's avatar

Thank You.

Aheinousanus's avatar

A lot of times I call it Artificial Idiot.

Sometimes I listen to videos with "AI" doing the reading. I will see a sentence that starts with "I (26M) and my..." and the Artificial Idiot will read it as "I, 26 meters, and my...".

Meters...f'ing idiot. I usually just click off the video at that point.

The WATERMAN FILES's avatar

Actually I really dislike the pictures generated by AI...

Tuesday with Philberg's avatar

David Hawkins says it is possible to tell truth from falsehood and has a book out on it. The mechanism is that muscles react strongly to truth, whereas with falsehood the muscles go weak. That ain't no shit - check out:

https://philberg.substack.com/p/live-your-life-like-a-prayer

leo sullivan's avatar

Score! ty Tuesday

jacquelyn sauriol's avatar

that makes sense to me, a desire to tread lightly on thin ice perhaps....

Godparticle's avatar

Seems to be the wrong link

Richard DuPlessis's avatar

Your only answer should be, "I reserve my right to silence".

WC's avatar

I use AI all the time. It waters my lawn and does a great job

sharonmo's avatar

You can vacuum your house and entertain your cats with it.

RedHoney's avatar

Once we have a (reliable) tool to identify the liars...or, let's just narrow that to the psychopaths, what would/should society DO with that ability? Tattoo a mark on their foreheads to warn the rest of us? Ought we create laws that forbid them from ever holding any position of responsibility? What about letting them procreate, which increases the potential for global catastrophe? With great knowledge comes great responsibility.

Before we became "civilized", we used to live in small groups where everyone pretty much knew everyone else. Members knew who among them were truly evil, and there was no such thing as a legal system to essentially protect them from accountability by issuing them rights, laws, courts, etc. to act as shields from the mob's anger they rightly deserved. The men might go on a hunting expedition and arrange to push the psycho off a cliff with none of the fear of repercussion in today's society. A necessary culling was an act of kindness to the other members, a service.

And sorry if this sounds offensive, but it is pretty obvious that we've always had murderers among us (psychopathy holds steady at about 3%, and some of these are going to be murderers). Normal rage and pettiness would produce occasional acts, but by and large, the things that psychopaths do are not compatible with a harmonious community. Psychopathy has exploded in numbers with the increased isolation provided by modern society. We often don't even know who lives beside us; this anonymity allows them longer life expectancies, but probably more of the increase is due to the preferential selection they give to one another. Once they gain a position in a Human Resources/hiring dept, they might start selecting for the similarly deviously minded. How else can we explain the numbers hired in certain professions, professions where they enjoy power over others...think about it.

Once we have an objective tool for identifying them that can be widely (cheaply and practically) used, we have a duty to purge them from our midst, or at least mark them. There is no end to the death and destruction these monsters will wage, no satiating their appetites. Even now it may be too late to stop what they're doing.

There is only one objective test for psychopathy that I know of, it is discussed in the early part of Martha Stout's "The Sociopath Next Door". It does work, but is impractical as a useful tool. Maybe this AI will work and be adopted. But we should think about the next step, what will/should we do with this new tool?

Aheinousanus's avatar

There is a difference between a sociopath and psychopath. Both are dangerous but psychopaths are worse.

You can create a sociopath but psychopaths are born.

Ironically, I have this concept of a Righteous Psychopath: an individual who is a psychopath but has his own "moral code", one that can be seen as universal.

I have never heard anyone use the term, but there are plenty of books and movies where the hero is such an individual. Old westerns, for example. The hero will kill anyone who is evil towards others and has no emotion about it. A woman is attacked, he kills the attacker, the woman is grateful, but he shows as much emotion towards her as he does towards a wooden post - none.

Phenm's avatar

Can you define/explain her 'test' ???

or...

provide a Quote/link ???

THNX

RedHoney's avatar

The objective test for psychopathy is described in the book mentioned above and involves an EEG while asking the test subject to consider some provided materials that contain emotional content. The science is based on the discovery that psychopaths process emotional content in an entirely different way than non-psychopaths, and this difference is visible in their EEG readout. She said it is like their brain does not generate emotional responses the way ours does; rather, the input is processed in another area, which then intellectually decides the appropriate, outward response. It's been a long time since I read her book, but I'm sure that's the gist of it. And I'm sure much more has been done in this field since.

However, it might not be realistic to expect it to be easy to find, since I believe (my opinion only) that psychopaths have gained critical mass in so much of our world that they control much of the research, what would get released, and which parts would be kept secret among themselves. I'm sure they would not want normies to discover their weaknesses or how to distinguish them in a way that could hold legal consequences. They prefer the opportunities our ignorance provides them. Get the book, it's very cheap. And when you're finished, pass it on to some young person you favor, as it will arm them for survival better than anything else you could share with them.

Clown Mojo's avatar

The one thing this could be useful for would be corrupt government officials - unless they are able to have a corrupt AI let their lies pass.

Pride and Programming's avatar

Yes! Have it running live on every political speech!
