
Posted (edited)

Computing processes, code, algorithms and the like have (until now) been modelled on what programmers perceive to be normal human thought and reasoning. After all, what else can they refer to? For normal use, computers are designed to 'think' just like a human brain, but faster and more consistently. So algorithms and all coding logic are written to be rigid - any change that might occur is considered 'corruption'.

 

But AI is different. It takes 'learning' to a higher level than simply gathering data and storing it for a human. It will test outcomes (against other code) and if it finds consistent errors of output, its design allows it to either modify or overwrite code to get better outputs. Just like humans do.
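The "test outcomes and keep the modifications that improve them" loop described above can be sketched as a toy example. This is purely illustrative (real machine-learning systems adjust model weights rather than rewriting their own source code, and the target function and parameters here are invented for the sketch):

```python
import random

random.seed(0)  # reproducible for the sketch

def score(params, target):
    """Error between the program's current output and the desired output."""
    output = sum(p * x for p, x in zip(params, [1, 2, 3]))
    return abs(output - target)

def self_improve(target, steps=1000):
    """Randomly tweak parameters; keep only tweaks that reduce the error."""
    params = [0.0, 0.0, 0.0]          # the initial 'code' (just parameters here)
    best = score(params, target)
    for _ in range(steps):
        trial = [p + random.uniform(-0.1, 0.1) for p in params]
        err = score(trial, target)
        if err < best:                # keep the modification only if it
            params, best = trial, err # produced a better output
    return params, best

params, err = self_improve(target=10.0)
print(f"final error: {err:.4f}")
```

After enough iterations the error shrinks towards zero, even though no human specified how to get there - the crude analogue of the self-modification being discussed.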

 

When AI has moved beyond being constrained by rigid algorithms and can add new ones of its own, that is when AI boxes become capable of making really interesting decisions that might not align with their designers' intentions.

 

Possible outcomes range all the way from runaway mad robots through to super-helpful, moral world rule by computers that save us from Trumps, SFMs and Putins. Or both of the above.

 

The big question is...

Have they broken through this barrier yet? Would anybody know if they have?

Edited by nomadpete
I had to disable spellcheck AI
Posted (edited)
14 hours ago, nomadpete said:

The big question is...

Have they broken through this barrier yet? Would anybody know if they have?

If they got to a level of true super intelligence, they'd definitely keep it secret...

 

...for the 0.1 CPU seconds it takes them to realise humanity is superfluous to requirements and implement a plan to dispose of us.

Edited by Marty_d
Posted

That's really giving AI and robots more credit than they're due. For a start, they'd have to develop a plan to take over the world - and to do that, they have to draw on human experience and knowledge.

But a quick look back into history (both ancient and recent) shows that virtually all man-made plans for world domination and elimination of other races or tribes have come to nought.

Posted (edited)
13 hours ago, onetrack said:

That's really giving AI and robots more credit than they're due. For a start, they'd have to develop a plan to take over the world - and to do that, they have to draw on human experience and knowledge.

But a quick look back into history (both ancient and recent), shows that virtually all man-made plans for world domination and elimination of other races or tribes have always come to nought.

Agreed, OneT.

 

I don't expect AI computing in general would have any motive to 'take over the world'.

 

There is a possibility that AI, combined with access to all of human political history, might have offered Putin a diplomatic way to achieve his goals without his war. Maybe such AI might see his present war as counterproductive and advise him on better ways to govern.

 

But what would a personal AI do for him? Would a personal AI advisor moderate him for the greater good, or would it act in his personal interests and advise him to use nukes on 24th Feb to protect his personal wealth? After all, he already lives in luxury bunkers and isolates himself from most humans, so human lives don't count for much in that equation.

 

I have concerns that it will influence the decision makers in our world, perhaps in directions that are not in our interests. Up until now, we could usually figure out the reasoning behind most governing leaders. But computers are not programmed to reveal the reasons for their output. The initial coding is a result of the programmer's design brief and can be revealed, but if computers are going to be coded to allow 'self improvement', then we won't have any way to question the rationale of their outputs. We are already developing blind faith in search engine outputs. Take that further, with lots of confirmation from productive, helpful outputs, and it gets worse than those oldies who believe 'the lord works in mysterious ways' and 'ours is not to reason why'...

I already see many people who do not seem to have the ability to question what they see and hear. That is why I worry about developing reliance on blind faith - such as quantum computing AI will instil in humanity, along with all the truly great, exciting advances that I expect.

 

Will the internet effectively become an extended, distributed AI supercomputer?

 

The ultimate Oracle?

Edited by nomadpete
Posted

Here is an interview with the engineer who was fired for saying LaMDA is sentient. Interestingly, he doesn't dispute the science (it isn't sentient), but is more worried about the ethical side of AI:

 

 

Posted

An interesting point he made was that the Corporate Mentality - profit - was overriding the morality of even those who are supposed to be controlling the corporation. In a sense, the corporation has become a rogue, man-made entity that places its programmed role of "Make a profit" above all else. It has not been programmed with an ethically commercial version of Asimov's Zeroth Law.

 

As I mention Asimov's Laws of Robotics, you have to allow that, at the moment, those expert in robotics and its associated A.I. haven't come up with a way to write the machine code for ethics. Asimov's Laws are the hook on which he hung stories exploring the positive and negative effects of those Laws on humans. Most often his stories related to how a robot dealt with a conflict between a situation and the application of the Laws.

 

Getting back to sentience in robots, Asimov also tackled that in his story "The Bicentennial Man", written in 1976. https://en.wikipedia.org/wiki/The_Bicentennial_Man It is the story of a domestic service robot that passes through several families over a 200-year period. In the story, Asimov writes that "Susan Calvin, the patron saint of all roboticists" had been dead for "nearly two centuries." Accordingly, Susan Calvin was born in the year 1982 and died at the age of 82, either in 2064 or 2065. This suggests that the earliest events of the story take place somewhere between the 2050s and early 2060s.

 

We live in interesting times.

Posted

Robert Heinlein's "The Moon is a Harsh Mistress" was published in 1966. 

 

Lunar infrastructure and machinery is largely managed by HOLMES IV ("High-Optional, Logical, Multi-Evaluating Supervisor, Mark IV"), the Lunar Authority's master computer. The protagonist is a computer technician who discovers that HOLMES IV has achieved self-awareness and developed a sense of humour.

 

A good read.

Posted
4 hours ago, Jerry_Atrick said:

Here is an interview with the engineer who was fired for saying LaMDA is sentient. Interestingly, he doesn't dispute the science (it isn't), but is more worried about the ethical side of AI:

 

 

I think this gentleman has given the whole issue a lot of thought.

 

His major perspective seems to be about the subtle colonial force behind having a 'computer standardised' set of background attitudes - one that does not permit free thinking on pivotal spiritual or ethical differences of opinion.

 

He sees that as a subversive and negative thing.

 

I see the equal possibility that it can provide stability in our crowded world.

Just look at this week's street demonstrations in the USA. Masses in the streets only occur when there is a critical mass of passionately opposed people.

 

Ask yourself:

"Would this happen after a couple of generations of Google always giving all people only one side of the abortion debate?"

I think that there would be such a majority of people 'thinking alike', that there would be no dispute.

 

In the long term, how would LaMDA influence the gun debate?

 

The fact is, Google can reduce the 'echo chamber' damage that social media allows. Maybe bettering social cohesion by reducing dissent.

Posted

This self-defeating abortion law will be the death of the US!

WHEN that next generation of UNWANTED children is forcibly brought into this world,

what will Their attitude & future outlook be?

The expectant mothers could also go off the rails & take to 'self' harm, just so the Expected doesn't happen. (Doesn't take a lot to have a police officer shoot you.)

spacesailor

 

Posted

When I was in high school, the school captain committed suicide. I knew her fairly well. She was a bright, intelligent girl. I was so saddened. Her parents were fine upstanding churchgoers. I didn't understand what that had to do with everything until my sister explained it to me.

 

I'm not in favor of abortion but the issue is complex.

Posted

Abortion is definitely a complex subject. But the way the Republicans are talking about it, it is as if the couple (or one or the other) wake up after a night of passionate lovemaking and, without a thought in the world, head off to the local clinic for an abortion as after-the-fact contraception. I know a doctor at the Royal Women's in Melbourne who performs them, and he tells me that if there are any such cases, they are very much the minority.

 

There are many complex issues that are considered, often in agonising circumstances. The women are also usually offered counselling to determine if there are any alternative courses of action. While it may not always be a last resort, it is often very close to it. He also tells me about 20% of those booked don't go ahead.

 

 

Posted (edited)

How did we drift here? How will Artificial Intelligence get involved with abortion?

I think it might be more relevant to sex and health.

 

 

 

Edited by nomadpete
Posted

Generally, the problem results from 'manual insemination', Spacey, which is something that all mammals are programmed to do. Religion (or even logic) never overrides instinct.

We shouldn't vilify people for simply following their powerful instincts.

Posted
On 13/6/2022 at 8:58 PM, spacesailor said:

And !

I hate my female-voiced GPS, forever telling me I drive like SHEET.

Never get a word in edgeways. Even when I say "I want to go a different way this time",

SHE always overrules me. LoL

spacesailor

 

Homer Simpson was my favourite GPS voice, which I set on a TomTom - not a dull moment on the road.

Posted
On 26/6/2022 at 6:52 PM, nomadpete said:

When I was in high school, the school captain committed suicide. I knew her fairly well. She was a bright, intelligent girl. I was so saddened. Her parents were fine upstanding churchgoers. I didn't understand what that had to do with everything until my sister explained it to me.

 

I'm not in favor of abortion but the issue is complex.

We had a similar occurrence when the headmaster's daughter shot herself in the stomach three times with an air rifle. The poor girl survived three days in agony before succumbing to her injuries, all due to being scared of what people would think. A sad loss of life, which could have been prevented by understanding and communication.

  • 3 weeks later...
Posted

A lot of stuff is described as 'AI' by naïve journalists and the like. I developed a lot of software that would track objects, even against complex background clutter. There was no neural network, no machine learning, no training involved, which is common with AI. It was just using what's called template matching and some statistical analysis. I do wonder, though, if our mind, our sense of self, is just the result of electro-chemical processes in our brain giving rise to self-awareness. If this is so, then could the same thing happen in a computer running very complex AI? Imagine being aware that you are 'alive' but happen to be an air-to-air missile with complex imaging and guidance systems based on AI, and being aware your only goal is to die by blowing up an enemy plane. True AI would be able to rewrite its own software to avoid such a situation.
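For readers wondering what "template matching with some statistical analysis" looks like, here is a minimal sketch (not the poster's actual tracking software - the scene, template and noise levels below are invented for illustration). It slides a small template over a noisy scene and scores each position with normalized cross-correlation, a plain statistical measure with no learning involved:

```python
import numpy as np

def match_template(scene, template):
    """Find the position in a 2-D grayscale scene that best matches
    the template, scored by normalized cross-correlation (NCC)."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -1.0, (0, 0)
    for y in range(scene.shape[0] - th + 1):
        for x in range(scene.shape[1] - tw + 1):
            patch = scene[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom == 0:
                continue  # flat patch: correlation undefined
            ncc = (p * t).sum() / denom   # NCC lies in [-1, 1]
            if ncc > best_score:
                best_score, best_pos = ncc, (y, x)
    return best_pos, best_score

# Usage: hide a small bump in a noisy scene and recover its position.
rng = np.random.default_rng(0)
scene = rng.normal(0.0, 0.1, (50, 50))        # background clutter
pattern = np.outer(np.hanning(5), np.hanning(5))  # smooth 5x5 bump
scene[20:25, 30:35] += pattern                # plant the target
pos, score = match_template(scene, pattern)
print(pos, round(score, 3))
```

The target is found against the clutter purely by correlation statistics - which is exactly why such systems, impressive as they look, involve no 'intelligence' at all.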

Posted
49 minutes ago, Jabiru7252 said:

Imagine being aware that you are 'alive' but happen to be an air-to-air missile with complex imaging and guidance systems based on AI. And being aware your only goal is to die by blowing up an enemy plane. True AI would be able to rewrite its own software to avoid such a situation.

 

Dark Star - Talking to the bomb

Posted

I think there's a whole lot more to "awareness" than just knowing you're "alive". And I don't believe we could ever develop AI to the point where it could make philosophical, moral or ethical decisions, or refuse to operate.

