Artificial Intelligence - The Sorcerer's Apprentice


Do you remember The Sorcerer's Apprentice from the Disney production Fantasia? https://www.youtube.com/watch?v=B4M-54cEduo The basis of the story is that Mickey Mouse takes the power of the Sorcerer's hat and uses it to get a broom to do the job of filling a water tank for him. Somehow the number of brooms increases, and soon the tank overflows, flooding the place. Mickey is unable to stop the brooms, and the situation is only resolved when the Sorcerer returns and uses his powers to stop them.

 

A very similar thing is happening with Artificial Intelligence (AI). The very many applications of AI are, for the most part, useful. However, there is a price to pay. AI demands incredibly large amounts of computing power, which is provided through data centres housing banks and banks of server units. These units require electricity to operate, and therein lies the problem. The generation of electricity requires an energy source of some sort, and at the moment, worldwide, fossil fuels are involved in providing it. The impact of AI on electricity demand is well documented: demand is forecast to grow as much as 20% by 2030, with AI data centres alone expected to add about 323 terawatt hours of electricity demand in the U.S.
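To put that 323 TWh figure in perspective, here is a rough back-of-envelope calculation. Note the 4,000 TWh/year total for U.S. electricity consumption is my own assumed ballpark, not a figure from this thread:

```python
# Rough sanity check: forecast AI data-centre demand as a share of
# total U.S. electricity consumption.
# ASSUMPTION: ~4,000 TWh/year total U.S. consumption (a commonly cited
# ballpark, not taken from the post above).
us_annual_twh = 4000   # assumed total U.S. electricity use, TWh/year
ai_added_twh = 323     # forecast AI data-centre addition (from the post)

share = ai_added_twh / us_annual_twh
print(f"AI data centres would add about {share:.1%} to U.S. demand")
```

On those assumptions the addition works out to roughly 8% of current national consumption, which is why the forecast growth figures are so striking.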

 

In its 2024 environmental report, Google announced that its emissions surged nearly 50% compared to 2019, marking a notable setback in its goal to achieve net-zero emissions by 2030. The company attributed the emissions spike to an increase in data centre energy consumption and supply chain emissions driven by rapid advancements in and demand for artificial intelligence. The report noted that the company’s total data centre electricity consumption grew 17% in 2023. 

 

Google is not the only major tech company to face increased emissions due to AI demand. Microsoft reported in May that its total carbon emissions rose nearly 30% since 2020, primarily due to the construction of data centres.

 

Combine this increase in demand with the demands of EVs and the growth of electrically powered devices for personal and commercial use, and it is pretty obvious that electricity generation will have to increase at such a rapid rate that hopes of going all-renewable may be dashed to some degree. We will be burning stuff well into the future in attempting to meet end-user demand.

 

 

 


A big data centre is in the making in Seven Hills, according to "rumours".

Huge earthworks around here, heading to the "Oaks Road" substation.

As well as into our residential streets: in the "Settlers Creek reserve" they have six or seven poly pipes in trenches, with concrete inspection boxes every so often.

Possibly for 'optical' cabling.

YET we have a dismal NBN connection, with never a month without a breakdown in the service.

spacesailor

PS: they call the NBN fast. A little village (Michaelson) in Wales, UK, has a 1 Gb download speed.

Edited by spacesailor

MS is standardising low-energy settings in its software. I recently upgraded to MS Windows 11 and was surprised to find how quickly W11 shuts down my screen and puts the computer into sleep mode, within minutes of my leaving the desktop running while I go do some other task. I actually find it quite annoying, because it takes an excessive amount of time to get the computer screen back, and then I find I have to log in again.


1 hour ago, facthunter said:

KILL the NBN

You've got to separate the NBN from the discussion of Artificial Intelligence. The NBN is merely a physical system used to enable the transmission of data. Artificial Intelligence is a software system that manipulates data; that's why it needs extensive server farms, which are electricity gluttons.


Since we were forced onto the NBN system, we have never had two consecutive months of service. And I do miss having a proper phone with its "fax" service.

Windows has (or should have) it in its system, but it is too hard to use.

spacesailor

 


The NBN itself was an excellent investment and has proved very reliable where it was properly installed (Fibre to the Premises - FTTP, or Fibre to the Curb - FTTC). But a lot of the NBN installation effort was half-arsed, with the "Fibre to the Node" (FTTN) decision by the Liberals. This ensured that the benefits of fibre were lost, as a large chunk of the signal transmission was retained on an aging copper network, between homes and the Nodes.

 

I'm on FTTC and the optic fibre cable runs along the footpath past my house, and I only have copper between the house and the footpath. I'd really like to go FTTP, but it's not a priority for me at present, as my FTTC service is excellent, and has provided 99.9999% reliability for the 4 or 5 years it's been in.

 

Edited by onetrack

I have to correct my post: it was DESTROY, not KILL. But it cost the Australian taxpayer billions and set us back years, with immeasurable losses in performance and reliability. Deliberately falsifying any story is FRAUD. You commit a crime if you falsify an advert, so why stop at that? Surely trust and honesty are worth striving for. Shine lights in dark corners, follow money trails and protect whistleblowers. Catch the CROOKS. Nev


I am still on the old copper wire. No upgrade here, just a slow and intermittent service.

My cell-phone is lots faster, with no 'reloading' waiting time, unlike the TV, which often pauses to catch up on that download.

1 Gb is the UK's fastest download speed, at 30 pounds p/m.

That is what they charge me, without any deductions for their blank days.

No phone to tell them the internet is down again, and not even the TV to tide us over. I have to have an independent person to tell my provider: IT'S OFF AGAIN.

spacesailor

PS: they say it's progress!

 

 

Edited by spacesailor

facthunter

I wish Dutton could finish his job, get rid of this "NBN" and let some competition in.

I very much doubt it would be worse than what we have.

All down together: phone, internet, and streamed TV.

spacesailor


Back to AI.

At the moment I'm reading a non-fiction book by Toby Walsh FAA FACM FAAAS FAAAI FEurAI, Chief Scientist at UNSW.ai, the AI Institute of UNSW Sydney. The book deals with the morality of AI. Morality is a human trait that cannot be defined in simple terms, as each person has their own version of morality. AI is a hold-all name for the software that enables a task to be carried out by a machine. The tasks this software enables machines to do range from controlling the sequence of steps needed for a washing machine to launder some clothes, through checking all the necessary material needed to approve a building development, to all sorts of warlike actions.

 

The point Walsh makes is that presently it is impossible for a machine to develop its own morality. Therefore the morality of a machine's actions is determined by the person who wrote the programming code the machine uses to complete its task. We know that for many beneficial inventions of Mankind, someone will take the same invention and apply it for a harmful purpose. Despite all the efforts of those involved in the development of AI, and of governments attempting to frame rules and regulations to prevent malevolence, our experience in so many other areas of human activity tells us that rogues will misapply AI. Do you think Trump, Putin, or Kim Jong Un would decline to misuse AI if they thought they could get away with it?

 

Along with malevolent applications, there is also the innocent problem of unintended consequences. A current example is the use of AI for the autonomous operation of motor vehicles. This use really is still in its beta-testing stage to some degree, and it is the occasional failures that lead to unintended consequences. I'm sure other examples in other applications can be found. AI professionals and governments must also develop measures to minimise these unintended consequences.

 

By the way, the book, "Machines Behaving Badly: The Morality of AI", is quite boring, but I'm going to finish reading it out of sheer cussedness.


Plenty of humans don't behave ethically or morally. "Only do to others what you would like done to you" is a good start. If someone restrains AI, and that's good, what stops others from NOT doing it and allowing unrestrained mayhem? There's a wise saying: don't start things you can't STOP. Nev


  • 1 month later...

So, what do you propose, Asymeo? Kill all the programmers working on building AI products? Destroy all the company headquarters that are working on AI? AI is the Pandora's Box of our time: it has been invented, released, and is in our midst. There is little we can do to stop its advance. The scary part to me is when AI becomes involved in war action or is put to evil use by power-mad dictators.

This is where we need to focus our attempts to control it, or perhaps destroy it, if it is being used to assist dictators to stay in power, control the narrative as regards information being fed to people, and manufacture worse and worse machines for war.

I note Ukraine is using robotic dogs on its front lines; this is just the start of the use of AI in war. Worse things will come from AI when the Russians and North Koreans work on developing it for their war machines.


I don't fear A-bombs because I believe no dictator, however crazed, is crazy enough to unleash nuclear weapons: they know it would spell the end of their country and their regime when the nuclear response comes from America. But I do fear the lesser war machines that can inflict widespread misery and destruction continuously, without the total destruction that accompanies a nuclear weapon.


Well, AI is here. Numerous versions are active. Now that it is here, the AI tiger can be said to be out of its cage. It cannot be 'destroyed'. There is little possibility of controlling it, for the age-old reason: 'Who watches over the watchdog?'

 

Now that this potentially dangerous product is with us, we must do what has allowed humans to prosper so well up to now.

 

We must adapt to the changing world around us: look for risks, and personally avoid them. We do not have an IT god to trust, to look down from on high and protect us from nasty IT.

 

There WILL be people using technology to further their own nefarious agendas. And there will be good people who use it for good outcomes.

 

It is up to the individual to refine their BS detectors, and discriminate between good and bad. These new systems make that harder than ever.


Spacey - Ernest Rutherford was born in N.Z. but spent all his adult life, from age 24 onwards, in the U.K., working as a research scientist. He discovered many things that led to a better understanding of nuclear physics and nuclear reactions, but he apparently never saw the possibility of a nuclear weapon; in fact, he saw no real use for nuclear reactions, apart from making interesting scientific observations and findings. Rutherford died in 1937, before any nuclear weapon was devised.

 

https://www.britannica.com/biography/Ernest-Rutherford

 

The bloke who actually found, and patented, the neutron-initiated nuclear reaction was Hungarian-American physicist Leo Szilard. He was also the one who realised the implications of the find, becoming horrified at the potential construction of nuclear weapons. He sold his patent to the U.S. Govt to ensure it stayed in Govt hands and under Govt control.

 

https://physicsworld.com/a/leo-szilard-the-physicist-who-envisaged-nuclear-weapons-but-later-opposed-their-use/

 

But it was J. Robert Oppenheimer who finally devised and produced the A-bomb, with the total support of the U.S. Govt and at a reported cost of US$2.5B in 1945 dollars. The "Manhattan Project" was a project that seriously taxed the U.S. Govt in financial and resource terms. Oppenheimer became a pacifist after seeing the results of nuclear weapons being produced by many countries, some of whom should never have acquired the technology.

 

https://en.wikipedia.org/wiki/J._Robert_Oppenheimer

 


10 hours ago, Asymeo said:

Artificial Intelligence does not have emotions or no morality

And it never will.

 

10 hours ago, Asymeo said:

At the moment their programmers control AI actions.

Sort of. Of course there is machine learning, where the software adapts its output by learning from the responses it has received. The MS chatbot Tay had to be shut down because it adapted its output to the inputs it received: https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot-after-it-turned-into-racist-nazi/.
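As a toy illustration of why that happens (my own sketch, nothing like Microsoft's actual system), consider a "bot" that learns purely by echoing the most frequent phrase in its recent inputs. With no filtering, a coordinated group of users can steer its output completely:

```python
from collections import Counter

class MimicBot:
    """Toy chatbot that 'learns' by echoing its most common recent input.

    A deliberately crude sketch of input-driven drift: the bot's output
    converges on whatever its users repeat at it most, good or bad.
    """
    def __init__(self):
        self.seen = Counter()   # frequency count of every message received

    def learn(self, message: str) -> None:
        self.seen[message] += 1

    def reply(self) -> str:
        if not self.seen:
            return "Hello!"
        # The most frequent input wins, however objectionable it is.
        return self.seen.most_common(1)[0][0]

bot = MimicBot()
for msg in ["hello", "spam", "spam", "spam"]:
    bot.learn(msg)
print(bot.reply())  # prints "spam": the repeated input dominates
```

Real chatbots are vastly more sophisticated, but the underlying failure mode Tay demonstrated is the same: output shaped by unvetted input.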

 

However, yes, biases have been found in AI systems. One I recall was that AI-based credit checking in the US would deny Black couples mortgages but approve white couples with almost exactly the same credit metrics. In addition, there was some suspect profiling software in the US (I'm not sure it was technically AI) that seemed biased against, you guessed it, Black suspects.

 

17 minutes ago, nomadpete said:

We must adapt to the changing world around us..... Look for risks, and personally avoid them. We do not have an IT god to trust, to look down from high and protect us from nasty IT.

Absolutely true. We are already there if you think about social media, with the law only just starting to catch up well after the damage has been done; AI will exacerbate it. I have adopted an educative approach with the kids, sometimes quite firm in its application, but needing to emphasise the dangers of how perceptions can be formed when your environment is bombarding you with the same messages and the algorithm then amplifies them, with little presentation of countering views or facts.

 

We must remember AI is just that: artificial. As it increases, and as processing and data-storage power increase (think quantum computing), it will be able to run much better the probability calculations that give it its answers, and it will seem closer to having morality. But it won't.


I think I said in an earlier post that anything Mankind devises can be turned to evil as readily as it can be beneficial.

 

Morality might not be the correct concept to apply to AI; it seems the correct concept is ethics. Both loosely have to do with distinguishing "good and bad" or "right and wrong." Many people think of morality as something personal and normative, whereas ethics refers to the standards of "good and bad" distinguished by a certain community or social setting. Normative ethics is the study of ethical behaviour, the branch of philosophical ethics that investigates questions regarding how one ought to act.

 

A tool used in studying the ethics of a decision produced by AI is the "trolley problem". The trolley problem is a series of thought experiments in ethics, psychology and artificial intelligence involving stylized ethical dilemmas of whether to sacrifice one person to save a larger number. The series usually begins with a scenario in which a runaway tram or trolley is on course to collide with and kill a number of people (traditionally five) down the track, but a driver or bystander can intervene and divert the vehicle to kill just one person on a different track.
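A purely utilitarian machine would reduce that dilemma to a body count, which is precisely what makes the trolley problem a useful probe of machine ethics. A minimal sketch of that reduction (my own illustration, not from Walsh's book):

```python
def utilitarian_switch(on_current_track: int, on_side_track: int) -> bool:
    """Return True if a purely utilitarian agent would divert the trolley.

    The dilemma collapses to arithmetic: divert whenever doing so kills
    fewer people. This flattening of an ethical choice into a number is
    exactly what the trolley problem is designed to question.
    """
    return on_side_track < on_current_track

print(utilitarian_switch(5, 1))  # classic case: divert (True)
print(utilitarian_switch(1, 1))  # equal harm: no utilitarian reason to act (False)
```

The point is not that the code is wrong, but that nothing in it captures the difference between letting five people die and actively killing one, which is where the human ethical judgement lives.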

 

Any decision or recommendation produced by AI whose application needs ethical consideration requires human input, because humans have an ethical nature, be it good or bad as determined by the society the human operates within. Where a decision or recommendation produced by AI has an ethical impact, the AI software cannot make that call.

 

I do feel that AI would be involved in a very low percentage of decision- or recommendation-making where ethics is involved. For the most part, AI is simply a tool used to work through an enormous amount of data more quickly than humans could. Software like ChatGPT can write an essay for you, but the essay is only a compilation of previously published material; you must have the spark of originality to end up with an A+ essay.


To digress from AI to the trolley problem: here is a video where they actually try to run the experiment, and it seems it is not as straightforward as 1 life vs 5 lives. By the way, I would handle the trolley problem by waiting until the front wheels had passed and then switching before the back wheels reached the points, thus derailing the train or trolley.

 

 

