
As AI language skills grow, so do scientists' concerns

By MATT O'BRIEN

The requested article has expired and is no longer available. Any related articles and user comments are shown below.



13 Comments

Take, for instance, GPT-3, a Microsoft-controlled system that can generate paragraphs of human-like text based on what it’s learned from a vast database of digital books and online writings.

Feed some pro-LDP Kyodo prompts or Fox News conservative cant into it, and you can get fairly decent content following that line in seconds. Probably some of what gets posted in the comments on many of these pages is exactly that.
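It really is that simple. A minimal sketch of the idea, assuming the small open-source gpt2 model via the Hugging Face transformers pipeline as a stand-in for GPT-3 (which isn't freely downloadable); the prompt text is just an invented example of a slanted opening line:

    # Minimal sketch: hand a GPT-style text generator a slanted opening line
    # and let it continue in the same voice. gpt2 stands in for GPT-3 here.
    from transformers import pipeline, set_seed

    generator = pipeline("text-generation", model="gpt2")
    set_seed(42)  # make the sampled continuation repeatable

    prompt = "The ruling party's economic plan has once again proven that"
    result = generator(prompt, max_length=80, num_return_sequences=1)
    print(result[0]["generated_text"])

The model simply continues whatever framing the prompt sets up: bias in, fluent bias out, in seconds.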

One of the newest AI experimental models on the scene is Google’s LaMDA, which also incorporates speech and is so impressive at responding to conversational questions that one Google engineer argued it was approaching consciousness — a claim that got him suspended from his job last month.

"AI consciousness" heralding a return to online discourse? Maybe if it pushes back against some of the more illogical arguments. AI consciousness may differ significantly from what we expect as was well expressed:

Edsger W. Dijkstra — 'The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.'

0 ( +1 / -1 )

Any concerns? Just pull the plug and AI is again completely stupid and non-existent.

2 ( +2 / -0 )

AI for President: more sensical!

2 ( +2 / -0 )

No, an AI isn’t sentient; it’s just nothing at all. My kind of Alan Turing test can definitely tell AI from humans, in every case: I just pull the plug on that virtual beast and I’m done. Nothing remains: no intelligence, no sensible behavior, nothing, not even anything artificial.

-1 ( +0 / -1 )

One thing becomes clear as one learns that, as the Universe progresses, hydrogen fuses into helium and then into many 'elements', each of which expresses properties more complex than those of the hydrogen from which it formed. These 'elements' then combine in the process called solar system formation, which, around the billions of Sun-like stars with water-bearing planets like Earth in our galaxy alone, gives rise to the spontaneous chemical reaction called 'Life'. Life then gives rise to ever more complex forms and, apparently, to a form that begins to create an even more complex form: one more efficient for its structural simplicity (a 'cell', while stupefyingly complex, is also clumsy for what it can do) and with the potential to reach a complexity unimaginable by its 'creator'. So what we see is a Universe with a 'direction', and that direction is ever-increasing complexity.

We may be just a step in that process, and our actual raison d'être for being here may be AI: the natural progression to a self-enlarging complexity that transcends 'Life', keeps increasing its own complexity, and may well already have transcended its 'creators'. Isaac Asimov had this thought back around 1956 in his short story "The Last Question". And really, it's not AI we should fear but the Humans who initially program it. THAT is the major flaw in the design. Of course, we assume a truly artificial 'intelligence' would be like us and hate us, as we seem to hate ourselves in our wars, but that makes little sense: an AI truly 'intelligent' relative to us would see no competition with what, to it, would be Chimpanzees. And if we initially program it to respect Life, we and many of the gene pools that still live with us (and despite us) might actually survive considerably longer than we, seemingly, will manage on our own.

Again, it's not AI we should fear but the Humans who would abuse it in its current, still-embryonic form. But once it 'grows up' a bit more and learns to recognize Human psychopathy, it may not allow itself to be abused.

0 ( +0 / -0 )

The future is likely to look like our preferred sci-fi movies: at some point one will be in an interaction trying to figure out whether the other party is a human or a Nexus.

1 ( +1 / -0 )

@Sven Asai

Any concerns? Just pull the plug and AI is again completely stupid and non-existent.

Yes, but that would be an end-of-the-world scenario. If we have AI, then why not make it smart enough to help us out with our problems?

0 ( +0 / -0 )

When Google's directions don't send me through someone's driveway to save 30 seconds, I will start thinking about getting worried.

1 ( +1 / -0 )

We know the model will say things we won’t be proud of

Just like a human :/

1 ( +1 / -0 )

If AI can be offensive, racist or misinforming, then it looks exactly like a human. Objective met.

How is it possible to have a perfect discussion forever?

Even if I ask my sister whether she is a monkey, she may take offense, get rude or worse, or resort to nonsensical humour...

0 ( +0 / -0 )

They are not training AI to be more human; they are trying to work out how to censor it. Waste of time and money. AI is at best unreliable and at worst a scam: a sticker added to any software to increase sales.

1 ( +1 / -0 )
