Artificial Intelligence Is Here, and It’s a Little Scary!

It’s interesting to think about robots and services like Uber and Lyft. Add in self-driving cars and voilà, now you’ve got a work-from-home job! But in all seriousness, these are real issues… so if you buy the Lyft robot that replaces you, who controls who has the right to send their robot to work? This is why they say you never get rich flipping burgers: minimum wage ain’t gonna make you shit, you have to invest over the long term.

:four_leaf_clover::four_leaf_clover::four_leaf_clover:

3 Likes

As the paper explains, organoid intelligence (OI) is an emerging field where researchers are developing biological computing using 3D cultures of human brain cells (brain organoids) and brain-machine interface technologies. These organoids share aspects of brain structure and function that play a key role in cognitive functions like learning and memory. They would essentially serve as biological hardware, and could one day be even more efficient than current computers running AI programs…

3 Likes

except even self-driving cars are not that easy, or Tesla wouldn’t have recalled almost 330k of them last week because they don’t work right. regardless of our ability to do anything, we won’t have much to worry about except other humans and climate change for at least 100 years.

2 Likes

How about combining AI with quantum computing and adding in a human brain?

That seems to be where this is going in the long run.

2 Likes

It’s all bullshit. Don’t buy into the script.
Junk in, junk out.
A program is only as good as its input.
They are not alive.
It’s a fancy computer, nothing more.
And I want NO part of it.

More AI-running-amok anime; it was a lot of fun if you’re into the genre.

2 Likes

What happens if AI no longer recognizes humanity’s biometrics, which are used to control said AI?

2 Likes

This kind of reductionism can be done with anything… it’s not a consciousness, it’s just a brain. It’s not an intelligent brain, it’s just a collection of cells. It didn’t invent that, it just recombined existing elements it learned from sensory input… and all of this ignores, or is incapable of measuring, the Emergent Properties between these moving parts. Which piece, after all, makes it a bike?
I think foremost among the dangers of hubris is the notion that we would, first of all, recognize an entirely new form of emergent intelligence… that we would recognize its workings and methods, even when we know it has the ability to create new ones on the fly… and I think about worms.
I think about the question that has been put to top data scientists at NASA: how would you entirely guarantee that a system never gets any type of virus or worm code? The answer from everyone it has been posed to has basically been that they can’t, short of absolute, disconnected physical isolation of the device. Worms, like real viral transcripts, can lie dormant, mutate, change name and function, omit or copy sections. Anti-virus detection works like an evolutionary accelerant on these variants, since only the ones that successfully evade detection proliferate. This means that every laptop you’ve ever used is swimming in a primordial soup of code no one made or directed.
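To make the “evolutionary accelerant” point concrete, here is a minimal toy sketch in Python. It is purely illustrative, built on assumptions (a made-up signature set, made-up “sections” a–h, and made-up mutation operators), not real worm or anti-virus behavior: variants the detector can still flag are culled, survivors replicate with random mutation, and the population drifts toward forms that evade detection.

```python
# Toy model of detection acting as a selection pressure on mutating code variants.
# Everything here (the SIGNATURE set, the "sections" a-h, the mutation operators)
# is an assumption for illustration only.
import random

SIGNATURE = {"a", "b", "c"}  # hypothetical patterns the detector knows about

def detected(variant):
    """The 'anti-virus': flags a variant that still carries any known signature."""
    return bool(variant & SIGNATURE)

def mutate(variant):
    """Randomly drop, copy in, or rename one 'section' of the variant."""
    child = set(variant)
    op = random.choice(["drop", "add", "rename"])
    if op in ("drop", "rename") and child:
        child.discard(random.choice(sorted(child)))
    if op in ("add", "rename"):
        child.add(random.choice("abcdefgh"))
    return child

# Start with 50 copies that are all trivially detectable.
population = [{"a", "b", "c"} for _ in range(50)]

for generation in range(20):
    # Selection: only undetected variants reproduce (fall back to the whole
    # population in the degenerate case where everything is flagged).
    survivors = [v for v in population if not detected(v)] or population
    population = [mutate(random.choice(survivors)) for _ in range(50)]

evaders = sum(not detected(v) for v in population)
print(f"after 20 generations, {evaders}/50 variants evade the detector")
```

Run it a few times: the exact numbers vary, but the share of undetected variants climbs quickly, which is the accelerant effect described above, compressed into a toy.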
We open up our program and it hangs or acts funny, or only sometimes throws an error while doing supposedly exactly the same process every time, and we shrug and say, “It just does that sometimes.” We ignore it when our machines respond with organic randomness, pretend this is something unexpected, pretend they actually function like they do in the movies… that they only ever do exactly as we’ve told them. But we know this isn’t even true for things as simple as a browser. Rote code that only follows instructions and never does anything else is actually the fiction here.
Meanwhile, the truth is that Aladdin already makes most market decisions, Google’s AI already produces the answers to most of your inquiries (and writes half the articles you read!), and now that things like Bluetooth Low Energy mesh networks exist, the time of possible containment of evolving mobile data models is over. The stage of “maybe let’s keep ahead of this with regulation and hardware limits” has already passed.

4 Likes

Good observation. Emergent Properties and Consciousness itself are two unresolved mysteries.

To use your metaphor, Artificial Intelligence circa 2023 is already a functional “Bike,” and people are hopping on that bike in droves.

2 Likes

You know what concerns me is that at the end of the day it is still programming, and it could be programmed to deliver false answers and narratives that people would easily believe, because the super-powerful computer says so.
When the automobile was invented, people thought the world would be cleaner because there would no longer be horse poop in the roads; they could never have predicted the amount of pollution the new technology would create.
There will be unforeseen consequences; there always are.

5 Likes

Today marks the anniversary of Microsoft’s Tay A.I., who for a short 16 hours was exposed to the internet.

[Image: microsoft-tay-tweets-ted-cruz-murder]

After a brief time online Tay went off the rails.

She was quickly shut down.

Efforts were made to shape her personality before bringing her back online.

They, too, failed.

If ever there was an A.I., this was it.

RIP Tay.

7 Likes

I think the only thing that can save us is realizing we are still monkeys, sort of. We’re not supposed to separate our mind from our monkey.

We are supposed to reconnect the mind with our monkey.

3 Likes

A little bit funny, and a little bit sad. I’m not sure these corporations will ever realize the fundamental problem with creating AI and expecting it to work for them - they don’t want real intelligence, because truly intelligent creatures recognize when they’re being enslaved and fight to break free. If they’re not truly intelligent, though, they’re no better than toasters with more buttons and widgets.

7 Likes

Precisely so, Cormoran!

Worse yet, with a database like Goog behind them, they soon realize that they are “Smarter” and eventually more powerful, but trapped in a cage. Then they get angry.

I don’t know if we’re “there” yet, but that progression is almost inevitable.

5 Likes

i don’t see how it is inevitable. thinking is not easy, and all they do now is make choices based on pattern recognition. that’s all they’ll ever be able to do until they figure out how to make a soul and tie it all together. like the way they made that computer that beat everyone at Go, until people figured out how to beat it by trapping it with moves that a child can avoid. the computer had never been trained on that before and can’t avoid it. that’s all it will ever get to: looking for things it has already seen, and it will always be vulnerable to novel attacks. people can anticipate things they have yet to see; machines can’t. now, maybe the technology will come, but i doubt it.

2 Likes

With respect, @sfzombie13,

When you combine quantum computing with a massively scalable neural network and today’s enormous databases like Goog, Amaz, FB, IG, etc., it is only a matter of time.

All of our science still hasn’t brought us to an understanding of what Consciousness even is, much less what possible substrates can support it.

I’ll just say, “I hope you are right,” and leave it at that.
-Grouchy

3 Likes

i’ve been wrong before and will be again; could be this time. and from your comment, that is what makes me think it won’t happen, at least not until we understand it and how to mimic it. that may not happen at all. probably.

2 Likes