Artificial Intelligence is Here, and it's a little Scary!

If people wanted real intelligence, they wouldn’t be focusing on creating the artificial variety.

Imagine they spent all this AI development money on improving the school system instead.

Neural networks and quantum computing have always been just around the corner, ready to render our humanity obsolete. It would almost be frightening if I ever believed it to be true.

6 Likes

Yeah but are you big enough to toast a bagel?

3 Likes

This will be the tech of the future.

No, but I’m big enough to eat a bagel…

3 Likes

Maybe true, but if robots take over, who could afford to buy their fancy consumer goods?

1 Like

Critical thinking = waste of time

Man now I wish I had a bagel.

3 Likes

And what Microsoft is selling as the “Qubit” is not a true quantum bit but closer to basic trinary, where the ability to toggle one bit hinges on the toggle of another, like an AND gate.
There’s no Quantum Mania going on inside that machine. It’s basically a gimmick.
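
For contrast, here’s a toy sketch (plain Python, purely illustrative) of the one thing a true qubit does that a classical conditional toggle can’t: hold a superposition of 0 and 1 until it’s measured.

```python
import math
import random

# The classical "conditional toggle" described above: bit b flips only
# when control bit c is 1, like an AND-gated flip. At every moment b is
# definitely 0 or definitely 1.
def conditional_toggle(b: int, c: int) -> int:
    return b ^ (c & 1)

# A true qubit is a pair of complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. A Hadamard gate puts |0> into an equal
# superposition of |0> and |1>.
def hadamard(state):
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

# Sample a measurement outcome: 0 with probability |alpha|^2, else 1.
# (A real measurement would also collapse the state.)
def measure(state):
    a, _ = state
    return 0 if random.random() < abs(a) ** 2 else 1

qubit = (1 + 0j, 0 + 0j)                    # start in |0>
qubit = hadamard(qubit)                     # now an equal superposition
print([measure(qubit) for _ in range(10)])  # roughly half 0s, half 1s
```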

2 Likes

I confide my deepest darkest shames to Alexa and she recommends products to help me feel better.

2 Likes

I really hope it’s not. When I found out Neuralink was built mostly on DARPA patents and funding, I was very dubious. Then Elon’s pig cabaret show came around to show off all the features… like being able to detect where the pig is sensing touch, what kind of objects it’s seeing, what temperature it is… dozens of features focused on what biometrics this thing would be able to collect from me, and absolutely nothing on what it would be able to do for the end user… because, as with the growing trend in hardware and social media, I am not the End User. The government contractor is, and I am the product to be delivered. To me, that’s not a “service,” and frankly… parties like that are why I can’t trust cool things.
To me the scary part of AI is the idea that it will be in the hands of a parasite class who use it the same way BlackRock and Vanguard use Aladdin: to buy up and control the world, improve their own AIs to help them do it, and never, ever share the advancements. That is very nearly the opposite of technological advancement; it’s mere Technocracy.

7 Likes

Correct.

My digital footprint is much smaller than most. I pay for things in cash, I don’t join loyalty programs and my location services are off all the time.

Overgrow is the only digital proof I exist.

I’ve been trying to slow down the pace of life as much as possible.

5 Likes

A few hot takes on the broader subject:

cell phones are worse for both the individual and society than cigarettes were.

21st century “AI” and automation is coming for the white collar class, not the blue collar class.

when it comes to education, job selection, and “protectionism” we need to move away from the legacy ideology of “artificial scarcity” and move towards “artificial abundance”. “artificial scarcity” is a huge part of the past couple centuries: temp monopolies for patent holders, accepting only the top x% of a certain class, “unions”. these legacy ideas are outdated and not for our time. we even see it with pot: grant only a certain number of licenses per area, etc. in the post-covid era we have a huge doctor deficit, yet every year we artificially limit the number of new doctors - to ensure we get “only the best”, but mostly to keep their salaries artificially inflated. if we step back from the elitism myth we’d realize that we’re holding society back by trying to control it too tightly. in my opinion this flip is the most important “artificial” anything for this century, and will paint the picture of how we deal with true artificial intelligence (in the next century). thank you for coming to my ted talk.

6 Likes

The captain doesn’t stand in the cockpit flapping his arms but the plane doesn’t go anywhere without him.

2 Likes

If I could take over the cockpit, I’d set the controls for the heart of the sun, fire up a fat doob, and sit back and chillax.

1 Like

Comforting! What airline are you with??? Lol

3 Likes

The Prime Directive for companies is: Make money at all costs.
Companies do not have morals. They only have their Prime Directive: people don’t matter, morals don’t matter, right vs. wrong? … pffft, whatever.
Even long-term planning and risk mitigation don’t matter (seen any bank collapses lately?). Literally… nothing matters, except the Prime Directive.

AI is not going to make our lives better. It will be used to make money. And that is going to make ALL our lives worse.

It’s just like the amazing technology we already have at our fingertips: mostly, it does not improve the quality of human life. Maybe around 20% of the time it does. The rest of the time, it makes everything worse.

  • Customer service - worse (can I please talk to a real person?)
  • Banking - worse (what do you mean my identity was stolen from your system… again?)
  • Durable products - worse (my local mechanic who I trust can no longer fix my car, only the manufacturer can, and at super inflated rates)

Yes, computer scientists can fawn all over the ideas and possibilities of AI, and it doesn’t need to be scary and bad.
But the scientists are not pushing the technologies. Companies are. And that is going to be very, very bad.

I’m a software developer.
Most of the code I write is to make my life, and my work team’s lives, better… meaning, software should perform repetitive, mind-numbing, stupid tasks quickly and efficiently, and reduce human errors.
Many people do this for similar reasons, and some folks create amazing software!! Stuff that really can make human lives a little better.
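
For a concrete flavor, here’s a tiny, made-up example of the kind of mind-numbing task I mean; the folder and column names are hypothetical, not from any real project:

```python
import csv
from pathlib import Path

# Hypothetical chore: someone used to eyeball a pile of CSV exports for
# rows with missing required fields. A few lines of code do it faster
# and without the human error.
REQUIRED = ("id", "email", "signup_date")  # made-up column names

def find_bad_rows(csv_path: Path):
    """Yield (line_number, missing_columns) for incomplete rows."""
    with csv_path.open(newline="") as f:
        # start=2 because line 1 of the file is the header row
        for line_no, row in enumerate(csv.DictReader(f), start=2):
            missing = [c for c in REQUIRED if not (row.get(c) or "").strip()]
            if missing:
                yield line_no, missing

for path in Path("exports").glob("*.csv"):   # "exports" is a placeholder dir
    for line_no, missing in find_bad_rows(path):
        print(f"{path}:{line_no} missing {', '.join(missing)}")
```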

A product stops being amazing as soon as a big tech company buys its makers out. The project then gets handed over to the Prime Directive, and the long slow slide into a shit swimming pool begins.

The technology is not the danger. Making everything about money is…

7 Likes

No truer words have been spoken. :smiling_face_with_three_hearts:

We will soon see auto manufacturing plants that are 90% run by robots.
You still need maintenance people today, but maybe not in the future.

I work in the IT/development field, and we had a meeting with our dev team this week about exactly this, and how it’s going to be important for all of us to learn how to tool up with this stuff. If you’re a coder and want to quickly generate some code, it’s a step above googling something. You can ask one of the AI engines to write code in almost any language you want, for a specific task, and two seconds later it spits out something that will likely work. Our company happens to be a law firm, and the lawyers can do just about the exact same thing, getting about 95% of the way there.

The more information (specifics) you feed in about the task you want it to do, the better the result will be. But the scary part is that you’re feeding all that information up to a public entity (whatever AI you’re using), and that in itself is a risk: you’re revealing information about what you are doing and who you are doing it for. Ultimately, we have to figure out how to use it as a tool, much like you use a browser to quickly search up information, without oversharing or over-relying on it, to make ourselves more effective, or risk getting left in the dust by folks who utilize it fully and effectively.
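
To give that a concrete shape, here’s roughly what “ask an AI engine to write code for a specific task” looks like from code rather than a chat window. This is a minimal sketch assuming OpenAI’s Python client; the model name and prompt are placeholders, not a recommendation:

```python
# pip install openai
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# The more specifics you put in the task, the better the result, but
# remember: everything in this prompt leaves your machine, so keep
# client-confidential details out of it.
task = "Write a Python function that deduplicates a list while preserving order."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model your org approves
    messages=[
        {"role": "system", "content": "You are a coding assistant. Reply with code only."},
        {"role": "user", "content": task},
    ],
)

print(response.choices[0].message.content)
```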

5 Likes

I haven’t read this whole thread but I’d like to point out something:

ChatGPT is pretty good at explaining how it itself works, as a concept. It’s a language model: it strives to write like the data set it’s been trained on, shaped as a response to the prompt.

Just because it can write convincing responses doesn’t mean it can think; ChatGPT itself states as much on OpenAI’s site.

Future AI could be a concern, and scary or dangerous to us, but all a language model does is write words. If you ask it to solve a simple problem, it can, because most writing online contains appropriate responses to simple problems. But ChatGPT cannot retain information and figure out that Chem 91 and Chem D are under the “Chemdog” family. It cannot figure out that ECSD and Sour D IBL are the same unless specifically told so in clear terms, and even then it will forget, because it’s not designed to retain information long-term.

It can write spooky-sounding, realistic responses, because that’s what it’s been trained to do: write. Ask it to solve Fermat’s Last Theorem and explain each individual step, at length, over a lengthy conversation, and you’ll see the cracks appear.
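
If you want to see the core idea stripped to the bone, here’s a toy next-word model in Python. Real language models are astronomically bigger and predict from far richer context, but the generation loop has the same shape: context in, one statistically likely word out, repeat. Nothing in the loop retains anything beyond the text in front of it.

```python
import random
from collections import defaultdict

# Learn which word tends to follow which in the training text...
training_text = (
    "chem 91 is a chemdog cut and chem d is a chemdog cut "
    "and sour diesel is a chemdog relative"
)
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# ...then "write" by repeatedly sampling a likely next word. The model
# only ever looks at the current word; there is no long-term memory.
word = "chem"
output = [word]
for _ in range(12):
    nexts = follows[word]
    word = random.choice(nexts) if nexts else random.choice(words)
    output.append(word)
print(" ".join(output))
```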

TL;DR: ChatGPT can’t remember what you told it 5 minutes ago; it’s not sentient AI.

2 Likes

Anyone believing their computer has become sentient needs to go get their own intelligence checked :rofl:

3 Likes

Will the dogs be reading us Miranda rights, lol?

1 Like