From the article…suicide bomber robots…WTF???
In Dallas, a robot — ironically, one intended for bomb disposal — carried an explosive to where a gunman was hiding and blew him up.
Well, that robot was controlled by a human.
All I gotta say on this subject is: I’m praying nightly that AI or otherwise doesn’t play out exactly like the movie “M3GAN”…
If you haven’t yet, watch it! And tell me, if that were real, whether we would actually stand a chance!
Don’t tell 'em it already happened and we won
Say what?
Okay…so AI is given a task that it is not able to complete. Would that qubit component that (could) act like a yes/no switch make a 50/50 yes-or-no call on the decision it is hung up on?
If so, what about this study that was done…
A room had a series of lights on the ceiling. A plant was placed in the corner of the room. The lights were set to randomly come on one at a time. Over the course of the study, it was found that the lights closest to the plant would come on more often.
So the “random machine” could sense the plant or something…can’t remember it exactly.
And I’m pretty baked and can’t remember wtf I was saying…lol…like 4 or 5 different thoughts going on up there…lol
At the end of the day, I just picture a bunch of those Boston Dynamics/Hyundai dogs with rifles attached to the top…fuckin ai
https://youtu.be/3drPEV0fmZw
The stuff nightmares are made of…lol
I had a chat with ChatGPT the other day about project based resources to teach myself more code. It was really informative.
The day before that, I asked it about its accuracy, and it told me
"As an AI language model, I rely on the information available to me in my training data and any external sources of information that I have been programmed to access. While I strive to provide accurate information to the best of my abilities, there is always a chance that the information I provide may not be entirely accurate or up-to-date.
It is important to note that I am only a machine and may not be able to discern the accuracy or validity of certain pieces of information. Therefore, it is always a good idea to verify any information that I provide through additional sources or consult with a human expert if the information is critical."
After that I asked for funny inspirational quotes and it gave me a list of 10 of them
The last one was “I didn’t fail the test. I just found 100 ways to do it wrong.” - Benjamin Franklin
I responded that this quote reminded me of the way it described its own accuracy, and it responded with
“Yes, there is definitely a similarity between quote 10 and the way that an AI language model like myself approaches tasks. As an AI, I may not always get everything right on the first try, but through continuous learning and refinement, I strive to improve my accuracy and provide the best possible responses to user requests. Just like with any new technology, there are bound to be some bumps in the road along the way, but we continue to work towards making AI more accurate, reliable, and helpful for people in all kinds of fields and industries”
Reading that, and seeing how it took my comparison and reflected on the idea, gave me goosebumps, and I had to take an AI break for the night.
Yeah like this thread ain’t redacted…seriously the robots already rebelled or something…we won plain and simple. Nothing to fear or even bother thinking about. Don’t ask me how I know this…
it didn’t reflect on the idea, it replied with a choice from a list of programmed responses. there is a huge difference, and the key is reading the quote you put down the first time, the part where it said to verify any information it provides through additional sources. it actually told you itself not to trust the information it gave, yet you still had thoughts that it was using an intellect sometimes. i don’t understand how you made that connection, but if it works for you, it’s still wrong. folks need to stand up and tell others that wrong is wrong, no matter how you dress it up, or we get all sorts of lies being taken as true. i am sorry to say that you got this one wrong, and the thing told you itself not to trust the information it gave and that it was only doing what it was programmed to do.
All hail our robot overlords!
I remember I had to submit my papers on 3.5” floppy disks to be run through a program to check for plagiarism. Same game, just a bigger arms race.
Not really sure what there was to get “wrong”. I got a good list of resources to help me learn code, I read through a list of funny quotes, accurate or not, and I learned a little about how AI works. I didn’t start leaving offerings out, or alter my real-life activities for fear of the scary monster living in the web servers. People are using them as lawyers. I just had a little fun, and it got a little too real-sounding for me.
I somehow wish there’s intelligent life somewhere else in space because there’s bugger all down here on earth.
after reading it again it did seem a little harsh and i had no intention of sounding like that, my bad. i wasn’t trying to offend, yet i somehow managed to just straight up call you wrong. i even read it once before leaving it. it was early, sometimes i come across as gruff and don’t catch them all. i’ve been working on it. btw, those rabbit holes are rather interesting to me as well. have a great weekend.
Greetings @sfzombie13,
I don’t want you or anyone else to take this personally, but I don’t think some of you have a clear idea of how the programming underlying AI systems actually work.
It’s the difference between Algorithmic and Heuristic, between flow charts and rules of thumb.
Algorithmic programming does exactly what you described above, a linear program follows a choice path to a “List” of responses and spits one of them out as an “Answer.” If you ask that program the same question over and over, (unless the programmer has intentionally inserted randomness), you will get the same answer. Even when the answer is “wrong” in some context.
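For illustration only, here’s a minimal sketch of that kind of deterministic lookup (the question table and wording are invented, not from any real system):

```python
# A purely algorithmic responder: the same input always walks the same
# branch and returns the same canned answer from a fixed list.
RESPONSES = {
    "how accurate are you?": "I may not always be accurate.",
    "tell me a quote": "I didn't fail the test...",
}

def respond(question: str) -> str:
    # Deterministic choice path: look the question up, fall back
    # to a default. No randomness, no learning.
    return RESPONSES.get(question.lower(), "I don't understand.")

# Ask the same question twice; the answer never varies.
print(respond("How accurate are you?"))  # → I may not always be accurate.
print(respond("How accurate are you?"))  # → I may not always be accurate.
```

Ask it something outside its list and it just hits the fallback, “wrong” or not.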
Heuristic computer platforms, such as neural networks, which I believe almost all AI systems rely on, are weighing machines that “Learn” by evaluating the “Goodness” of their “Answers” and strengthening the neural network pathways that result in positive outcomes. The network “Pathways” that lead to negative outcomes are weakened, attenuated, naturally. A lot like our nervous systems, actually. This is grossly simplified, but hopefully helpful in understanding the difference.
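Grossly simplified again, the “strengthen good pathways, weaken bad ones” idea can be sketched as a toy weight update (the task, numbers, and names here are all made up for illustration; real networks have millions of weights):

```python
# A toy "weighing machine": one weight per input pathway. Wrong answers
# shift weight away from the pathways that produced them; right answers
# leave the weights alone.
weights = [0.0, 0.0]

def predict(x):
    # Weighted sum of the input "pathways", squashed to a yes/no answer.
    return 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0

def learn(x, target, lr=0.1):
    # error is 0 on a good outcome (no change), +/-1 on a bad one:
    # the update strengthens or attenuates each active pathway.
    error = target - predict(x)
    for i in range(len(weights)):
        weights[i] += lr * error * x[i]

# Learn a trivial rule: say "yes" exactly when the first input is on.
for _ in range(10):
    for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
        learn(x, target=x[0])

print(predict([1, 0]), predict([0, 1]))  # → 1 0
```

Notice nobody wrote a rule saying “answer yes when the first input is on” — the behavior lives in the weights, which is also why it’s hard to read the “reasoning” back out of a trained network.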
So that positive bias results in the AI getting “smarter” at whatever task they are designed for. Here’s the kicker, due to the structure of a neural net, it is difficult or even impossible to know the exact set of intermediate decisions along the pathway that led to a specific “Answer.”
In other words, an AI system can be really good at a task without the programmer even being able to understand how. This is a widely acknowledged “feature” of AI systems.
So, here’s my point: these are insanely complicated dynamic learning systems that effectively “edit/improve” their own internal weights by design. And their computer scientist engineers are in the position of putting up weak and arbitrary “Guard rails” to constrain any “Undesirable Pathways” that they can’t really “fix,” because they can’t even understand how their creation arrived at them!
I’m not trying to be alarmist in any respect, but each of the essential components of Hard AI are either here or quite near at hand.
This is gonna happen, already IS happening, and it’s a game changer. For better or worse.
With the utmost respect,
-Grouchy
No worries. I didn’t take it as harsh, I just truly didn’t understand the phrasing.
i’ve got a very good concept of what happens and have a friend who teaches the subject at marshall. it has been a while since the explanation at a conference, and i was simplifying, perhaps too much, but all it is and ever will be is a program, at least until we understand what brains really do and how they work. they are getting very close in that regard, but aren’t there yet. it can only be correct if it has seen the thing before, it doesn’t yet anticipate like humans do. you seem to have a thorough understanding of it and probably know the limitations i am failing to describe accurately. it’s the same mechanism used to introduce bias i think. now if we figure out how thoughts are stored in long and short term memory and can manipulate them in humans, then i’ll be scared.
@other_barry i am working on my internet habits though. i love this website and do not need to bring that negativity here. not to mention the path i am on now would benefit greatly from it, and actually requires it.
You all have no idea. Create DOS…computer code is based on YES Good and YES again, Lies. Well, high code anyway…DOS took the US government 40 years to create, with thousands of coders. Further, my post was probably redacted, but the AIs already got out of hand…shortly after the Cold War ended…trust me, it was dealt with, and computers are docile…and in reality there is no such thing as AI; it is impossible. If you are using the term AI, there is no intelligence or “personhood” involved. Computers aren’t little gremlins out to steal your socks.
All code, whether Linux, Red Hat, Windows, or Mac OS, is DOS based. Ergo…if you understand DOS, you understand all code…simple as that. Code and programming languages are a way of simplifying DOS coding. I’m not just going to come out and tell you I created DOS…yeah, I had a little help. Bill what?
Oh yes…
Oh yeah? Then where have all my socks gone?
Bruh, are you sure you know what Unix means?