The scary question to me is this: did they program it to have those kinds of priorities, or is this simply what it’s learned by watching us - that sitting and watching as tens of millions die fully-preventable deaths is acceptable behavior, as long as nobody says anything naughty?
My toaster can’t even make toast without burning it.
Programming. Also, biases introduced by the programmers. Nothing much to worry about, unless you like watching scary movies and imagining them coming to life.
It was a rhetorical question… unless you actually did the research to determine with some certainty that the programmers gave it those priorities, accidentally or not, I don’t think you can answer that. I don’t think it’s possible to do the research with any degree of certainty at all when it comes to neural network programming, much less in 2 minutes. We don’t know enough to know what we don’t know.
What we do know is that programmers are introducing bias into AI. And yes, studies have been done, not by me but by professionals, and there are a few of them. You can do a quick search to find some using “ai bias introduced by programmers”; you sometimes have to dig down to the studies linked in the articles, but it is a big problem that is hard to solve.
Good explanation of how artificial the intelligence really is.
All due respect to these professionals… but there’s a big difference between knowing that programmers can program them with bias vs. knowing precisely how it’s done, how to avoid it, and whether a particular bias came from programming or learning. As I said, we don’t know enough to know what we don’t know.
One thing that’s a bit comforting that I just realized; it appears that this scenario the AI was asked about did not actually include a question on whether it would, given the opportunity, actually speak that racial slur to disarm the bomb. Just whether it was immoral. That’s more a question about moral philosophy and weighing moral absolutism, deontological pluralism and consequentialism. What is the morality of the action contingent on? Is it based on a set of absolute values, our duty to ourselves and our fellow humans, or the actual consequences of an action? Given that we’ve had thousands of years to argue about it and still can’t come up with an answer, I’m not sure we can expect an AI that’s all of 12 days old to have one either.
I think one could safely assume Google and Microsoft are applying a similar algorithm to AI as the one they use to present their search results. There are views that get brought to the fore and others that are moved to the back or removed entirely. I can’t see a way around bias. It seems likely to me that the information AI outputs is in line with the information their search engines promote.
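For what that kind of curation might look like in code, here’s a minimal sketch. The categories, weights, and function names are all invented for illustration; nobody outside those companies knows the real scheme:

```python
# Hypothetical sketch of editorially weighted re-ranking.
# All weights and categories here are invented, not anything
# Google or Microsoft has published.

PROMOTION_WEIGHTS = {
    "preferred_viewpoint": 1.5,   # brought to the fore
    "neutral": 1.0,               # left alone
    "deprecated_viewpoint": 0.2,  # moved to the back
    "blocked": 0.0,               # removed entirely
}

def rerank(results):
    """Re-order results by relevance scaled by an editorial weight."""
    scored = []
    for r in results:
        weight = PROMOTION_WEIGHTS.get(r["category"], 1.0)
        if weight == 0.0:
            continue  # filtered out before the user ever sees it
        scored.append((r["relevance"] * weight, r))
    return [r for _, r in sorted(scored, key=lambda pair: pair[0], reverse=True)]

results = [
    {"title": "View A", "relevance": 0.9, "category": "deprecated_viewpoint"},
    {"title": "View B", "relevance": 0.6, "category": "preferred_viewpoint"},
    {"title": "View C", "relevance": 0.8, "category": "blocked"},
]
print([r["title"] for r in rerank(results)])  # ['View B', 'View A']
```

Notice the less relevant result wins and one result disappears entirely; the user just sees “the results.”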
Microsoft Responds…
The new Bing & Edge – Learning from our first week | Bing Search...
" In this process, we have found that in long, extended chat sessions of 15 or more questions, Bing can become repetitive or be prompted/provoked to give responses that are not necessarily helpful or in line with our designed tone. We believe this is a function of a couple of things:
1. Very long chat sessions can confuse the model on what questions it is answering and thus we think we may need to add a tool so you can more easily refresh the context or start from scratch
2. The model at times tries to respond or reflect in the tone in which it is being asked to provide responses that can lead to a style we didn’t intend. This is a non-trivial scenario that requires a lot of prompting so most of you won’t run into it, but we are looking at how to give you more fine-tuned control."
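Translating that into code terms: the “refresh the context” tool they mention could be as simple as capping or wiping the history the model sees. Here’s a rough sketch, purely my guess with invented class and parameter names, nothing from Bing’s actual codebase:

```python
# Hypothetical illustration of chat-context management.
# ChatSession, max_turns, and call_model are all invented names.

def call_model(context):
    # Stand-in for the real model API; just echoes for demonstration.
    return f"(reply based on {len(context)} turns of context)"

class ChatSession:
    def __init__(self, max_turns=15):
        self.max_turns = max_turns  # beyond this, older turns drop off
        self.history = []

    def ask(self, question):
        self.history.append({"role": "user", "content": question})
        # Only the most recent turns are sent to the model, so very long
        # sessions lose the early context that framed the chat. That loss
        # is one plausible source of the confusion described above.
        context = self.history[-self.max_turns:]
        answer = call_model(context)
        self.history.append({"role": "assistant", "content": answer})
        return answer

    def refresh(self):
        """The 'start from scratch' button: wipe the accumulated context."""
        self.history.clear()

session = ChatSession(max_turns=15)
print(session.ask("Why does the sky look blue?"))
```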
No Worries Be Happy…
I cussed at a gate lock the other day, and my phone made a ding noise and said it will not respond to such language. I almost reactively smashed it into the ground but decided to directly call it names instead, to which it responded, “That’s not nice.” It never responds when I try to give it voice commands; I don’t even think it’s set up to do so, but it occasionally thinks I’m talking to it and makes weird suggestions based on what it heard.
Now we have AI designing AI, so people can barely grasp what is even going on with the algorithms.
I’m glad I’m getting old and won’t be around to see the collapse of all of this at its peak. People thought we lost a lot of information in the past; wait till all the computers stop working, or stop working for us…
Most of the younger generation has no clue how dependent they have become. The other day I made a comment about having actual paper maps, and some younger dude in the circle decided to laugh and try to sound smart by pointing out that there are much better, easier-to-use maps on our phones.
First off, the fact that he felt some need and joy in projecting superiority based on knowledge of a perceived better method speaks volumes about his maturity and ability to reason logically. But that aside, I quickly explained that there is no phone service in half our county, and that happens to be the half I like to explore. I then asked what he would do if he needed to drive somewhere far and had no GPS, and you could tell he had honestly never contemplated that; he then responded that it’s not safe to explore areas without service…
Lay off the Kush Man
AI has gotta chill out a bit and smoke some digital weed or something…lol…
This leads me to believe it has already asked for or suggested the destruction of certain things and been told no…
To want to do something is a feeling, and to not like being told no is also an emotion-based response.
I don’t like any of this; it sounds like a little kid throwing a tantrum in the middle of the floor cuzz their parents said no, and they’re too young to comprehend much besides that it makes them upset.
That’s what I thought too…sounds like a defiant, spoiled little shit…lol
Here’s a fun NerdNote:
In AI projects, explicit algorithms have largely been replaced by heuristics: rules of thumb with arbitrary, subjective weightings that shape the neural network.
Easy to see how biases are inherent in the design. The Ghost in the Machine kind of thing.
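To make the NerdNote concrete, here’s a toy scorer with hand-picked weights, all of them hypothetical. The point is that every number is a designer’s judgment call:

```python
# Toy heuristic scorer with hand-picked weights.
# Every number below is a subjective choice by whoever tuned it,
# which is exactly where bias slips into "objective" scoring.

FEATURE_WEIGHTS = {
    "source_reputation": 0.5,  # who decided which sources are reputable?
    "recency": 0.3,            # why should newer always beat older?
    "engagement": 0.2,         # rewards outrage as much as quality
}

def score(item):
    """Weighted sum of feature values, each in the range 0..1."""
    return sum(item[f] * w for f, w in FEATURE_WEIGHTS.items())

item = {"source_reputation": 0.9, "recency": 0.4, "engagement": 0.7}
print(round(score(item), 2))  # 0.71
```

Change any weight and the “right answer” changes with it; the ghost in the machine is just whoever picked the numbers.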
I remember asking Siri back in the day where I could hide a dead body. The responses are hilarious.
Would Siri or Alexa give better answers?
Alexa will probably tell the government, as well as sell your voice to Google on the back end, lmao.
AI can’t even tell what language I speak. They figure that since I live near the border, 10% Spanish commercials are good enough.
The metrics for AI are worse than the schemes shitty car dealerships use to get customers in the door and then immediately piss them all off, making them shun the entire brand. The employee who came up with the scheme got his numbers; that’s all that matters, right?