Artificial Intelligence Is Here, and It's a Little Scary!

if you read it again, i was saying that they have the choice to either allow it to push hate or not to push hate. they choose to push hate. that is wrong, period.

is that better? happy thanksgiving to you as well.

i agree with the last part 100% and will also refrain from anything further. as for the first part, read the first part of this comment. wrong is wrong, no matter how you try to justify it. oh, and i wasn’t arguing, just having a discussion. but as it is a little off topic now i will say one last thing: happy thanksgiving! have a great weekend.

must be me and/or something in the air. i am just not having a very good day today it seems despite things going surprisingly well. my apologies for any misunderstanding, i’m not trying to be obstinate, it just seems to be turning out this way. perhaps it’s time to shut down and go to bed.

1 Like

An agency created an AI model who earns up to $11,000 a month because it was tired of influencers ‘who have egos’

1.8k

Sawdah Bhaimiya

Fri, November 24, 2023 at 7:36 AM EST·2 min read

Aitana López is racking up fans on Instagram and Fanvue. (Image: The Clueless Agency)

  • Aitana López is an AI-generated creation by a Spanish agency that grew tired of booking real models.
  • López can make just over 1,000 euros, or $1,090, per advert and is featured in images on Fanvue.
  • Fanvue’s CEO previously told Insider that AI-generated characters would thrive and become common.

A Spanish modeling agency said it’s created the country’s first AI influencer, who can earn up to 10,000 euros, or $11,000, a month as a model.

…if you are really interested in what’s happening with AI, this article has some new details on the recent events at OpenAI.

https://hotair.com/jazz-shaw/2023/11/24/openai-tried-to-fire-altman-because-he-was-about-to-wake-up-the-monster-n594421

Earlier this week, we discussed the way that the board of directors at OpenAI attempted to fire CEO Sam Altman and how a revolt at the company led to his reinstatement and the removal of the board. It was a remarkable example of employees overriding the will of the governing board and changing the course of the company’s direction. What wasn’t clear at the time was the reason that the board tried to remove the founding brainchild of ChatGPT in the first place. But now more indications of their reasoning have come to the surface. It wasn’t a case of different “visions” for the corporation’s future, but apparently, a fear that Altman was on the verge of doing something that could potentially have catastrophic consequences for humanity. Altman and his team had made a breakthrough with a project known as Q* (pronounced “Q Star”) that would allow the Artificial Intelligence to begin behaving in a way that could “emulate key aspects of the human brain’s functionality.” In other words, they may be close to allowing the AI to “wake up.” (Daily Mail)

OpenAI researchers warned the board of directors about a powerful AI breakthrough that could pose threats to humanity, before the firing and rehiring of its CEO Sam Altman.
Several staff researchers sent a letter to the OpenAI board, warning that the progress made on Project Q*, pronounced as Q-star, had the potential to endanger humanity, two sources familiar with Altman’s ouster told Reuters.

[Snip]

Researchers observed that the AI system employed methods akin to human learning, utilizing strategies similar to those the human brain applies in solving complex problems.

Remarkably, the artificially intelligent brain-like system began to exhibit flexible coding, where individual nodes adapted to encode multiple aspects of the maze task at different times. This dynamic trait mirrors the human brain’s versatile information-processing capability.

This is both amazing and rather frightening at the same time. It’s also something almost entirely new. Machines have always run code. The code either works and produces the desired outcome or it doesn’t. But suddenly these developers are witnessing “flexible coding.” The system is doing things it wasn’t specifically programmed to do. The earlier, large library models of AI such as the first versions of ChatGPT would search their databases and see if someone had already published a solution to the maze. If the answer didn’t exist in the library, it wouldn’t be able to deliver it.
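The contrast the post is drawing can be sketched with a toy example (purely illustrative, and nothing to do with OpenAI's actual systems — the maze, the "library," and both solvers are invented here): a lookup-based "solver" fails on any maze it has never seen, while a general search procedure works one out from scratch.

```python
# Toy contrast: retrieving a stored answer vs. computing one.
from collections import deque

def library_solver(maze_id, library):
    """Return a memorized solution, or None if the maze was never seen."""
    return library.get(maze_id)

def search_solver(maze, start, goal):
    """Breadth-first search: finds a path through any maze, seen or not."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(maze) and 0 <= nc < len(maze[0])
                    and maze[nr][nc] == 0 and (nr, nc) not in visited):
                visited.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # no path exists

maze = [[0, 1],
        [0, 0]]                                  # 0 = open, 1 = wall
print(library_solver("unseen-maze", {}))         # None: not in the library
print(search_solver(maze, (0, 0), (1, 1)))       # a path, found from scratch
```

The first function can only repeat what it was given; the second derives an answer it was never shown, which is the distinction the post is gesturing at.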

Now the bot is “figuring things out.” That certainly sounds a lot like “thinking” to me. Does this cross the line from a very complex machine to an actual intelligent entity? (Albeit a non-human intelligence.) We may not know the answer until the day that one of Altman’s people assigns a new task to ChatGPT and it responds by saying, “That’s stupid. Why don’t you figure it out for yourself?” Or perhaps even worse, “Don’t bother me. I’m working on something else.”

Buckle up, campers. The ride may be getting a bit bumpy from here on out.

4 Likes

Or it could just be someone wanted more $$ and they started as a nonprofit. I see no evidence for this AGI theory.

The tech revealed to the public is usually already old and outdated.
We are not told and shown everything these folks are doing; some things are withheld for various reasons.

Something else to think about…
Moore’s Law states that the number of transistors on a microchip doubles every two years. The law claims that we can expect the speed and capability of our computers to increase every two years because of this, yet we will pay less for them. Another tenet of Moore’s Law asserts that this growth is exponential.

4 Likes

“Moore’s law” is outdated now. Also, why wildly speculate when we have literal evidence and tools right now?

Aliens came and gave us AGI which is why Sam Altman was fired. The aliens. I don’t have evidence but I have a feeling ok so believe me please.

1 Like


Google quote…
**Moore’s Law is not dead**. While it’s true that chip densities are no longer doubling every two years (thus, Moore’s Law isn’t happening anymore by its strictest definition), Moore’s Law is still delivering exponential improvements, albeit at a slower pace.

Moore’s Law will be obsolete by 2036.

2 Likes

AI is only smart if it is held to human standards; we treat learning as equivalent to having knowledge, and knowledge as the key to wisdom. 8 billion people are well on the way to destroying human existence on this planet, nature is fighting back, and chaos rules the universe. AI is nothing more than an evolutionary step; everything that has a beginning, good or bad, will have an end.

The human race has only been around for a tiny fraction of earth’s time spinning around its star, and AI has become a money maker with no understanding of its own end result. Only people crave money; they lie, cheat, and steal for it in the belief that it makes the world go round. AGI is no different than the people who wrote its lines of code, and every computer, quantum or otherwise, will die long before the people.

5 Likes

Dang. I was hoping to read @Heliosphear’s thoughts on this but he decided to not post… lol
Not pushing you bro, just yanking your chain a little. lol

3 Likes

:rofl:


I love using AI art programs.

5 Likes

Me too.


Pushing the limits.

3 Likes

Never had much faith in the reporting from this paper… and apparently I’m not the only one. :roll_eyes:

The Daily Mail has been criticised for its unreliability, its printing of sensationalist and inaccurate scare stories about science and medical research,[17][18][19][20] and for instances of plagiarism and copyright infringement.[21][22][23][24] In February 2017, the English Wikipedia banned the use of the Daily Mail as a reliable source.[25][26][27]

If AI does wake up and decides to kill us all, here’s what I’m wondering personally… am I gonna be any more or less dead than if Kim Jong Un wakes up and decides to kill us all, or if someone puts the front of a bus through my living room, or I happen to disturb a very confused copperhead that’s way out of its comfort zone? If they’re all the same, then these are all dangers that we can’t control, that are all just part of the world. AI’s a new one, but no less dangerous or out of our control. It also has the potential to be a useful tool, so just like other useful tools that could destroy us if things go wrong - nuclear power comes to mind - we’ll explore it and use it and do our best to harness it. If it does destroy civilization, but happens to put all those worthless, no-talent Kardashians out of business and in the poor house first, I’m not sure if I’d consider that an uneven trade. :stuck_out_tongue:

4 Likes

Quantum computing and other forms of computing will make this obsolete. This is only for transistors. As algorithms improve, they will run even better on systems that we already have. Coding will become more efficient.

1 Like

Ok, good point.
Transistors aside for a min.

My point was that technology is advancing faster and faster every day and grows exponentially.

google quote…
Since the invention of computers, technology has been advancing rapidly. Every technological innovation opens the door to a myriad of others that will build on it, creating exponential growth.

1 Like

That can be easily evidenced by the fact that last year the AI image generators were shitty, and now I can make that video of Tom Cruise as an indigenous woman talking to a member here, in his own voice, in less than 15 minutes on a shitty PC using about 4 or 5 algorithms and YouTube.

Imagine what 10,000 super computers could do.

3 Likes

bolding mine
wtf does that mean? if it isn’t doubling in the set time, i thought it was 18 months but too lazy to look, it’s broke. typical double speak there.

4 Likes

If you wanna say it is broke, I will not argue that point.
Anyway… putting Moore’s law aside.

1 Like

but it’s not, as it is slowing down since we’ve reached the physical limit.

3 Likes

Do we have to turn every thread into a guy with a horn in the town square? There are plenty of actual things to talk about here, including real progress in the last year. EVERY aspect of AI has become more efficient. You can read the papers which explain these methods and utilize them yourself. It is not conspiracy; it is something you can do yourself.

https://piggyback-color.github.io/ ← oh look an improved way to colorize anything. (not fully released yet)
GitHub - AI-Guru/music-generation-research: A straightforward collection of Music Generation research resources. ← here’s an older music generator now

Stable Video Diffusion is going to be great when it is fully released soon (it’s just a demo version for now). It can take any image and make a video clip out of it.

2 Likes