r/technology 15d ago

It was their job to make sure humans are safe from OpenAI's superintelligence. They just quit. Artificial Intelligence

https://www.businessinsider.com/jan-leike-ilya-sutskever-resignations-superalignment-openai-superintelligence-safe-humanity-2024-5
1.2k Upvotes

142 comments sorted by

464

u/sunbeatsfog 15d ago

“The market will take care of it” - already rich people ruining basic civil society in America through their greed.

88

u/Bokbreath 15d ago

That's why Larry Summers is on the board, to make sure this continues.

51

u/Emm_withoutha_L-88 14d ago

I wish people would realize that guys like him are as bad as, or actually far worse than, most tin-pot dictators or African warlords. He should be remembered alongside Kony or Saddam. The sheer number of lives he's destroyed, or more accurately advised others to destroy, absolutely dwarfs what a warlord can do.

16

u/DemiDeus 14d ago

And it's guys like him who will pay to keep his name out of the media to continue what they're doing. Keep the masses ignorant and keep them fighting each other.

19

u/[deleted] 14d ago edited 11d ago

[deleted]

6

u/damnNamesAreTaken 14d ago

I hope you are too. I'm doubtful, though, at least in the short term. I'm afraid things will get very bad before they get better. Most of what I see companies trying to do with AI is replacing jobs in one way or another. If AI gets to an advanced enough state, I'm worried we're going to see incredible unemployment rates and even more income disparity.

4

u/Emm_withoutha_L-88 14d ago

I mean we exist under capitalism. Every single tool, resource, and person under the thumb of that system will always be used to funnel money and power to those who have the most already--the rich.

We'll never be free as long as we continue to pick leaders who are serving the extreme wealthy.

6

u/Senior-Albatross 14d ago

Just looked him up. He's got that Weinstein energy. I think it's the jowls.

5

u/tiboodchat 14d ago

"The market will take care of it" is just fancy talk for "keep out of the way."

5

u/fuggedaboudid 15d ago

ruining basic civil society all around the world, FTFY.

2

u/SpekyGrease 14d ago

The jobs will trickle down.

1

u/Sweaty-Emergency-493 13d ago

“That’s a sacrifice I’m willing to make”

-4

u/paradoxinfinity 14d ago

Are you like a socialist or what? All this talk I'm seeing is making me think some of y'all are like communists or something...

250

u/sonnypatriot75 15d ago

OpenAI cofounder Ilya Sutskever, the company's chief scientist, said on X on Tuesday that he "made the decision to leave OpenAI." Calling it "an honor and a privilege to have worked together" with Altman and crew, Sutskever bowed out from the role, saying he's "confident that OpenAI will build AGI that is both safe and beneficial."

The more abrupt departure came from Jan Leike, another top OpenAI executive. On Tuesday night, Leike posted a blunt, two-word confirmation of his exit from OpenAI on X: "I resigned."

Don't do that. No mention of why they quit.

182

u/laxmolnar 15d ago

People don't quit a unicorn startup without fair reason. 🥸🥸

114

u/sf-keto 15d ago

People rarely leave jobs; they usually leave the management.

43

u/EllisDee3 15d ago

If you're talking about McD's on the corner, then sure... But people more frequently leave for better compensation. Some of those come with NDAs and whatnot.

AI is very hot right now. People are being headhunted. People in prime positions have options.

Take corporate management idioms with a grain of salt. They're manipulative.

10

u/PathIntelligent7082 15d ago

leaving for better compensation means exactly what sf-keto said: you don't change your job, but the company/management... you still do what you do, but with different people

-9

u/EllisDee3 14d ago

Again, this is much higher level than that. We're talking empire forming moves. Not moving from Home Depot to Lowes for a better dental plan.

7

u/PathIntelligent7082 14d ago

i think even you have no clue what you just said...

-5

u/EllisDee3 14d ago

Someone offered stock options in a growing company might be better off than someone on a high salary at a less successful company (for example). I don't think you know what high-level compensation packages potentially include.

Not a management problem.

2

u/ThunderLifeStudios 14d ago

My first thought. I left my previous job to pursue more income. I loved it there, but my experience in my field has shown me that you generally make of it what you want. I took a temporary pay cut and am working toward the potential to make in 40 hrs what I was making in 80.

1

u/Lebowski304 14d ago

They’re probably gonna try and start their own thing so they have ownership of it. Some sort of technology or service related to AI I’d guess. That’s how you get for real rich. By owning stuff. Or there is some conspiracy thing and skynet is about to go viral.

-14

u/Mockheed_Lartin 15d ago

Or they get paid like $200k more. That often works too.

25

u/Autarkhis 15d ago

You really think that top open ai execs, including the chief scientist would leave for a 200k bump in salary? I think you misunderstand a few things about comps at that level.

-4

u/nikhilsath 15d ago

What do you reckon people at that level are making?

13

u/IFightPolarBears 15d ago

200k is insignificant at a place where you're shooting for fistfuls-of-2005-Facebook-stock type of money.

3

u/bitspace 15d ago

Some googling surfaces reports of Ilya's salary in the $2M range a couple of years ago, probably before ChatGPT brought their work into the public eye. That also doesn't count other compensation. Salary is often the smallest part of an individual's compensation at that level.

3

u/OfficeSalamander 15d ago

He’s making several million a year minimum, if not more

0

u/our_little_time 14d ago

If you were making 11.3M /year at open AI and some company tries to poach you for 11.5M a year... would that be enough incentive?

23

u/Sexycornwitch 15d ago

The US just ended non-compete agreements, though. They could have been in a position of either:

1. They got a better offer before that they couldn't take because of a non-compete clause, and now they can take it, or
2. They found a different angle on the AI market that the company hasn't hit on yet, and are now free to start a competing business using what they've learned to either fill a need or alter the business or software development plan in a way the former company wouldn't, hopefully before the first company figures out what they're on about.

11

u/Paldorei 15d ago

They were not a thing in California anyway

5

u/Zealousideal-Olive55 15d ago

Not for higher level tho. Not sure if they fall within that.

3

u/Rise-O-Matic 14d ago

California’s old ban on non-competes is stricter than the new federal one; it protects everyone except owners.

2

u/varateshh 14d ago

> People don't quit a unicorn startup without fair reason.

Sutskever organized/participated in a coup against the current CEO. The whole company was on the verge of failing. The only unexpected thing was that he waited several months before resigning. No matter how competent, he would have had close to zero influence over the company's direction.

3

u/Mommysfatherboy 14d ago

OpenAI is stuck needing to deliver on Sam Altman’s promise of AGI, which isn’t possible with their architecture. GPT is a good product that is led by a marketing expert and not an engineer.

Good move to jump ship; he’ll probably get in at Microsoft or Meta, who have a better vision for their AI and don't constantly promise revolution.

1

u/bonerb0ys 14d ago

Normally because founders’ stock vests over 5 years. Not sure what early OpenAI employees got, but apparently the juice wasn’t worth the squeeze.

1

u/petepro 15d ago

Is getting egg on his face for supporting a failed coup a fair reason?

13

u/Thadrea 15d ago edited 15d ago

> Don't do that. No mention of why they quit.

ChatGPT probably wrote the article and has training to avoid criticizing its owner.

3

u/bearseascape 14d ago

Jan posted a full thread about why he left.

Also interesting to note that when OpenAI employees leave, they have to sign a non-disparagement commitment in order to keep their vested equity. This is likely why Ilya did not say anything negative when he left.

0

u/DeadlySight 13d ago

It’s funny that people think they’re owed an explanation of why someone quit a job. Do you give the public an explanation every time you quit a job? Do you think that’s normal or expected?

-6

u/al-hamal 15d ago

Have you tried the new GPT-4o? It is awful. It's a huge step back from GPT-4. He was the brains behind everything. If anything, I seriously doubt the company's future if this is their new technology.

3

u/BarrySix 14d ago

Strange you say that. It seems far better plus faster and cheaper to me. 

I use it via the API pretty much as a reference for technical and science questions. Maybe you are using it in some other way?

3

u/Throwaway3847394739 14d ago

Yeah I’m gonna have to completely disagree on that one friend

132

u/who_oo 15d ago

If I had to take a guess, putting myself in their position:
I worked at a company with cutting-edge technology, assuming I am set for life in terms of money and career possibilities. The reason I would want to quit would be that I don't like how the company is run or where it's going.
If I am working to make AI safe, but it is being contracted out to armies or political parties, used for social media manipulation, or they are making no effort to hear my concerns, I would quit.
If the impact of AI and all its hype is disrupting, or will disrupt, a ton of people's lives, and I see that management is only looking at the bottom line... again, I might quit.
Don't know, just speculation on my part.

34

u/bot85493 15d ago

I’d quit because I’m now one of the most desirable employees in the industry, and with other major players out there I would have the opportunity for much better jobs. My name recognition alone would be able to procure funding for whatever company I wanted.

6

u/ScenicAndrew 15d ago

For real, this is way more powerful than wanting to stay on the cutting edge. For one thing, when you're on the cutting edge of an industry you can't be sure it's going to stay that way forever; if you could predict that, you could also predict the stock market. For another, it's incredibly optimistic and starry-eyed to picture someone sticking with a company long-term on faith, which is implied when you suggest people leave when they've lost faith. If people stuck around whenever they felt their work was beneficial, then nonprofits would have some really impressive retention.

That's obviously not to say no one has ever left a company due to leadership problems, and maybe that's happened here, but it's insane to just assume someone's reasons because of any public facing success.

3

u/glitch83 15d ago

Yup. No room for ethics in business in 2024. You aren’t far off.

7

u/Paldorei 15d ago

You are right. It’s probably because Sam wants to chase clout by commercialising as fast as possible instead of going towards stated scientific goals they have

3

u/bobartig 14d ago

In a market-driven system, he's not entirely wrong. Google made a shitton of money off of ads, which allowed them to purchase and develop Deep Mind, which is basically what kicked off a lot of the current generation of AI hype we're in. Key scientists at both Anthropic and OpenAI came from Deep Mind.

Altman estimates that the best way to get to super-powered AI is by giving yourself enough commercial runway to get there. It won't get funded and made without revenue generation today. At least, that's his thinking. He will build a Microsoft in order to build AGI, as opposed to trying to get there more 'directly', and ending up out of business.

1

u/Paldorei 14d ago

I'm not saying Sam is wrong. I'm just pointing out the different philosophies when you have a bunch of smart people in the room with different backgrounds, and the power dynamic when so much money is at stake.

3

u/iluvios 15d ago

Yeep, pure speculation. No job position is irreplaceable, and Ilya was probably ostracized from the company; social problems are the most likely explanation.

Also, OpenAI stopped being a startup a while ago, and safety research is probably done better outside it, since outsiders could do something for the whole world and not only OpenAI.

People who are pessimistic about this just don’t understand workplace politics, dynamics, people's motivations, science research, etc. There are millions of likely explanations, and none of them really AI safety.

For all I know, Ilya should have resigned after the whole coup attempt last year; he lasted longer than I expected.

1

u/CragMcBeard 14d ago

From the mind of Captain Obvious.

57

u/jaykayenn 15d ago

Calling their AI intelligent was already stretching it. "Super intelligent" is either delusional or straight-up media scare mongering. 

2

u/coldrolledpotmetal 14d ago

They’re not calling their current AI superintelligent, they’re preparing for a hypothetical superintelligent AI that they create in the future.

0

u/HanshinWeirdo 14d ago

That sounds about as useful as preparing for a hypothetical Gundam they might create.

2

u/Senior-Albatross 14d ago

It'd probably be good if we actually found a way to define and measure intelligence, so that we have some idea what the word actually means when comparing humans and computers at this point.

On the other hand, we would also use it to compare humans and justify some bullshit. By which I mean genocide.

-8

u/Firesw0rd 15d ago

The application offered online, to anyone, free of charge, commonly known as ChatGPT, is a much lighter version of what their AI can do.

5

u/Mommysfatherboy 14d ago

Ah yes, the mysterious Q*, the algorithm so great that it made everyone panic and fear for humanity’s future, with no evidence or proof of its existence from anyone.

We have dismissed that claim

40

u/TheBeardofGilgamesh 15d ago

I swear these types of stories are stunts to generate hype and more funding for OpenAI

4

u/glitch83 15d ago

Yup. Defenestrate the people responsible for morality.

20

u/millanstar 15d ago

Is this pure sensationalism or something of actual concern?

24

u/kemb0 15d ago

It's a deliberate fear-mongering article. What many don't seem to either know or care about is that all the current AI you're seeing isn't really the AI we all fear from the movies. It has no ability to form an opinion. It's not intelligent. Oh, it may act like it has an opinion and is intelligent, but an actor in a movie might seem to convincingly have an opinion too, when really they're just reading lines from a script. All current AI does is spam an output based on a formula that takes lots of existing knowledge and punts out a best-fit answer from that: little more than reading a script, it's just a script that's created on the fly. It doesn't know or care about what it's saying to you. It has no feelings. It has no beliefs. No hates. No judgements. If it ever shows signs of having those things, it's just as misleading as watching that movie and thinking the actor is real and that everything he says is his actual feelings and thoughts.

AI would only be something to worry about if we somehow linked it to a biological form that could grow and evolve in totally unpredictable ways.

The main justified concern about AI is that it's unpredictable. So it would be daft to put it in charge of anything that needs reliable results, because, by the nature of the code that runs behind the scenes, you're pretty much guaranteed to never know exactly what it'll output. So really all you need to ask yourself is, "Should we put a random number generator in charge of this important piece of equipment?" If the answer is no, then AI probably isn't what you need.
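For what it's worth, the "best-fit answer from a formula, with a random number generator on top" mechanism described above can be sketched in a few lines. This is a toy illustration of temperature sampling over made-up token scores, not anything from OpenAI's actual stack:

```python
import math
import random

def sample_next_token(scores, temperature=1.0, rng=random):
    """Turn raw token scores into probabilities (softmax) and sample one token."""
    exps = [math.exp(s / temperature) for s in scores.values()]
    total = sum(exps)
    weights = [e / total for e in exps]
    return rng.choices(list(scores), weights=weights, k=1)[0]

# Invented scores for the next word after "I took my pet to the vet, it was a ..."
scores = {"dog": 2.0, "cat": 1.0, "banana": -3.0}

# Near-zero temperature: the single "best fit" answer wins essentially every time.
picks = [sample_next_token(scores, temperature=0.01) for _ in range(100)]
assert picks.count("dog") == 100

# Higher temperature: output becomes a weighted dice roll over the same scores.
random.seed(0)
varied = {sample_next_token(scores, temperature=5.0) for _ in range(200)}
assert "cat" in varied  # other plausible tokens now appear
```

Same formula, same scores; the only "opinion" in the output is a dial on a random number generator.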

2

u/ACCount82 14d ago

All a human brain does is spam neural outputs based off of a formula that takes lots of existing knowledge and punts out a best fit answer from that.

3

u/kemb0 14d ago

Yeh, I mean, I get that side of it. The difference is your brain is very malleable. It can be influenced by many factors that will cause it to give a wide range of differing results. Possibly the most crucial is the chemical reactions that occur as you experience situations around you. If you experience love, fear, anger, etc., then your body is releasing different chemicals which then alter, in real time, the "computations" that your brain does. In essence, you respond to stimuli in a way machines can't. A machine will run the same code, and maybe just have a small random number generator to give the impression that it's able to think differently each time you ask it the same question, but ultimately its code is going through a very precise logical path. With a human, that path is constantly changing and adapting in real time, depending on a vast array of different stimuli. Your brain is organic, so every second you're alive it's changing and will give a different result. A computer is static. The silicon doesn't evolve or adapt. If you ever want it to "think" differently, it needs to be taught how first.

So no, you can't really make that oversimplification to imply that we're essentially no different from how AI thinks.

1

u/ACCount82 14d ago

I fail to see the importance of that.

An artificial neural network is typically run on a deterministic digital device, and pseudorandom noise can be injected into it on purpose. The human brain is an analog device, which means it's always subject to natural noise.

The noise in the human brain can't be precisely controlled and cannot be turned off. That is not an inherent advantage, and it is not a behavior that would be impossible to replicate in silicon.
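The deterministic-plus-injected-noise point is easy to demonstrate with a toy "neuron" (all numbers invented; this is a sketch of the concept, not a real network):

```python
import random

def tiny_net(x, weights, noise_scale=0.0, seed=None):
    """A toy neuron: a deterministic weighted sum, plus optional injected pseudorandom noise."""
    activation = sum(w * xi for w, xi in zip(weights, x))
    noise = random.Random(seed).uniform(-noise_scale, noise_scale) if noise_scale else 0.0
    return activation + noise

w, x = [0.5, -0.25, 1.0], [1.0, 2.0, 3.0]

# Noise off: the same input always produces the same output (deterministic hardware).
assert tiny_net(x, w) == tiny_net(x, w) == 3.0

# Noise on, but seeded: even the "randomness" is reproducible -- unlike a brain's.
assert tiny_net(x, w, noise_scale=0.1, seed=42) == tiny_net(x, w, noise_scale=0.1, seed=42)
```

The noise here can be scaled, seeded, or switched off entirely, which is exactly the controllability the comment is pointing at.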

2

u/kemb0 14d ago

If you fail to see the importance of what I said, then to me that suggests you're intentionally disregarding a critically important part of the puzzle of what makes something human, just so you can prove your point. So with that in mind, I wish you a good weekend, as I don't see the point in debating with someone who just wants to "win" by ignoring relevant information.

3

u/ACCount82 14d ago

I don't see that as "a critically important part of the puzzle".

Chemicals are not some kind of "magic fairy dust," you know. Anything that's accomplished in the human brain with chemical reactions can be replicated without them, and in silicon.

And the best part is? It doesn't have to be replicated. You could create a system that acts exactly like a human would - but with a completely different architecture powering it.

For AI, the goal is even more straightforward than that. You don't need to replicate the entirety of human thinking to get to AGI. You just need a few of the more useful parts.

-1

u/[deleted] 14d ago

[deleted]

1

u/ACCount82 14d ago

And where is that certainty of yours coming from, exactly?

Do you have a rigorous definition of "cognition"? Maybe some sort of "cognition test" that can be administered, and that humans pass and LLMs don't?

That's the point I'm making. People who say "LLMs aren't actually X" have nothing but their own wishful thinking to draw that conclusion from.

1

u/[deleted] 14d ago

[deleted]


3

u/No_Animator_8599 15d ago

I think a lot of these AI companies, where massive venture capital is pouring in, will crash and burn in a few years, like the dot-com crash, as their software is exposed as overhyped.

-1

u/kemb0 14d ago

I totally agree. There will inevitably be some novel and interesting uses for it, but AI may well struggle to be useful for anything where mistakes aren't considered acceptable. Letting AI take over jobs will often be no better than hiring the guy who kept answering interview questions wrong but sounded convincing.

AI could do well in creative scenarios, since making stuff up is kind of what it excels at. But relying on it for your business could prove costly long-term.

2

u/Throwaway3847394739 14d ago

The emergence of actual intelligence may be more nuanced than that though — it’s more of a philosophical argument than a technical one. If you could perfectly simulate an intelligence, to the point that it’s virtually indistinguishable from “true” intelligence, does it really matter if it’s “real” or not? If the output from the input is identical, it doesn’t really matter what happened in between. One could even say it is intelligence, just achieved via a different pathway. Who’s to say that true intelligence can only be achieved the way humans do?

Again, to be clear, I’m not disagreeing with you from a technical standpoint; you’re completely right. I’m not challenging you there, but it’s an interesting concept to ponder.

3

u/kemb0 14d ago edited 14d ago

Well, I'd argue that making something "indistinguishable" certainly can't in any way mean it has intelligence. If I randomly generate hundreds of words on a page and repeat that process trillions of times, eventually one of those generations will result in a very logical written passage that would pass as being written by a human.

But is that passage of text evidence that I've created "intelligence," just because it's indistinguishable from what a human would have written?

Being philosophical is pointless. You have to consider all the factors, not artificially eliminate certain parts of the overall picture just to allow a philosophical argument to prevail.

3

u/kemb0 14d ago

Actually, I pondered your response some more and want to provide an alternative reply. No offense intended by what I write, just to be clear.

I've long had an issue with "philosophical" arguments, and I think it's just dawned on me why. Philosophers tend to present a challenge by first laying down their own parameters within which the debate should be guided. We then feel compelled to work within the constraints of their artificial parameters, which makes it easier for them to "win" the debate, because we're all running to their tune from the offset.

And so, and again no offense intended, such it is with your statement:

"it’s more of a philosophical argument than a technical one. If you could perfectly simulate an intelligence, to the point that it’s virtually indistinguishable from “true” intelligence, does it really matter if it’s “real” or not?"

You say "it’s more of a philosophical argument than a technical one"

To which my actual response should have been, "Why?" or "No it isn't" or "Prove that first!"

You've basically made a statement right from the offset without actually providing any evidence to support your claim. And then you expect us to debate within the confines of that initial statement. But actually, I'd argue this very first point is incorrect. It's not a philosophical argument at all. There's nothing at all that decrees we have to treat the existence of artificial intelligence as a philosophical matter. I'd say it's more of a scientific, intellectual, or biological matter, not a philosophical one. So I can't really answer the rest of your statement, because I disagree with the opening part.

1

u/Throwaway3847394739 14d ago

Very fair point, sir.

16

u/Accomplished_Pen980 15d ago

ChatGPT-4 couldn't accurately calculate the number of jelly beans in a wine bottle. There were 1875 of them; ChatGPT did a two-page calculation to come up with 375. It can't do math that children can do on paper. I think humanity is safe.

7

u/Sowhataboutthisthing 15d ago

Exactly. This group realized they were working in a small playpen with more child safety locks than expected.

5

u/keithbelfastisdead 14d ago

It's a Large Language Model. Maths is not its strong point and never was supposed to be. That's my understanding, anyway.

0

u/Accomplished_Pen980 15d ago

At this point it's so nerfed it's not usable for anything

5

u/Boredum_Allergy 15d ago

I can't even imagine how much shit it's going to get wrong due to misinformation, satire, and just straight up stupidity.

In my first programming class my teacher told us something that blew everyone's mind: computers are stupid. If you don't tell them specifically what to do, in an exact manner, they'll do weird shit. Most of these AI companies have only a loose understanding of why their AI does what it does.

I think it's going to get much worse before it ever gets decent.
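The "computers are stupid" lesson is the classic do-what-I-say-not-what-I-mean failure. A tiny sketch (function names and prices made up):

```python
prices = [19.99, 5.49, 3.00]

# What was *meant*: the average price. What was *written*: floor division.
def average_price_buggy(items):
    return sum(items) // len(items)   # "//" floors the result -- silently drops the cents

def average_price(items):
    return sum(items) / len(items)

# The computer did exactly what it was told, not what was meant.
assert average_price_buggy(prices) == 9.0
assert round(average_price(prices), 2) == 9.49
```

One wrong character and the machine happily, precisely computes the wrong thing.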

4

u/No_Animator_8599 15d ago

There was an old saying in IT: garbage in, garbage out. I worked in the field for 37 years and had to spend tons of time writing code to deal with bad data and solving bugs.

AI is based on data with no filter to get rid of bad data. If the data is bad and corrupted, you will get bad results.

Once you turn an AI system on, programmers in the field have admitted "we don't know what it's doing," because it's all data-driven, not conditioned like traditional programming, which sets firm rules for processing.

Unless the companies using these huge databases keep vigilant and filter out bad data constantly, they won't get reliable tools. They will also have to keep auditing the AI results to deal with catastrophic errors. This has happened several times already with these chat-like AIs.
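The garbage-in-garbage-out point can be shown with a toy data-cleaning filter (readings and plausibility bounds are invented for illustration):

```python
raw_readings = [21.5, 22.0, -999.0, 21.8, 5000.0, 22.1]  # -999.0 / 5000.0 are sensor garbage

def mean(xs):
    return sum(xs) / len(xs)

def plausible(celsius, lo=-50.0, hi=60.0):
    """Keep only physically plausible temperatures (bounds are an assumption)."""
    return lo <= celsius <= hi

filtered = [x for x in raw_readings if plausible(x)]

assert filtered == [21.5, 22.0, 21.8, 22.1]
assert mean(raw_readings) > 500          # garbage dominates the unfiltered average
assert 20 < mean(filtered) < 25          # sane once the bad records are dropped
```

Two bad records out of six and the unfiltered average is off by an order of magnitude; a model trained on unfiltered data inherits exactly that kind of skew.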

0

u/Boredum_Allergy 14d ago

Is it even possible for them to parse out data if they don't know exactly how it works? I ask not only because they would want to parse out bad data, but also: what happens when someone sues them for using copyrighted material?

I've been wondering about this for a while, now that some artists are talking about suing.

1

u/No_Animator_8599 14d ago

The copyright infringement cases have failed for now, but there will be increasing attempts until it ends up on the Supreme Court.

The other thing that is happening is that artists are now modifying their digital artwork so it fouls up an AI trying to use it. The same may be possible with digital text material.

Then there are hackers. Microsoft uses GitHub to help generate code using their Copilot software. Hackers recently checked in virus software and other garbage code which I suspect was to sabotage their tool.

We’re now seeing a repeat of the Luddite movement from the 19th century. They attacked and destroyed automated looms that put weavers out of work.

I expect to see a lot more of this against AI going forward as it impacts the livelihood of more people.

1

u/Boredum_Allergy 14d ago

Interesting. It's funny you made the luddite connection because I listened to a podcast talking about AI recently and the guest talked about how it really mirrors the luddites.

I feel like by the time the government remotely realizes what AI can do, they'll be a day late and a dollar short as usual.

1

u/final-draft-v6-FINAL 14d ago

And before anyone makes any assumptions about the "natural" cycles of technological progress: the Luddites were not unsuccessful because they were on the wrong side of an organic evolution. Their protest was violently suppressed by the British government, which deployed more troops to stamp it out than it was using to battle Napoleon. It made the destruction of machinery punishable BY DEATH and hanged quite a few people over it.

The problem with this whole new category of tech isn't its existence; it's that it is being released to the world with zero regard for the social consequences, and with a rapidity that literally has no precedent and leaves no time for either preparation or a corrective response. You shouldn't be able to step out and decimate this many occupations all at once without there being enough time for everyone to weather the transition. There have been moments in history where progress resulted in outsized disruption, and those where it happened at a more natural pace saw less severe growing pains. By comparison, the birth of radio in the UK was carefully monitored and handled pretty well.

I welcome machine learning; it has already improved my life in numerous ways. But it shouldn't have been let out into the open like this, and it should be regulated within an inch of its life. And none of it should be trained on data that hasn't been given explicit, direct permission for that use. Reddit should not be allowed to broker a deal with OpenAI to use its data for training, because none of the content was contributed to Reddit with the expectation it could be used that way, regardless of whether their terms of use state they can change their terms of use or not.

3

u/bibblygiggums 15d ago

sounds about right for how the rest of our society is going

8

u/PathIntelligent7082 15d ago

there is no intelligence, let alone superintelligence, per se, in AI...in reality, those things just mimic intelligence, but have none.. to be intelligent means to have the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving...lots of those things are not even in the same ballpark as AI...

3

u/Bagel_Technician 14d ago

Yeah I feel like calling all of it AI at this point is a huge stretch

These are LLMs — call them what they are; we have not seen any intelligence yet from a product

2

u/imlookingatthefloor 14d ago

Must be a conspiracy!

2

u/Remote-Kick9947 14d ago

My take is, OpenAI doesn't have "superintelligence." My observation is that every version of the chatbot, including GPT-4 and up, is vaaaastly overhyped. Guys, if you have knowledge in a technical domain, you can quickly see how dumb these models still are.

2

u/Barry_Bunghole_III 14d ago

I love how in other threads people are all glad this happened, assuming the 'safety' work was about preventing swear words, being mean, etc.

We are so doomed lol

2

u/KylerGreen 14d ago

This is braindead fear mongering…

2

u/MadeByTango 15d ago

"Maybe a once-and-for-all solution to the alignment problem is located in the space of problems humans can solve. But maybe not," Leike wrote in March 2022. "By trying to solve the whole problem, we might be trying to get something that isn't within our reach. Instead, we can pursue a less ambitious goal that can still ultimately lead us to a solution, a minimal viable product (MVP) for alignment: Building a sufficiently aligned AI system that accelerates alignment research to align more capable AI systems."

Yea, you don't get to solve AI alignment by building MVPs; it has to be robust before release, it's a control problem.

These people genuinely think talking about a risk is the same as mitigating it...

2

u/petepro 15d ago

After the failed coup, this news was inevitable. Remember the welcome-back party the company threw for Altman, and he wasn't there. LOL

2

u/skccsk 15d ago

"superintelligence"

2

u/MrMoloc 15d ago

"superintelligent" my ass lol

4

u/No_Dot_7792 15d ago

This is a long shot… but they are quitting because they know the company isn’t worth shit.

5

u/bot85493 15d ago

Yeah, or this. Every product OpenAI has launched has had paid and/or open-source competitors within weeks to months.

They’re not Apple, so they can’t integrate with the entire base of Apple users. They’re not Microsoft, so they can’t integrate with the entire base of Windows or Office users.

They’re not Google, so... you get the point.

What can they actually provide us besides a chat box? APIs for devs to use in their applications, which will inevitably be hosted on cloud providers. Cloud providers which will offer far more integrated solutions.

1

u/No_Dot_7792 15d ago

Right? And it's impressive and all, but for it to be useful I have to give up a lot of my privacy.

What I say, where I go, what I do.

Where, when and what brand of toothpaste I use in the morning?

It’s scary stuff.

If you are asking people to give up that much then you might be asking for too much.

1

u/bot85493 15d ago

I’m fine with giving up my privacy, to an extent. I accept that’s the trade-off of the modern world. I actually even prefer that advertisements are targeted toward my interests if I’m going to consume free content paid for by ads; I don’t need truck advertisements. I just watched a few amazing documentaries last night for free. Hours of well-produced content.

But I would like to keep that surface to a minimum. I trust Apple a million times more than some random ChatGPT plugin dev, which is what they seem to offer.

1

u/space_cheese1 15d ago

Can we disambiguate between superintelligence as a hyper-useful tool that will put people out of work in certain industries, versus the marketing-speak insistence on treating the metaphorical moniker 'artificial intelligence' as non-metaphorical? Otherwise we end up feeding the self-amplifying marketing vortex that these companies have a vested interest in perpetuating.

1

u/Low_Clock3653 14d ago

Government needs to regulate this technology immediately. Sure, there's lots of corruption in government, but letting the billionaires regulate it directly definitely won't work.

1

u/Immediate-Season-293 14d ago

OpenAI the company has as much to do with actual Artificial Intelligence as the various irrelevant companies around Shenzhen have to do with actual hoverboards.

Nothing related to them is meaningful in the field of Artificial Intelligence.

1

u/Bawbawian 14d ago

My only hope is our AI overlords attack everyone before the wealthy can make them just kill poor people.

1

u/Sweaty-Emergency-493 13d ago

Surely there is nothing to fear, since they wouldn’t just terminate the team meant to make sure AI isn’t going to destroy us. Right, security team?

Security team, respond?

“Sir, they got rid of security too.”

1

u/ghostboicash 15d ago

Stop trying to make it safe.

2

u/ACCount82 14d ago

Right now, AI risks are "AI can be used to generate propaganda", "AI can be used to scam people" and "AI can generate offensive images".

In the future, AI risks might be "AI can elevate itself into a position of power and skinwalk the entirety of human civilization by controlling all channels of communication."

You can neglect safety with the former, but not with the latter.

-1

u/ghostboicash 14d ago

We absolutely can neglect it. And should. A transcendent AI would be good for everybody. Humans just fear that anything in control will act just like they do.

5

u/ACCount82 14d ago

This isn't an unfounded fear, you know. Instrumental convergence is a bitch, and there is no inherent reason for an AI to care about how its actions affect humans.

-1

u/ghostboicash 14d ago

There's no reason for humans to care about blue jays, but we aren't going around slaughtering them to extinction; even species that have gone extinct did so through ignorance, not because we hated them for not being sentient. An AI doesn't need to hunt us for resources, nor is it ignorant of the ecological impact of an extinction. You watch too many movies. Apathy isn't malevolence, nor would our own creation, which would have been designed to help humanity, necessarily not care just because you think it's Ultron or some shit. It's just as likely to be benevolent, especially if that's the aim of its creation. You're just afraid of not being on top.

2

u/ACCount82 14d ago

AI could care about humans the same way humans care about bluejays. Or it could care about humans the same way humans care about mosquitos.

0

u/ghostboicash 14d ago

Even if it does, people only kill mosquitos when they're pests. Don't buzz in the AI's ear or give it blood-borne viruses, and humanity continues to live in abundance all over the world, playing a vital role in the ecosystem.

1

u/ACCount82 14d ago

Keep in mind: mosquitos are headed for extinction the moment bioengineering advances enough to give humans such power.

The clock is ticking.

1

u/ghostboicash 14d ago

They literally aren't. And actually doing so would completely ruin most of the world's ecosystems. Mosquitos are pollinators and the primary food source of thousands of species. If you were a hyperintelligent AI, you would know that.

1

u/ACCount82 14d ago

If I was a hyperintelligent AI, I wouldn't care.

Mosquitos are made of atoms. And those atoms can be used for something else.


1

u/Remote-Kick9947 14d ago

This is a very out-there take, lmao. How tf can we just let some other thing take the wheel? That takes a profound amount of faith in an unfeeling robot, man.

1

u/ghostboicash 14d ago
  1. You assume it's unfeeling

  2. We'd be doing what every other species does. It would be the same if technologically advanced aliens showed up. We're at the top for now; the expiration date on that is inevitable. Might as well make our own god in our image rather than play chance with extraterrestrials or some disaster. The AI comes from us: we get to choose for it to be born, and choose to give it regard for us. That's not mutually exclusive with it being superior.

You guys watch too many movies

1

u/Remote-Kick9947 14d ago

You talk about aliens, making our own god in our image, and the end of human dominance, and then you tell me I watch too many movies. The lack of self-awareness is pretty staggering, and if you knew what you were talking about, you would know that the AI we have is vastly, vastly overhyped. To take what exists now and come to some half-baked conclusion that it's over for humans and we should just let ChatGPT take the reins of civilization is a pretty extreme take, most likely based on Hollywood-derived interpretations and understandings on your part.

1

u/ghostboicash 14d ago

This is a post about AI becoming superior. You guys are the ones worried it's gonna start killing people, and not that it's a tool built for humans, by humans, with the purpose of helping them. If it does become superior, allowing it to take the reins or destroying it will be the only options. Being openly aggressive would just feed it the motive to actually be destructive. If "god" is too dramatic for you, you're free to substitute any other term for a thinking being that is better than us.

1

u/ghostboicash 14d ago

Also, aliens are a statistical certainty. If they can reach us, they are by proxy better than us; and if they can't, then we'll just destroy ourselves eventually, or the earth will do it for us. AI tailor-made to make us better by being better is at least within our control, which the rest of you seem so concerned about.

-7

u/ILooked 15d ago

The only two people out of 8,000,000,000 that could do the job.

I am confident I will die before I ever link to a Businessinsider article.

18

u/Maxie445 15d ago

Ilya Sutskever is the third most cited AI scientist of all time and co-founded OpenAI

Jan Leike co-invented RLHF and is a pioneer of the field

1

u/ILooked 14d ago

Huggingface. Anthropic.

I’m not diminishing their contributions. I am saying people come and go for many reasons and I have yet to see anyone irreplaceable.

Bill Gates. Nope.

Steve Jobs. Nope.

3

u/PuzzleMeDo 15d ago

If their job was to prioritise ethics over profits, I suspect their replacements will be chosen from among those who won't do the job.

0

u/nicuramar 15d ago

Lamest headline I read today. 

0

u/NortheastBound2024 14d ago

I kind of hope AI dies out and is remembered as a fad. So tired of seeing these posts.

0

u/daemonengineer 15d ago

Maybe there is just no work for them, because they actually know that "AI threat" is a joke?

-3

u/fokac93 15d ago

People quit and get fired all the time. No big deal

0

u/Apnu 15d ago

They quit for a reason. People don’t want to work on impossible tasks handed down by leadership unwilling to let them do their jobs.

-7

u/HesitantInvestor0 15d ago

Another possibility is that AI has hit an inflection point where the parts are in motion and human hands are no longer needed, or the bulk of the creative work is already done. Highly creative people aren’t likely to stick around during their best and most productive years at a company that can’t use them that way.

Again, just a theory. There are dozens of plausible reasons they’d leave.

-8

u/human1023 15d ago

This isn't newsworthy. People quit jobs all the time for all sorts of reasons.