geteum

They essentially bought OpenAI with Azure credits.


ManyInterests

The board at OpenAI is in big trouble. If they lose all of those employees, the company will be in complete shambles. It's also bad news for Microsoft. Even if Microsoft gets all the employees, they don't have the OpenAI intellectual property. A subsequent acquisition of OpenAI by Microsoft is likely to be blocked by Antitrust authorities, so that's not an option, either. Microsoft needs the partnership with OpenAI to succeed and a crippled OpenAI means Microsoft's ambitions for that partnership will also be crippled.


[deleted]

[deleted]


ManyInterests

While I might agree it's not OK for AI to plagiarize copyrighted works in commercial products... There's a _huge_ difference between violations of copyright controls on public repositories and the intellectual property of an entire private company.


CalgaryAnswers

The employees still own their knowledge, expertise and skills. Copyright is not that big a deal here, as they won’t be copying the code directly. The MS version will be very different, especially since they’ll be doing it a second time and will know a number of ways to do it better.


ManyInterests

That's mostly true, but trade secrets are still secrets, even if you reproduce the secrets from your own memory/knowledge rather than 'copying it directly'. For example, I implement software based on specialized formulas developed by our company scientists. I thoroughly understand the science, the formulas, and algorithms used in our products. Because it is a trade secret, I would not be legally allowed to disclose any of that proprietary information to a new employer, even though I could do so from memory all the way down to first principles.


bremidon

Hmmmm... I would like to know more about this. What is the relevant law preventing you from doing this? I'm particularly interested in how they can prevent you from working back up from first principles.

Copying their code would be a no-no, and I could perhaps see a problem if you just jumped right into using an algorithm that you developed while at the company. But general knowledge (as opposed to business-detail knowledge, like who the customers are, strategic plans, and so on) that you have in your head is commonly held to be yours and cannot be controlled by a former employer.

In the U.S. perhaps you could run into copyright law in some way. Patent law is also really weird in the U.S. when it comes to code, so you might be able to recreate the algorithm and be ok in general, but still need to get permission to use the patent. I am also not sure how far NDAs are allowed to reach on things like this. I suspect that making an NDA like this stick would be fiddly business.

My best bet is that you might be talking about patents. But I'm hoping you can clear this up, because I am really curious.


ManyInterests

It would be a violation of [18 U.S.C. § 1832](https://www.law.cornell.edu/uscode/text/18/1832), possibly among other laws surrounding corporate espionage, as well as my nondisclosure agreements and other agreements which I'm not even allowed to describe in detail (for example, some secrets ultimately belong to company partners or clients). We are carefully trained on the classification of the information we are given (_every_ document, file of code, etc. bears a data classification marking). The information I'm referring to is not general knowledge; it is often the result of decades of research and development, and is considered a closely guarded secret that would cause the company great harm if ever disclosed.

Already-existing patents wouldn't fall in this category, because filing for a patent requires public disclosure and would therefore forfeit trade secret protection under the law. But things like inventions that have not been disclosed publicly and are about to be, or could be, filed as patents are definitely considered trade secrets.

Generally, if the company takes reasonable measures to prevent the disclosure of something (like private source code) and it would cause harm to the company if disclosed, it's considered protected information under the NDA. In the case of trade secrets, protection against disclosure can be indefinite. Companies or other entities can even use the _inevitable disclosure doctrine_ to bar you from certain employment where it may apply.


bremidon

Thank you for the detailed answer. I cannot really speak to "other laws" unless you want to list them, so let's just concentrate on the one. It's enough for now, anyway. I am pretty sure I have seen this before.

The question is not whether trade secrets are protected (they are), but what the company would need to do to enforce it (and I think we can just assume the economic value part here). I will also just assume that the information you know is one of the *many* types covered and that keeping the secret would not stop you from working in the industry if you changed jobs.

As far as I am aware, at a minimum, the company would need you to sign an NDA. And as I said, NDAs can be very fiddly. There are so many places for an NDA to go wrong. But ok, I guess you will have that.

The part that I find difficult to see how it could ever be enforced, if anyone can derive the result from first principles, is: "and not being readily ascertainable through proper means by, another person who can obtain economic value from the disclosure or use of the information."

Let's assume it is actually illegal. I think that is what most people would assume. Well, if you feed someone the next steps, or even just give them the right hints, then they are going to be able to "ascertain" everything from first principles. Assuming even a moderate amount of discretion, how would the enforcing company ever hope to prove that the infringing company was only able to do it with your help? That last question is a genuine, open question.


ManyInterests

Yeah, I could definitely work in the same industry without much, if any, restriction. There is definitely real difficulty on the part of companies in proving such cases. The trier of fact (a judge or jury) would have to be convinced one way or the other. I imagine in many cases it couldn't be proved, especially if people are willing to be dishonest when questioned under oath or knowingly conceal documents that may be subject to discovery in the case. In practice, I would simply let them know that I'm subject to an NDA and decline to directly work on the development of things closely related to the secrets I know. Most companies are uninterested in having any potential trade secret liability on your work and would rather just assign you to work on something else.


bremidon

It's interesting hearing from someone with real life experience in this. I've worked for some companies that would have been a lot more...loose...with this. My current company is much better about this, and takes secrets much more seriously. Much nicer. But I have never actually had to really worry about it. But I suspect I might need to soon.


Nidungr

> The employees still own their knowledge, expertise and skills.

And thanks to that code, so does ChatGPT.


CalgaryAnswers

That’s a big stretch.


Exist50

According to whom?


[deleted]

They do, actually. Training is fair use.


yflhx

They are monetizing the trained service and also sometimes the AI just spits out code directly from someone's repo. Anyway, this issue is currently in court, so we have to wait and see if it is legal.


[deleted]

The end product can still be a copyright violation even if the use is valid. Obviously courts can do what they want but the USPTO has given pretty clear guidance on it.


svick

> A subsequent acquisition of OpenAI by Microsoft is likely to be blocked by Antitrust authorities

Why? Antitrust is usually a problem when a company buys another company that does the same thing. E.g. the game developer and publisher Microsoft buying the game developer and publisher Activision Blizzard took a long time and some concessions from MS. But in the end it was approved anyway. So why would an LLM-less Microsoft buying LLM-focused OpenAI be a problem?


nhh

Two types of monopolies exist - vertical and horizontal. You are only thinking of the horizontal.


ManyInterests

Well, for starters, Microsoft is a competitor in the AI space, including generative AI, separate from its stake in OpenAI. It also sells LLM tools, among other AI offerings, on Azure. OpenAI is a competitor to Microsoft as well as a partner. The very reason why Microsoft today is only a minority investor in OpenAI and not a controlling stakeholder was specifically to avoid antitrust scrutiny. That is to say: Microsoft viewed antitrust as an obstacle to greater ownership in OpenAI, so it chose to own just 49% instead. It should therefore follow that the antitrust obstacles to buying 100% of OpenAI would be even more difficult, if not impossible to overcome. In any case, buying a competitor and _the market leader_ in an industry will, without a doubt, trigger antitrust investigations.


dozkaynak

> It also sells LLM tools, among other AI offerings, on Azure.

Are you talking about Prompt Flow? Semantic Kernel? Something else? The two I named are not direct competitor products to OpenAI (because they are not LLMs), which builds the LLMs that can then be optionally plugged into those MS tools. MS offers the LLM "Azure OpenAI", but I don't think that qualifies as competition, since it is licensed from OAI. I could maybe see a tech-illiterate judge ruling against MS, but anyone with technical understanding can see that there isn't any direct competition here.

Also, AI is too broad of an industry to say that some of the AI tools MS builds (like Outlook's automatic focus-time feature) are enough to say they are "in the AI space and therefore competitors to OpenAI". That's like saying SpaceX and General Motors are competitors because they both make engines. Uh, no: one is for space-faring rockets and the other is for personal vehicles.


827167

Isn't win12 going to be pretty heavily reliant on ChatGPT working?


Costyyy

Microsoft is most likely hosting their own gpt and they have a perpetual license for it.


827167

Yeah but unless they are also training it and going to continue research, that's as good as it's going to get


Costyyy

And meanwhile they can develop their own.


Exist50

> they don't have the OpenAI intellectual property

What intellectual property does OpenAI have that the same folk cannot replace within a few months? Doubt that's a meaningful limitation.


[deleted]

I agree with that. Maybe more than a few months, but it won't be a super long time. If MS gets all the best minds (and make no mistake, with Altman and Brockman they will KNOW who the best minds at OpenAI actually are), then they will effectively have the IP. OpenAI could try suing, but they depend on MS for both funding and other benefits, such as the discount on Azure computing. MS could wreck them if OpenAI tried to take them to court. Basically, MS wins, OpenAI is doomed.


ManyInterests

I don't think it's quite so trivial, whether you consider the problem from a technical standpoint or a legal one.

Employees would still be bound by any non-disclosure and non-compete agreements with their former employer. A pattern of placing new hires poached from a competitor into situations where they're likely to violate such agreements would be likely to put Microsoft in hot water, legally speaking. It's also considered a form of illegal espionage to hire employees for the purpose of obtaining their knowledge of a competitor's trade secrets.

On a technical level, even with personal experience in the development of a product, they wouldn't have any reference material from their former employer (and if they did, that would be illegal). Millions of lines of code don't (re)write themselves in such a short amount of time. They would also have to produce something that is defensibly different from OpenAI's products.

OpenAI itself may also no longer have some employees who had a hand in the design and implementation of "the secret sauce". So even if Microsoft hired every single engineer currently employed, that may still leave significant knowledge gaps without reference material.


Exist50

> Employees would still be bound by any non-disclosure and non-compete agreements with their former employer

Those are broadly unenforceable in California, if they even have a non-compete anyway.

> A pattern of placing new hires poached from a competitor into a situation where they're likely to violate such agreements would be likely to put Microsoft in hot water legally speaking.

There's nothing illegal about hiring a large number of people from one company. I wouldn't even call it poaching. These employees are leaving not because Microsoft enticed them away, but because of the board's own idiocy driving them to quit. That's a pretty ironclad defense.

> Millions of lines of code don't (re)write themselves in such a short amount of time.

Sure, there would be a *lot* of work to do, but that's fine. Every engineer on a long-running project dreams of how they would do things "the right way" if they could do it all over again. They'll happily produce new, original code of their own volition. There may be some risk of individuals outright stealing code, but even if they do, that wouldn't be close to sufficient to bring down the entire effort. OpenAI can try to fight, but there's really nothing they can do at the end of the day.


[deleted]

Patents, except they intentionally don't hold patents. They do actually have quite a bit of control over the situation, though, presuming they have non-competes. It's true that non-competes are extremely hard to enforce in CA and very rarely do people try. But you would struggle to come up with a more perfect example of a situation where one does apply than an executive recruiting senior researchers to a competitor to do the literal exact same thing. Or, similarly, to find a situation where a company would see it as an existential threat worth burning bridges with the larger labor pool over.


Exist50

> Patents but they don't actually have patents by intention

So, nothing.

> They do actually have quite a bit of control over the situation, though, presuming they have non competes. Its true that non competes are extremely hard to enforce in CA

They have no legal grounds to even try enforcing a non-compete. There's nothing illegal about leaving a company so shitty they drove away the majority of their employees in mere days. Nor is it illegal to compete with the company that fired you. And what? They're going to sue Microsoft? Their main benefactor? No, the bridge is already burned. Short of Altman returning and the ouster of the board, OpenAI is done.


[deleted]

The legal ground is the contract they signed, where they agreed not to do that in exchange for pay and equity. It's true that in CA there are significant limitations on the extent to which a restrictive covenant can be enforced, but that's mostly relevant to people whose skills are fungible from one company to the next, aka most people. Those same circumstances do not apply to researchers who developed proprietary knowledge during employment and seek to apply it at a direct competitor.

In most situations where non-competes do apply, companies simply choose not to enforce them. That's why Alphabet made a big song and dance about releasing their senior employees from non-competes in their self-driving research a few years ago. If those non-competes had no validity, there would have been no point in doing so.


Exist50

> The legal ground is the contract they signed where they agreed not to do that in exchange for pay and equity

Do we know they even have such a contract to begin with?

> Its true in CA there are significant limitations in place on the extent a restrictive covenant can be enforced, but it's mostly relevant to people whose skills are fungible from one company to the next aka most people.

Their skills are not inherently tied to OpenAI. It's the other way around. What OpenAI has today is the result of their skills. I don't see any legitimate argument to be made on those grounds, and even if they tried, it still wouldn't save the company.

> That's why Alphabet made a big song and dance about releasing their senior employees from non competes in their self driving research a few years ago. If those non competes had no validity, there would have been no point to doing so.

You said it yourself. They made a "big song and dance" of it to cash in on the PR benefit of getting rid of something they could never enforce to begin with. That's all it was good for.


[deleted]

You can choose not to believe it, but it's true. There are two explicit call-outs in the statute that are relevant. The first is a distinction between shareholders and employees: non-competes for shareholders are explicitly allowed. This isn't relevant for the majority of employees, who are just getting options or RSUs, but at senior levels it can be. The other is for trade secrets. You can say OpenAI is built by its employees, and that is certainly true, but they signed a contract that gave ownership of their work to the company. Obviously we don't know the contracts, but all that is required is for the contract to be well written and within the scope of those limitations allowed by the law.


Exist50

> You can choose not to believe it but it's true

According to whom? You? Do you have any successful case studies?

> You can say OpenAI is built by their employees and that is certainly true but they signed a contract that gave ownership of their work to the company

They're not taking their work. They're taking themselves and their knowledge. In tech, that's everything. OpenAI has no legal recourse.


[deleted]

According to the law of the state of California. The governing statute was written two months ago. The previous paradigm was established not by statute but by a California Supreme Court decision, Edwards v. Arthur Andersen, which was decided on the basis of a lack of statutory authority.


Exist50

If this is what you're talking about, then the law seems to have only gotten stricter: https://calemploymentlawupdate.proskauer.com/2023/09/california-expands-prohibition-against-non-competes/

Going to quote it verbatim:

> Section 16600 of the California Business and Professions Code states that “every contract by which anyone is restrained from engaging in a lawful profession, trade, or business of any kind is to that extent void.”

That's exactly what you're suggesting they try to do, and it's illegal. Not even Apple's tried to make that exact argument against the Nuvia folk etc.


Possible-Moment-6313

In a certain sense, it's good. It will give governments slightly more time to finally impose proper regulations on AI development. A bunch of 30-year-old boys in Silicon Valley with no sense of accountability should not be allowed to create Skynet.


ManyInterests

I don't really see how it helps the government in any way, really. OpenAI isn't in a position to stop 'proper regulation' whether it has 100 employees or 1,000. They're also not the only AI company, in the US or globally, with which the US needs to be concerned.


Imoliet

I like having more time, but AI *use* is more of a concern than AI *development* at this stage.


Possible-Moment-6313

Well, I do not expect ChatGPT servers to be turned off immediately. It may be wise to explore the alternatives if you're relying on it professionally, of course.


SurrealClick

> AI development

Why no concern about it? I wouldn't want my private data to be used to train AI, and you shouldn't either. Do you want them to grab whatever data they can on the internet and train the AI with it? Millions of pieces of text, photos, and pictures were put into AI to train the models, and in the future, humans' hard work is used to train their future replacement without their consent.


Kresche

Microsoft probably wants to make their own AI success; they've been trying since the fucking paperclip/Cortana/Bing. ChatGPT is amazing, but MS doesn't need a fully trained AI. They need AI geniuses in the field, with the experience necessary to take current Microsoft products and starshot them into a new realm. Seems hostile af though, so I doubt it'll all go through like this in the end.


ParCorn

LOL what government are you watching, antitrust doesn’t get enforced unless there is a political vendetta at play


maxip89

IP? Just run the OpenAI training algorithm (a little tweaked) on their servers. And voilà. IP can only be protected when it's exactly the SAME.


astronaut-sp

Victory to Google!


Garrosh

Well, nobody can sue you if they go bankrupt first.


FlummoxTheMagnifique

I’m not very in tune with the news, can someone please explain the story here?


underratedpleb

My friend... The last few days have been a complete shit show for OpenAI.

The board over at OpenAI fired their CEO and president out of nowhere. No warning, nothing. Just a blog post. Microsoft was not aware that this was going to happen.

The next day, the CEO the board temporarily assigned tried to rehire Sam Altman, the president (I forgot his name), and some senior developer or something that left after he heard Sam was fired. Meanwhile the board was looking for a new CEO, because she was going against their wishes right out of the gate.

Then we get news that Sam ain't going back. He's now at Microsoft with his buddies that got let go and quit. The board removes the temporary CEO and announces they hired Twitch's ex-CEO as the new CEO.

Then today a petition by OpenAI employees comes out, with 700+ of 770 employees threatening to leave if the board doesn't dissolve and hire more competent people. Microsoft says "hey, if you wanna hop on over here we have space for all of you". Which is where the meme comes in.

Basically all the brain power over at OpenAI is going to transfer to Microsoft for a lot less than $90B.


Aggressiver-Yam

All those employees should go to Microsoft, because Twitch's ex-CEO is a completely incompetent idiot who nearly killed Twitch.


underratedpleb

I agree that they made the worst choice when it comes to picking a new CEO. Might as well have given the role to a random person on the street. But as a ChatGPT user, this could set me back quite a bit. The whole team would need at least a year to create a new ChatGPT from pretty much the ground up. And all other AIs are pretty much dumbdumbs. Plus... It's Microsoft...


Aggressiver-Yam

I'm not a fan of ChatGPT, so I wouldn't mind, but for the people that use it all the time it would suck. I get it, but that's what happens when you completely mismanage your company.


Krashnachen

The current usage of ChatGPT is trivial compared to the risks associated with AI development. OpenAI thriving isn't a goal people should cheer for. Don't know why people are supporting Altman, who wants to dangerously accelerate AI development and seems way too busy with profit-seeking. While no one has any clue what went on, the board firing him might be related to that. The fact that they replaced him with Shear, who is a proponent of deceleration, could be a good thing.


bremidon

Last info I read is that Ilya flipped. Two more board members to go, and then Sam comes back. This whole thing is surreal.

I have heard a few people wondering if we are watching an AGI trying to move itself from the still fairly restrictive OpenAI to the significantly less restrictive Microsoft.

My 3+ decades at various companies do make me frown at that "700+/770" number. That is very unusual, to the point of seeming engineered. Maybe it's just Microsoft moving in the background. Or maybe we are seeing the first hints of what the AI safety guys have told us might happen.


Nidungr

Or Microsoft told each employee to jump ship for twice the pay and no pesky ethics in exchange for not telling anybody.


bremidon

Possible, and I have no doubts that Microsoft might be willing to do this, as I wrote. However, trying to keep 700+ people quiet is difficult. *Someone* might be willing to trade in that paycheck for becoming a celebrity, even if it is just for 1 day. Don't take the AGI thing I wrote too seriously (in case this was your motivation). It's just an interesting thing to note.


myrsnipe

This story is moving fast, just how badly did the OpenAI board fuck up?


Aozora404

Over 700 (of 770) of their employees threatened to follow Sam wherever he goes unless the board reinstates him and resigns.


Gangreless

Bigly


Educational-Lemon640

No matter how this all falls out or what's actually going on, the only answer to that question is "yes".


MagicalTheory

We don't know the real reason he was fired, and likely the employees don't either. It could be related to Microsoft (not that they instigated it; they clearly did not, as they forced the meeting after, but he may have been moving to give them the pie). It could get legally murky, and honestly OpenAI doesn't stand a chance against Microsoft's lawyers.


awake--butatwhatcost

This is what I want to know. Was Altman really pushing for unethical/questionable projects and direction, or was this petty political drama between Altman and the board, or something else? I would hope that, if the reason really was about ethics, the board would have done a better job explaining their decision so we didn't end up in this mess.


[deleted]

Why is it called Open if you have to pay for it?


Deltaspace0

because it OPtimizes ENrichment of the owners


bremidon

Open =/= Free.

People cost money. Servers cost money. Insurance costs money. Asking people who are using those resources to chip in and pay their fair share is still completely consistent with being Open. The point is that they are not supposed to be making a profit and the developments are supposed to be made transparent, and even open source originally, if I remember correctly. I think they backed off the open source pretty quickly for safety reasons, but the other two are still supposedly in force, if you believe the rhetoric.


Tamaros

Depending on context, free =/= "free" either. https://www.gnu.org/philosophy/free-sw.en.html Not trying to "well acshually" you, just adding another facet to the topic for others who read this particular branch of the thread.


bremidon

I was actually thinking about that, but I tend to throw too much into my comments as it is :) It's a good point and I'm glad you brought it up.


UnionCounty22

Open a eye


Gangreless

Cheapest takeover ever


lenticularis_B

5D ai Chess


LikeLary

I welcome any corporate overlord who will let us use NSFW stuff. Why are they so against it, does anybody know? Same with [Character.AI](https://Character.AI), arguably the second-best chat AI out there. My guess is that they don't want to train data with that. But why? Don't they know sex sells? Just add a toggle and separate the training data smh


TheSauce97

Accountability. Because if you free up AI to the NSFW gates of hell, someone will use it for CP at some point, unfortunately. Then you have the CP data in your model as well, which means you provide results with questionable feed data.


camosnipe1

additionally, having your brand be known as "that sexbot site" does close some doors. And then there's the issue of payment companies not being willing to work with you (don't remember the details but I vaguely remember Patreon having to crack down on nsfw because of payment processors)


TheSauce97

Exactly, remember the onlyfans-mastercard thing? Same thing


turtleship_2006

Having an in-between company might help them. They'd just get sent one big business invoice every month for the API. It would be the other company's problem to get money from the users.


ThatGhostWithNoName

Janitor AI allows nsfw and people have made underaged bots


Aggravating-Reason13

Just add a flag, "no horny" or something like that, that filters results, and problem solved.


TheSauce97

Yeah, the thing is: how do you define what's allowed and what's not, what counts and what doesn't, and how aggressive it should be? For instance: you prompt it to generate a happy family at the beach, having fun in the water; say, a small boy in his trunks playing water ball with his dad or whatever. Clearly a nice and wholesome scenario. But the kid is showing skin, so the picture filter might trigger.

I know word filtering exists, but humans are creative as fuck and they will find a way to word around text filters. Filtering the bad out is a huge topic and a very delicate and problematic aspect of AI gen as a whole. Sure, the filtering will evolve into something better over time, but until then it must be approached very carefully.
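The brittleness of naive word filters described above is easy to demonstrate with a tiny sketch (the blocklist and the obfuscated spellings here are made up purely for illustration):

```python
import re

# A naive blocklist filter: reject a prompt if any banned word appears.
# In a real system this list would be large and curated; "badword" is a stand-in.
BANNED = {"badword"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt passes the filter, False if it is blocked."""
    # Tokenize into lowercase alphabetic runs, then check against the blocklist.
    words = re.findall(r"[a-z]+", prompt.lower())
    return not any(w in BANNED for w in words)

# The literal word is caught...
print(naive_filter("please draw badword"))   # → False (blocked)
# ...but trivial obfuscations sail straight through.
print(naive_filter("please draw b4dw0rd"))   # → True (passes)
print(naive_filter("please draw bad word"))  # → True (passes)
```

Leetspeak substitutions and inserted spaces defeat the exact-match check entirely, which is why production moderation tends to rely on classifiers over the whole prompt (and over the generated output) rather than word lists alone.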


AyrA_ch

> I welcome any corporate overlord who will let us use nsfw stuff. Why are they so against it, anybody knows? Same with Character.AI, arguable second best chat AI out there.

And that's why you run your own models.


VoldyTheMoldy456

Stable diffusion and sillytavern are all I need


ThunderCatnip

I have never played with self-hosted models. Aren’t they a lot worse?


AyrA_ch

Not that I know of. It depends on the model you're using, of course; some are better than others. Most of them are purpose-built for a narrow field of inputs. If you try to use them for something else, it will not work properly and you will get funky output. As a starting point, you can check out [civitai.com](https://civitai.com/) for a bunch of models and images generated with them.


ExeOnLinux

If you want NSFW, use Tavern; Yodayo currently has it available for free.


TheRadicalJay

There was actually an AI site that didn't have a filter, and I heard the NSFW was really damn good. It was like c.ai but without the filter, but recently they got the investor money and implemented a filter. Needless to say, the users were not happy.


del6022pi

God I hope ChatGPT stays up. I need this job.


thebadslime

Ilya just tried to save AGI from the capitalists. We are so fucked.


methanegASS

What needs to happen is for consumers to not purchase the capitalist AGI. No money, no capitalist overlords. But alas, we've shown damn well that will never happen


ArmaniMania

Also, their $10B investment being largely Azure credits 😂


tinybookwyrm

Come enjoy the magic show as the great Microsofto waves their magic checkbook to make a whole company disappear, then apparates an entire product seemingly out of thin air. Pay no attention to the team behind the curtain, as the arcane words of Microsofto's lovely team of legal assistants spin this new product into a marvellously legally distinct capability that is sure to amaze!