Talent Retention Woes for OpenAI This Week

Hello, everyone, and welcome to TechCrunch’s regular AI newsletter.

This week in AI news, OpenAI experienced another significant departure as co-founder John Schulman left the company to join Anthropic, a competitor. Schulman, who was instrumental in developing ChatGPT, announced his move on X, citing his desire to focus more on AI alignment—the science of ensuring AI behaves as intended—and to engage in more technical work.

However, the timing of Schulman’s exit, coinciding as it did with OpenAI president Greg Brockman taking an extended leave until the end of the year, raises questions about whether the departure was opportunistic.

On the same day Schulman announced his departure, OpenAI revealed changes to its DevDay event format, opting for a series of developer engagement sessions rather than a one-day conference. A spokesperson indicated that OpenAI would not be announcing a new model during DevDay, hinting that work on a successor to GPT-4o is progressing slowly. Delays in Nvidia’s Blackwell GPUs could further slow this progress.

Is OpenAI facing challenges? Did Schulman foresee difficulties ahead? It’s clear that the outlook at Sam Altman’s OpenAI is not as bright as it was a year ago.

Ed Zitron, a PR expert and tech commentator, recently discussed in his newsletter the numerous hurdles that OpenAI must overcome to maintain its success. His well-researched piece highlights the growing pressure on OpenAI to deliver results.

OpenAI is reportedly on track to lose $5 billion this year. To cover rising costs related to staffing (AI researchers are particularly expensive), model training, and large-scale deployment, the company will need to raise a substantial amount of capital within the next 12 to 24 months. Microsoft, which holds a 49% stake in OpenAI’s for-profit arm and maintains a close working relationship with the company, is the obvious candidate. But with Microsoft’s capital expenditures up 75% year-over-year (to $19 billion) in anticipation of AI returns that have yet to materialize, it’s far from certain it is willing to pour billions more into a risky, long-term bet.

It would be surprising if OpenAI, the most prominent AI company globally, failed to secure the necessary funds from some source. However, this financial lifeline might come with less favorable terms and could lead to the much-discussed alteration of OpenAI’s capped-profit structure.

Survival may require OpenAI to deviate further from its original mission, entering uncharted and uncertain territory. Perhaps this shift was too difficult for Schulman (and others) to accept. It’s understandable; with growing skepticism from investors and enterprises, the entire AI industry, not just OpenAI, faces significant challenges.

News Roundup:

  • Apple Intelligence’s Limitations: Apple introduced its Apple Intelligence features with the release of the iOS 18.1 developer beta last month. However, the Writing Tools feature balks at rewriting text that touches on sensitive topics like swearing, drugs, and murder.
  • Google’s Nest Learning Thermostat Update: After nine years, Google has updated the Nest Learning Thermostat, announcing the Nest Learning Thermostat 4 ahead of next week’s Made by Google 2024 event.
  • X’s Chatbot Spreads Misinformation: Grok, X’s AI-powered chatbot, has been spreading false information about Vice President Kamala Harris on the platform formerly known as Twitter. Five secretaries of state responded with an open letter to Elon Musk, CEO of Tesla, SpaceX, and X, saying the chatbot wrongly suggested Harris isn’t eligible to appear on some 2024 U.S. presidential ballots.
  • YouTuber Sues OpenAI: A YouTube creator is pursuing a class action lawsuit against OpenAI, alleging the company used millions of YouTube video transcripts to train its generative AI models without notifying or compensating the content creators.
  • AI Lobbying Increases: AI-related lobbying at the U.S. federal level is intensifying as the generative AI boom continues and with an upcoming election that could influence future AI regulation. The number of groups lobbying the federal government on AI issues grew from 459 in 2023 to 556 in the first half of 2024.

Research Paper of the Week:

“Open” models like Meta’s Llama family, which offer developers flexibility, can foster innovation but also present risks. While many have licenses with restrictions and built-in safety measures, there’s little to stop bad actors from misusing open models to spread misinformation or create content farms.

However, a team of researchers from Harvard, the nonprofit Center for AI Safety, and other institutions has proposed a “tamper-resistant” method that maintains a model’s “benign capabilities” while preventing undesirable behavior. In their experiments, the method proved effective at blocking “attacks” on models, such as tricking them into providing prohibited information, at the cost of only a slight drop in accuracy.

There is a downside, though. The method doesn’t yet scale to larger models because of its computational overhead, which the researchers say will require further optimization to reduce, so it might be some time before we see it in widespread use.
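To make the intuition concrete, here is a heavily simplified sketch of the general recipe this line of work builds on: simulate a fine-tuning “attack” in an inner loop, then update the defended weights so the attacked copy still fails at the restricted task while benign performance is preserved. To be clear, this is a toy illustration of the adversarial meta-learning idea, not the paper’s actual method; the tiny linear “model,” placeholder data, learning rates, and loss weighting are all hypothetical stand-ins.

```python
# Toy sketch of tamper-resistance training: NOT the paper's method.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in "model": a single linear layer with parameters W, b.
W = torch.randn(2, 16, requires_grad=True)
b = torch.zeros(2, requires_grad=True)
outer_opt = torch.optim.Adam([W, b], lr=1e-3)

def forward(W, b, x):
    return x @ W.t() + b

# Placeholder data: a "benign" task to keep and a "harmful" task an
# attacker would try to fine-tune back into the model.
xb, yb = torch.randn(64, 16), torch.randint(0, 2, (64,))
xh, yh = torch.randn(64, 16), torch.randint(0, 2, (64,))

for step in range(200):
    # Inner loop: simulate a few attacker fine-tuning steps on the
    # harmful data, keeping the graph so gradients reach (W, b).
    Wa, ba = W, b
    for _ in range(3):
        attack_loss = F.cross_entropy(forward(Wa, ba, xh), yh)
        gW, gb = torch.autograd.grad(attack_loss, (Wa, ba), create_graph=True)
        Wa, ba = Wa - 0.1 * gW, ba - 0.1 * gb

    # Outer objective: keep benign loss low while keeping the *attacked*
    # copy's harmful loss high, i.e., make the attack fail. The clamp
    # just stops this toy objective from diverging.
    benign_loss = F.cross_entropy(forward(W, b, xb), yb)
    attacked_harm = F.cross_entropy(forward(Wa, ba, xh), yh)
    loss = benign_loss - torch.clamp(attacked_harm, max=4.0)
    outer_opt.zero_grad()
    loss.backward()
    outer_opt.step()
```

The inner loop’s backpropagation-through-fine-tuning is also where the scaling pain comes from: differentiating through attack steps on a full-size LLM is far more expensive than ordinary training.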

Model of the Week:

A new image-generating model called Flux.1 has recently emerged, challenging established models like Midjourney and OpenAI’s DALL-E 3. Developed by Black Forest Labs, a startup founded by former Stability AI researchers, Flux.1 is a family of models; the most advanced version, Flux.1 Pro, is accessible via an API. Two smaller models, Flux.1 Dev (released under a noncommercial license) and Flux.1 Schnell (German for “fast,” released under the permissive Apache 2.0 license), are available on the AI development platform Hugging Face. These models reportedly rival Midjourney and DALL-E 3 in image quality and prompt adherence, and they’re particularly good at inserting text into images, a historically difficult task for image generators.

However, Black Forest Labs has not disclosed the data used to train these models, raising concerns about potential copyright risks. The startup has also not detailed how it plans to prevent misuse of Flux.1, adopting a hands-off approach for now—so users should proceed with caution.
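For readers who want to try the openly released weights themselves, a minimal sketch of generating an image with Flux.1 Schnell through Hugging Face’s diffusers library might look like the following (this assumes a recent diffusers release with Flux support plus the accelerate package, a suitably large GPU, and acceptance of the model’s terms on Hugging Face; exact argument names could shift between versions):

```python
# A minimal sketch: text-to-image with Flux.1 Schnell via diffusers.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # trades some speed for lower VRAM usage

image = pipe(
    'a storefront with the words "Black Forest Labs" on its awning',
    num_inference_steps=4,  # Schnell is distilled for few-step sampling
    guidance_scale=0.0,     # Schnell doesn't use classifier-free guidance
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("flux_schnell.png")
```

The quoted phrase in the prompt is a deliberate test of the text-rendering strength mentioned above.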

Grab Bag:

Generative AI companies increasingly lean on the fair use defense when training models on copyrighted data without the owners’ permission. For instance, Suno, an AI music-generation platform, recently argued in court that it can train on copyrighted songs without notifying or compensating the artists or labels.

Nvidia appears to be following a similar strategy, reportedly training a massive video-generating model, codenamed Cosmos, on YouTube and Netflix content. Nvidia’s management believes this approach will withstand legal scrutiny under current U.S. copyright law.

Whether fair use will protect companies like Suno, Nvidia, OpenAI, and Midjourney from legal challenges remains to be seen, and these lawsuits could take years to resolve. It’s possible the generative AI bubble could burst before any legal precedent is established. If not, creators—ranging from artists to musicians to writers—may face a future where anything they publish publicly is fair game for AI model training.
