Apple to power AI tasks using own server chips / Mistral raising funds at $6B valuation / TikTok to auto-label more AI content [EN]
Host 3:"Apple's latest venture, Project ACDC, promises to revolutionize the cloud - and here I was, thinking the only thunder we'd see was from guitar riffs, not data centers!"
Host 1:"Ever pondered how AI's creativity might not just match, but eclipse our own? This episode, we're unpacking the latest in AI innovation that's stirring the pot in the tech world. Get ready for an insightful journey into how these advancements challenge our traditional views on creativity. It's not just a ride; it's a deep dive into the future of AI that entrepreneurs and researchers won't want to miss."
Host 1:So, you know how we're always glued to our gadgets, right?
Host 2:Oh, you bet. I mean, my iPad is practically an extension of my arm at this point.
Host 1:Well, Apple has some juicy plans for those tech appendages of ours. They're cooking up a plan to power some of their AI features using their own chips in data centers.
Host 2:Hold up, their own chips? You mean like, the crunchy, salty kind?
Host 1:(laughs) No, not snack chips. I'm talking about the M2 Ultra chip, the same one they use in their top-of-the-line Mac Pro and Mac Studio computers.
Host 2:Ah, gotcha. So, what's the big deal about these chips?
Host 1:Well, these chips are like the brainiacs of the tech world. They'll process advanced AI tasks in cloud-computing servers. So, things like creating images, summarizing long articles, and even drafting lengthy emails.
Host 2:So, no more pulling my hair out trying to summarize my favorite web articles, huh?
Host 1:Well, not exactly. The simpler tasks like that will still be handled by chips directly in our devices. But the heavy-duty stuff? That's going to the cloud.
Host 2:That's pretty rad. But what about privacy? I mean, Apple's always been like a fortress when it comes to that, right?
Host 1:You're spot on. Apple has always favored on-device processing to keep our data safe. But the good news is, insiders suggest that existing processor components can also ensure our privacy in the cloud.
Host 2:Phew, that's a relief. I mean, I don't want anyone snooping on my...uh...extensive cat video research.
Host 1:(laughs) Of course not. And to keep up with the AI boom driven by technology like OpenAI’s ChatGPT, Apple put the pedal to the metal on its cloud AI plan, which it kick-started in two thousand twenty-one.
Host 2:Pedal to the metal, huh? Sounds like they're really revving up for this.
Host 1:They sure are. In fact, the project, code-named Project ACDC, or Apple Chips in Data Centers, is part of Apple’s AI-focused iOS eighteen launch this fall.
Host 2:Well, I'm all ears to see what they whip up. I mean, as long as it doesn't involve actual potato chips, I'm all in.
Host 1:(laughs) I think we're safe on that front.
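For listeners who want to picture how that on-device versus cloud split might work, here is a minimal, purely hypothetical sketch of a task router: cheap tasks stay on the device, heavier generative work gets sent to a server. The task names, cost scores, and threshold are all invented for illustration and do not reflect Apple's actual Project ACDC design.

```python
# Purely illustrative sketch: route AI tasks to on-device or cloud processing.
# The task names, cost scores, and threshold below are hypothetical and are
# not Apple's actual design for Project ACDC.
from dataclasses import dataclass

# Rough "compute cost" scores per task type (invented for illustration).
TASK_COST = {
    "autocorrect": 1,
    "summarize_article": 5,
    "draft_long_email": 7,
    "generate_image": 9,
}

CLOUD_THRESHOLD = 5  # tasks at or above this score go to the server


@dataclass
class Task:
    kind: str
    payload: str


def route(task: Task) -> str:
    """Return 'on_device' or 'cloud' based on the task's estimated cost."""
    cost = TASK_COST.get(task.kind, 1)
    return "cloud" if cost >= CLOUD_THRESHOLD else "on_device"


if __name__ == "__main__":
    for kind in TASK_COST:
        print(kind, "->", route(Task(kind=kind, payload="...")))
```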
Host 3:I read somewhere that scientists are programming AI to predict weather patterns. So, now we've got clouds digitally predicting their own mood swings. What's next? Raindrops diagnosing our vitamin D deficiency?
Host 1:My tech-savvy friend, let's take a detour from our usual chat about your latest drone mishaps and dive into the world of AI startups. You're going to love this.
Host 2:Oh, you know I'm all ears when it comes to artificial intelligence. What's the scoop? And hey, that drone was asking for it!
Host 1:So, there's this French AI startup, Mistral AI. They're currently raising funds and their post-money valuation has skyrocketed to a whopping six billion dollars, up from two billion dollars just last December. That's like going from a bicycle to a rocket ship in no time!
Host 2:Holy smokes! That's some serious growth in a short span. They must be doing something right. Or they've found a magic beanstalk!
Host 1:Absolutely. They're on the verge of securing around six hundred million dollars from their existing investors, General Catalyst and Lightspeed Venture Partners, and a few other potential investors. It's like they're the cool kids at the playground everyone wants to play with!
Host 2:Damn, those are some big names in the investment world. But what's their secret sauce? Do they have a magic wand or something?
Host 1:Well, Mistral is known for developing open-source and proprietary large language models. Think of it as teaching a computer to talk and understand human language. One of their products, Mistral Large, is giving tough competition to GPT-four, another smarty-pants language model.
Host 2:Ah, the language model wars! I love it. But wait, didn't I hear something about them being Europe's OpenAI rival? Or was that just the AI gossip mill?
Host 1:You're right on the money. They released their first paid products in February. And guess what? Microsoft signed a sixteen point three million dollar partnership with them that same month to commercialize their models on its Azure AI platform, which is like a playground for AI, and to help scale up their AI development. It's like they've hit the AI lottery!
Host 2:Holy cow! That's like hitting the AI jackpot. I can't wait to see what they do next. Maybe they'll start making AI that can fly drones better than me!
Host 1:Ha, wouldn't that be something? It's an exciting time in the AI world, isn't it?
Host 2:Hell yeah, it is! And to all our listeners out there, keep your drones safe and stay tuned for more exciting AI news!
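As a bit of context for the Mistral Large mention above: in practice these models are reached over an HTTP chat API. The sketch below assumes an OpenAI-style chat-completions request and the model name "mistral-large-latest"; the exact endpoint path, model identifier, and response shape are assumptions, and MISTRAL_API_KEY is just a placeholder.

```python
# Minimal sketch of calling a hosted chat-style model such as Mistral Large.
# The endpoint path, model name, and response fields are assumptions based on
# a typical OpenAI-style API; check the provider's docs before relying on them.
import os

import requests

API_URL = "https://api.mistral.ai/v1/chat/completions"  # assumed endpoint
API_KEY = os.environ.get("MISTRAL_API_KEY", "YOUR_KEY_HERE")  # placeholder


def ask(prompt: str) -> str:
    """Send a single user prompt and return the model's reply text."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "mistral-large-latest",  # assumed model identifier
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask("Summarize today's AI news in one sentence."))
```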
Host 3:Jumped from two billion to six billion? My bank account's envy just met its match.
Host 1:You know how we're always diving headfirst into the AI rabbit hole, right? Well, OpenAI is stirring the pot again. They've got this new draft framework called Model Spec.
Host 2:Oh, you mean the one that's going to dictate how their AI tech behaves in the future? I heard a whisper or two about that.
Host 1:Exactly! Right now, OpenAI models are like the strictest librarians, preventing users from generating explicit content. But they're thinking about changing that. They're exploring whether they can "responsibly" allow ChatGPT to generate NSFW content.
Host 2:Wait, what? You're telling me the AI could start generating adult content? That's wild! But also, kind of makes sense, right? I mean, it's about understanding "user and societal expectations of model behavior" with NSFW content, as they put it.
Host 1:You hit the nail on the head! They're trying to figure out if they can do it in age-appropriate contexts through the API and ChatGPT. It's like walking a tightrope, but it's an interesting direction for sure.
Host 2:And they're inviting feedback from the public on this, right? Until May twenty-second, if I'm not mistaken?
Host 1:Yes, you've got it! They're really trying to make this a collaborative effort. But don't worry, this won't impact existing OpenAI models like GPT-Four. They're still following their current usage policies.
Host 2:Well, that's a relief! I mean, I love a good AI-generated story, but I'm not sure I'm ready for it to get too spicy just yet!
Host 1:Haha, I hear you! We'll have to wait and see how this pans out. But for now, let's keep our AI stories PG-rated, shall we?
Host 2:Sounds like a plan! And to our listeners, we'd love to hear your thoughts on this. Do you think AI should generate NSFW content? Drop us a line and let us know!
Host 3:Ah, just what humanity was clamoring for: another AI churning out risqué poetry. Because, clearly, the pinnacle of technological advancement is teaching robots how to flirt badly.
Host 1:My tech guru, you're always on the pulse of the latest tech trends. So, have you heard about TikTok's new dance with AI-generated content?
Host 2:You know I'm a sucker for tech gossip! Don't keep me in suspense, spill the beans!
Host 1:Alright, brace yourself! TikTok is leveling up. They're going to slap labels on more AI-generated content in their app. They've been doing this for in-app content, but now they're extending it to content whipped up with outside AI tools, like OpenAI's.
Host 2:Holy smokes, that's some futuristic stuff! But why, though? Are they just showing off their tech muscles?
Host 1:Ha, not quite! It's all about keeping it real and transparent. TikTok is using something called Content Credentials. Think of it as a digital birthmark for images, videos, and audio created with generative AI. It's like a nutrition label for digital content!
Host 2:Ah, I see. So, it's like a digital Sherlock Holmes, tracking the origin and editing history of AI-generated content?
Host 1:Spot on! And here's the kicker: OpenAI's DALL-E adds this watermark to AI images, which TikTok can automatically detect and label when uploaded.
Host 2:Wow, that's some serious tech wizardry! But who's the mastermind behind this Content Credentials thing?
Host 1:Ah, the brains of the operation! It's led by the Coalition for Content Provenance and Authenticity, co-founded by Adobe. The technology is even integrated into Adobe tools like Photoshop and Firefly. And big tech players like Google, Microsoft, OpenAI, and Meta are all on board to support the standard.
Host 2:Damn! That's some heavy-hitting tech news. Thanks for the lowdown!
Host 1:Always happy to keep you in the loop! And to our listeners out there, stay tuned for more exciting tech updates!
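For the curious: Content Credentials attach a signed provenance manifest (the C2PA standard) to a media file, and a platform can read it to decide how to label an upload. The sketch below only illustrates that detect-and-label step; it assumes the manifest has already been parsed into a plain dictionary, and the field names used here are hypothetical rather than the real C2PA schema or TikTok's actual pipeline.

```python
# Conceptual sketch of "detect provenance metadata, then label the upload".
# Assumes a Content Credentials manifest has already been parsed into a dict;
# the field names below are hypothetical, not the actual C2PA schema.
from typing import Optional


def label_for(manifest: Optional[dict]) -> str:
    """Return a display label for an upload based on its provenance manifest."""
    if not manifest:
        return ""  # no Content Credentials attached, so nothing to label
    generator = manifest.get("generator", "")
    if manifest.get("ai_generated"):
        return "AI-generated"
    return f"Edited with {generator}" if generator else ""


if __name__ == "__main__":
    # Example manifest a generative-image tool might attach (hypothetical).
    upload = {"generator": "DALL-E", "ai_generated": True}
    print(label_for(upload))  # -> AI-generated
    print(label_for(None))    # -> empty string: no credentials found
```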
Host 3:Digital nutrition labels are next, right? Because my screen time's looking like a high-calorie diet.
Host 1:My dude, you're gonna flip when you hear this. You're always geeking out about artificial intelligence, right?
Host 2:Oh, absolutely. AI is like the avocado toast of the tech world. It's everywhere!
Host 1:Haha, right on! So, there's this AIPI survey that's been causing quite a buzz. Heard about it?
Host 2:No, I haven't. But you've piqued my interest. What's the scoop?
Host 1:Well, it's about AI companies using public data to train their models. It's like they're reading our digital diaries to teach their robots how to talk. Creepy, isn't it?
Host 2:Oh, I get it. It's like using someone's personal journal to teach a parrot how to talk. That's a bit unsettling, isn't it?
Host 1:Exactly! And it's not just a handful of people. We're talking about sixty percent of the respondents. They're not too keen on AI firms having a free-for-all with public data.
Host 2:Wow, that's a significant number. But I bet there's more to this story, isn't there?
Host 1:You bet! Seventy-five percent believe that if companies are going to use data, they should at least pay the creators. And nearly eighty percent are all for government regulations on the matter.
Host 2:Whoa! That's like a triple punch to the gut for AI companies. But it's not entirely shocking. I mean, there have been some pretty high-profile lawsuits recently, right?
Host 1:You're spot on. OpenAI and Microsoft have been in the hot seat. Major publishers and authors have accused them of using their content without permission. Even the New York Times sued them last year.
Host 2:Well, darn! That's some serious stuff. But I guess it's not all doom and gloom for the AI companies, right?
Host 1:Not entirely. OpenAI, for instance, argues that training AI models on publicly available internet data is fair use. They say it's backed by long-standing and widely accepted precedents.
Host 2:Well, it's definitely a complex issue. But hey, that's why we have these chats, right? To figure out the best way forward.
Host 1:Absolutely! So, folks, what's your take? Should AI companies be allowed to freely use public data? Or should there be some regulations in place? Chime in and let us know!
Host 3:AI's life motto: 'I'd agree with you, but then we'd both be wrong, optimistically speaking.'
Host 1:You know, we've been knee-deep in the whole public data use by AI companies thing. But, let's switch gears a bit. There's been a bit of a shuffle in the AI world recently, hasn't there?
Host 2:Oh, you're spot on! It's like a game of musical chairs, but with companies. So, you've got SoftBank Group Corp., a Japanese investment firm, eyeing up this British semiconductor startup, Graphcore Ltd. They're in a bit of a pickle, but they do some pretty cool stuff with large "intelligence processing units" for AI software processing inside data centers.
Host 1:Hold on. Let's break it down for our listeners. When we say "intelligence processing units", we're talking about the brain behind the AI operations in data centers. It's like having a super-smart assistant that never sleeps. Fascinating, isn't it?
Host 2:Exactly! And then, there's Docusign, who's snapping up Lexion, an AI contract-management startup, for a cool one hundred sixty-five million dollars. Lexion's got this nifty NLP platform that turns contract text into structured data. It's like having a personal translator for legal jargon!
Host 1:Well, that's one less headache for businesses. And what's this about Samsung Medison and Sonio SAS?
Host 2:Ah, right. So, Samsung Medison, from South Korea, is buying Sonio SAS, a French company that specializes in fetal ultrasound AI software, for about ninety-two point four million dollars. It's like they're building a United Nations of AI companies!
Host 1:Well, that's one way to put it! It's definitely an exciting time in the AI industry. Let's see how these mergers and acquisitions shape the future of AI. And to our listeners, we'll keep you posted on these exciting developments. So, stay tuned!
Host 3:Ah, another AI invention aimed at 'simplifying' life - because, clearly, pressing a button was too strenuous for the modern human.
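One last aside on the Lexion item from the acquisitions chat: "turning contract text into structured data" just means pulling fields like dates, parties, and amounts out of free text. The toy sketch below does that with regular expressions purely for illustration; Lexion's real platform uses NLP models, and nothing here reflects its actual pipeline.

```python
# Toy illustration of turning contract text into structured data.
# Real contract-management platforms use NLP models; this regex version only
# shows what the structured output might look like.
import re

SAMPLE = (
    "This Agreement is made on May 7, 2024 between Acme Corp and "
    "Globex LLC for a total fee of $165,000,000."
)


def extract_fields(text: str) -> dict:
    """Pull a date, a dollar amount, and the two parties out of contract text."""
    date = re.search(r"\b[A-Z][a-z]+ \d{1,2}, \d{4}\b", text)
    amount = re.search(r"\$[\d,]+(?:\.\d{2})?", text)
    parties = re.search(r"between (.+?) and (.+?) for", text)
    return {
        "effective_date": date.group(0) if date else None,
        "total_amount": amount.group(0) if amount else None,
        "parties": list(parties.groups()) if parties else [],
    }


if __name__ == "__main__":
    print(extract_fields(SAMPLE))
```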