Digimasters Shorts

Digimasters Shorts - BBC and Dow Jones sue Perplexity AI for content theft, German study exposes massive AI carbon footprint, Meta offers $100M to steal OpenAI talent

Adam Nagus, Carly Wilson Season 2 Episode 122

Send us a text

Welcome to Digimasters Shorts, your quick-hit source for the latest and most intriguing developments in the digital world. Join hosts Adam Nagus and Carly Wilson as they unpack key stories on AI ethics and legal battles, environmental impacts of large language models, cybersecurity challenges, innovative AI features, and the fierce competition among tech giants for AI talent. Stay informed with concise insights and expert analysis on the trends shaping the future of technology.

Support the show

Don't forget to check out our larger sister podcast - The Digimasters Podcast - here, which features many expert guests discussing AI, Career Mentoring, Fractional Careers, Digital and much, much more.


Adam N2:

Welcome to Digimasters Shorts, we are your hosts Adam Nagus

Carly W:

and Carly Wilson delivering the latest scoop from the digital realm. The BBC has threatened legal action against Perplexity AI over claims that its AI models were trained using BBC content without permission. A letter was sent to Perplexity's C.E.O, Aravind Srinivas, demanding an end to scraping BBC content and deletion of any copies held. The BBC is seeking financial compensation and has warned of an injunction if its demands are not met. This move follows industry-wide concerns about AI firms using copyrighted work without authorization. BBC Director General Tim Davie has called for stronger intellectual property protections to safeguard national content. Rupert Murdoch's Dow Jones has also filed a lawsuit against Perplexity for alleged illegal copying. Perplexity denies the claims, calling them manipulative and asserting it does not build or train foundation models itself. The BBC argues Perplexity's tool competes directly with its services by bypassing user access to official content. In response, the U.K government is reviewing copyright laws related to AI, with a promise that creative industries will not be harmed. Several major publishers have entered licensing agreements with AI companies, highlighting the growing tension over content use in AI development.

Adam N2:

A new study from Germany’s Hochschule München University of Applied Sciences reveals the significant environmental impact of large language models, or L.L.Ms, used in AI tools like Chat G.P.T. Researchers analyzed 14 different L.L.Ms by asking each 1,000 benchmark questions and measuring the carbon emissions generated from their token outputs. Models with advanced reasoning capabilities generated up to 50 times more CO2 emissions than simpler text-only models. The study found that while larger, more complex models tend to be more accurate, they also consume substantially more energy and produce higher emissions. For example, Deep Cogito 70B, the most accurate model tested at 84.9% accuracy, emitted three times the CO2 of similar-sized but less complex models. DeepSeek’s R1 70B reasoning model produced emissions equivalent to a 15-kilometer car trip per quiz, yet its accuracy was lower at 78.9%. Meanwhile, smaller models like Alibaba’s Qwen 7B were far more energy-efficient but less accurate. The findings highlight a clear trade-off between AI accuracy and sustainability. Researchers urge users to adopt energy-efficient practices by limiting the use of high-capacity models when possible and requesting concise responses to reduce emissions. Ultimately, this study stresses the growing need for more environmentally friendly AI technologies as their use becomes widespread.

Cybersecurity in artificial intelligence is evolving to address complex software vulnerabilities through automated reasoning and deep code analysis. Traditional benchmarks fall short, often relying on small codebases and simplified tasks that do not reflect the complexity of real-world systems. To bridge this gap, researchers at UC Berkeley developed CyberGym, a comprehensive benchmarking tool with over 1,500 tasks derived from actual vulnerabilities in open-source projects. Each task includes full codebases, executable programs, and vulnerability descriptions, requiring AI agents to generate proof-of-concept exploits. CyberGym introduces four difficulty levels that gradually increase the challenge by adding more contextual information about the vulnerabilities. Testing revealed that current AI agents, such as OpenHands paired with Claude-3.7-Sonnet, can reproduce only a fraction of vulnerabilities, with success rates dropping sharply for longer exploits. Notably, richer inputs improved performance, yet overall effectiveness remained limited, highlighting the difficulty of real-world security tasks. Despite these challenges, AI agents discovered new zero-day vulnerabilities, demonstrating potential for future applications. This research underscores the need for robust evaluation frameworks to better assess and improve A.I's role in cybersecurity. CyberGym sets a new standard for testing AI agents’ capabilities in complex software security environments.

Carly W:

Perplexity AI has launched a new Text-to-Video feature on X, allowing users to generate videos from static images with accompanying audio. The feature quickly gained popularity, with users creating videos of influencers enjoying traditional Indian snacks like samosas and chai. However, the AI refuses certain prompts deemed inappropriate or sensitive, such as requests involving political figures or stereotypical representations. While playful and quirky prompts like animated sponge and starfish characters are accepted, some culturally specific videos are blocked without explanation. Founded in 2022 by Andy Konwinski, Johnny Ho, Denis Yarats, and IIT Madras alumnus Aravind Srinivas, Perplexity AI has been nicknamed the "Google Search Killer." Aravind Srinivas, the C.E.O, previously worked at Google DeepMind, Open A.I, and Google Brain. The company employs around 700 people and is valued at $14 billion, backed by investors including Jeff Bezos and Nvidia. Despite its rapid growth, Perplexity has faced criticism for allegedly scraping proprietary content without proper attribution, leading to legal challenges from Forbes and The New York Post. Additionally, Wired revealed in 2024 that the firm bypassed website restrictions meant to prevent unauthorized data mining. Perplexity AI continues to innovate amid scrutiny and industry competition.

Meta C.E.O Mark Zuckerberg is accelerating his aggressive hiring spree in artificial intelligence, recently securing key talent from notable startups. Following a $14.3 billion investment in Scale AI to bring onboard founder Alexandr Wang, Meta has now recruited Daniel Gross and former GitHub C.E.O Nat Friedman. Gross leads Safe Superintelligence alongside Ilya Sutskever, who declined Meta's acquisition and recruitment attempts earlier this year. Gross and Friedman will join Meta to work under Wang, while Meta gains a stake in their venture capital firm, NFDG.
This move intensifies the fierce competition among tech giants like Meta, Google, and Open A.I in the race to develop artificial general intelligence. Open A.I C.E.O Sam Altman disclosed that Meta has offered signing bonuses of up to $100 million to lure talent, yet top Open A.I employees remain loyal. Meanwhile, Open A.I has invested $6.5 billion in hiring and acquisitions, including designer Jony Ive’s startup. Other notable AI talent moves include Google re-acquiring the founders of Character.AI and Microsoft recruiting DeepMind co-founder Mustafa Suleyman. Gross brings valuable experience from Apple and Y Combinator, while Friedman has led multiple startups and was GitHub’s C.E.O. Meta promises to reveal more details soon about its expanding superintelligence team and efforts.

Don:

Thank you for listening to today's AI and Tech News podcast summary... Please do leave us a comment, and for additional feedback, please email us at podcast@digimasters.co.uk. You can now follow us on Instagram and Threads by searching for @DigimastersShorts, or search for Digimasters on LinkedIn. Be sure to tune in tomorrow and don't forget to follow or subscribe!

People on this episode