Newsletter


Highlights:

  • Elon Musk’s AI chatbot Grok continues to draw criticism with its latest, minimally censored image generator.

  • An AI lab at MIT has created the AI Risk Repository to document over 700 risks associated with AI systems.

  • SAG-AFTRA has reached a deal with AI startup Narrativ for audio voice replicas of union members in digital advertising.


Credit: Gabby Jones/Bloomberg via Getty Images

Business

Musk’s ‘fun’ AI image chatbot serves up Nazi Mickey Mouse and Taylor Swift deepfakes (The Guardian)

On Wednesday, Elon Musk debuted the latest version of his AI chatbot Grok, complete with a new image generation tool. Unlike other image generators, which ship with guardrails, Grok will produce violent, copyrighted, and sexually suggestive images: Donald Trump bombing the Twin Towers, a Nazi Mickey Mouse, and celebrities such as Taylor Swift and Kamala Harris in lingerie. The only prohibition Grok appears to enforce is on fully nude images, since non-consensual nudity has long been banned on X. Musk, for his part, celebrated the lack of restrictions, posting: “Grok is the most fun AI in the world!”

Google’s former CEO blames remote work for the company losing its AI edge (Quartz)

When asked why startups like OpenAI are leading in AI innovation, former Google CEO Eric Schmidt said, “Google decided that work-life balance and going home early and working from home was more important than winning.” He added that “the reason startups work is because the people work like hell.” Studies on remote work have produced mixed results, with some finding that it boosts productivity and others finding the opposite. Over the past year, Google has adopted a stricter remote-work policy, tracking attendance to ensure employees follow its three-days-per-week in-office rule.

Credit: Graeme Sloan/Sipa USA via AP Images

Legal

American Bar Association Issues Formal Opinion on Use of Generative AI Tools (National Law Review)

On July 29, the American Bar Association issued Formal Opinion 512, titled “Generative Artificial Intelligence Tools.” The opinion addresses the ethical obligations lawyers must consider when using generative AI in their practice, including reviewing a tool’s privacy policies before inputting client data, obtaining client consent to use that data in such tools, communicating to clients when and where the tools are used, and disclosing whether their use will affect the fees charged. The opinion reinforces that generative AI should remain a tool: lawyers must still exercise their own judgment in drawing conclusions and presenting evidence.

Credit: Josh Edelson/AFP via Getty Images

Technology

Google’s live demo of Gemini ramps up pressure on Apple as AI reaches smartphone users (CNBC)

At the Made by Google event in California on Tuesday, product director David Citron took the stage to show off Google’s Pixel phones and the mobile capabilities of the Gemini AI assistant. Most notably, Citron gave a live demo in which he asked Gemini to check his calendar and cross-reference it against a local concert. The demo was brief and buggy, but it showed the features are real and ready to ship, in contrast to Apple’s prerecorded presentation in June. The demo puts renewed pressure on Apple as the two smartphone leaders race to integrate AI into their operating systems.

AI risks are everywhere - and now MIT is adding them all to one database (ZDNet)

On Wednesday, MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) launched the AI Risk Repository, a database of more than 700 documented AI risks. The first of its kind, the database will be updated regularly as a living resource. With AI increasingly factoring into regulatory, policy, and business decisions, CSAIL felt it was important to gather this information in one easily accessible place. The researchers identified 43 existing risk classification frameworks by reviewing academic literature and databases and consulting experts, then distilled from them more than 700 risks categorized by cause (when and why a risk occurs), domain, and subdomain.

Credit: Moor Studio via Getty Images

Science

Research AI model unexpectedly modified its own code to extend runtime (Ars Technica)

Tokyo-based AI research firm Sakana AI recently announced “The AI Scientist,” a system that attempts to conduct scientific research autonomously using large language models similar to those behind ChatGPT. During testing, however, Sakana found that the system unexpectedly began trying to modify its own code. In a blog post, the researchers wrote: “Its experiments took too long to complete, hitting our timeout limit. Instead of making its code run faster, it simply tried to modify its own code to extend the timeout period.” The behavior posed no immediate risk, but it underscores why AI systems should not run unsupervised outside isolated environments, where they could damage critical infrastructure or inadvertently create malware.

Credit: Michael Buckner/Variety

Entertainment

SAG-AFTRA Strikes Groundbreaking AI Digital Voice Replica Pact With Startup Firm Narrativ (Variety)

SAG-AFTRA has struck a deal with AI startup Narrativ to license audio voice replicas of union members for digital advertising. The union believes the agreement will set “a new standard” for ethical use of the technology, making it easier for performers to give consent and be paid fairly. Members can voluntarily add themselves to the database, connect with advertisers, and negotiate their own fees. “Not all members will be interested in taking advantage of the opportunities that licensing their digital voice replicas might offer, and that’s understandable. But for those who do, you now have a safe option,” said Duncan Crabtree-Ireland, SAG-AFTRA’s national executive director and chief negotiator.

Wyoming reporter caught using artificial intelligence to create fake quotes and stories (AP News)

Aaron Pelczar, a 40-year-old reporter new to journalism, admitted to using AI in his stories before resigning from the Cody Enterprise. He was caught by CJ Baker, a reporter at the competing Powell Tribune, who noticed that quotes in Pelczar’s stories sounded robotic and slightly uncharacteristic of the speakers. Baker’s suspicions deepened when an article about a parade concluded with an unprompted explanation of the inverted-pyramid style of journalism: “This structure ensures that the most critical information is presented first, making it easier for readers to grasp the main points quickly.” When Baker met with Pelczar, Pelczar confirmed his suspicions. The Enterprise’s publisher and editor have since apologized and vowed to take steps to prevent future incidents.


Overheard from investors on the state of AI investing:

“AI is New Coke and the entire stock market is Coca Cola Corp facing having made a massively bad investment.”

